vllm/vllm/engine
Name                | Last commit                                                                                               | Date
output_processor    | [Core] Optimize Async + Multi-step (#8050)                                                                | 2024-09-03 18:50:29 +00:00
__init__.py         | Change the name to vLLM (#150)                                                                            | 2023-06-17 03:07:40 -07:00
arg_utils.py        | [Hotfix][Core][VLM] Disable chunked prefill by default and prefix caching for multimodal models (#8425)   | 2024-09-12 14:06:51 -07:00
async_llm_engine.py | [misc] remove engine_use_ray (#8126)                                                                      | 2024-09-11 18:23:36 -07:00
async_timeout.py    | [Bugfix] AsyncLLMEngine hangs with asyncio.run (#5654)                                                    | 2024-06-19 13:57:12 -07:00
llm_engine.py       | [Core] Add engine option to return only deltas or final output (#7381)                                    | 2024-09-12 12:02:00 -07:00
metrics_types.py    | [MISC] Add prefix cache hit rate to metrics (#7606)                                                       | 2024-08-19 11:52:07 -07:00
metrics.py          | [MISC] Add prefix cache hit rate to metrics (#7606)                                                       | 2024-08-19 11:52:07 -07:00
protocol.py         | [Core] Logprobs support in Multi-step (#7652)                                                             | 2024-08-29 19:19:08 -07:00