vllm/vllm (package source tree, as of 2023-06-27)
Name                 Latest commit                                                                     Date
core/                [BugFix] Fix a bug in counting running sequences (#266)                          2023-06-26 13:09:02 -07:00
engine/              [Bug] Fix the OOM condition for CPU cache (#260)                                 2023-06-26 11:16:13 -07:00
entrypoints/         [Bugfix] Fix a bug in RequestOutput.finished (#202)                              2023-06-22 00:17:24 -07:00
model_executor/      expand coverage of gpt2 model loading (#271)                                     2023-06-27 06:27:41 -07:00
worker/              [Bug] Fix the OOM condition for CPU cache (#260)                                 2023-06-26 11:16:13 -07:00
__init__.py          Bump up version to 0.1.1 (#204)                                                  2023-06-22 15:33:32 +08:00
block.py             Change the name to vLLM (#150)                                                   2023-06-17 03:07:40 -07:00
config.py            Change the name to vLLM (#150)                                                   2023-06-17 03:07:40 -07:00
logger.py            Change the name to vLLM (#150)                                                   2023-06-17 03:07:40 -07:00
outputs.py           [Fix] Better error message when there is OOM during cache initialization (#203)  2023-06-22 15:30:06 +08:00
sampling_params.py   Change the name to vLLM (#150)                                                   2023-06-17 03:07:40 -07:00
sequence.py          Change the name to vLLM (#150)                                                   2023-06-17 03:07:40 -07:00
utils.py             Change the name to vLLM (#150)                                                   2023-06-17 03:07:40 -07:00
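
For orientation, here is a minimal sketch of how several of the modules listed above surface in the public API at this version (0.1.1): the LLM class lives under entrypoints/, SamplingParams in sampling_params.py, and the RequestOutput objects returned by generation in outputs.py. The model name and sampling values below are illustrative, not prescriptive.

```python
# Minimal offline-inference sketch against the vLLM 0.1.x API.
from vllm import LLM, SamplingParams  # re-exported via __init__.py

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)  # sampling_params.py

# LLM (entrypoints/) wraps the engine/, worker/, and model_executor/ stack.
llm = LLM(model="facebook/opt-125m")

# generate() returns a list of RequestOutput (outputs.py), one per prompt.
for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)
```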