vllm/vllm (latest commit: 2023-06-26 11:16:13 -07:00)
Name                 Last commit message                                                               Date
core/                Add comments on swap space (#154)                                                 2023-06-18 11:39:35 -07:00
engine/              [Bug] Fix the OOM condition for CPU cache (#260)                                  2023-06-26 11:16:13 -07:00
entrypoints/         [Bugfix] Fix a bug in RequestOutput.finished (#202)                               2023-06-22 00:17:24 -07:00
model_executor/      Compatible with Decapoda Research llama hf version (#251)                         2023-06-26 09:23:57 -07:00
worker/              [Bug] Fix the OOM condition for CPU cache (#260)                                  2023-06-26 11:16:13 -07:00
__init__.py          Bump up version to 0.1.1 (#204)                                                   2023-06-22 15:33:32 +08:00
block.py             Change the name to vLLM (#150)                                                    2023-06-17 03:07:40 -07:00
config.py            Change the name to vLLM (#150)                                                    2023-06-17 03:07:40 -07:00
logger.py            Change the name to vLLM (#150)                                                    2023-06-17 03:07:40 -07:00
outputs.py           [Fix] Better error message when there is OOM during cache initialization (#203)  2023-06-22 15:30:06 +08:00
sampling_params.py   Change the name to vLLM (#150)                                                    2023-06-17 03:07:40 -07:00
sequence.py          Change the name to vLLM (#150)                                                    2023-06-17 03:07:40 -07:00
utils.py             Change the name to vLLM (#150)                                                    2023-06-17 03:07:40 -07:00