vllm/vllm
| Name | Latest commit | Date |
| --- | --- | --- |
| core | [Fix] Keep scheduler.running as deque (#2523) | 2024-01-20 22:36:09 -08:00 |
| engine | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| entrypoints | refactor complemention api for readability (#2499) | 2024-01-18 16:45:14 -08:00 |
| model_executor | Add group as an argument in broadcast ops (#2522) | 2024-01-20 16:00:26 -08:00 |
| transformers_utils | [Minor] Delete Llama tokenizer warnings (#2146) | 2023-12-16 22:05:18 -08:00 |
| worker | Simplify broadcast logic for control messages (#2501) | 2024-01-19 11:23:30 -08:00 |
| __init__.py | Bump up to v0.2.7 (#2337) | 2024-01-03 17:35:56 -08:00 |
| block.py | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| config.py | Enable CUDA graph for GPTQ & SqueezeLLM (#2318) | 2024-01-03 09:52:29 -08:00 |
| logger.py | [Fix] Fix duplicated logging messages (#1524) | 2023-10-31 09:04:47 -07:00 |
| outputs.py | docs: add description (#1553) | 2023-11-03 09:14:52 -07:00 |
| prefix.py | fix: fix some args desc (#2487) | 2024-01-18 09:41:44 -08:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Minor] Fix typo and remove unused code (#2305) | 2024-01-02 19:23:15 -08:00 |
| sequence.py | fix: fix some args desc (#2487) | 2024-01-18 09:41:44 -08:00 |
| utils.py | [Neuron] Add an option to build with neuron (#2065) | 2024-01-18 10:58:50 -08:00 |
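For orientation, the sketch below shows how the listed modules relate at the user-facing level: `entrypoints` exposes the `LLM` class, `sampling_params.py` defines `SamplingParams`, `outputs.py` defines the `RequestOutput` objects returned by generation, and `engine`, `core`, `model_executor`, and `worker` handle scheduling and execution underneath. The model name and sampling values are placeholders, not values taken from this tree.

```python
# Minimal offline-inference sketch against vLLM's public API around v0.2.7.
# Model name and sampling values below are illustrative assumptions.
from vllm import LLM, SamplingParams  # LLM from entrypoints/, SamplingParams from sampling_params.py

# LLM wraps the LLMEngine (engine/), which schedules requests (core/) and
# runs the model on workers (worker/, model_executor/).
llm = LLM(model="facebook/opt-125m")

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() returns RequestOutput objects as defined in outputs.py.
for request_output in llm.generate(["Hello, my name is"], params):
    print(request_output.outputs[0].text)
```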