vllm/vllm
Name                 Last commit                                                            Last commit date
core                 [Fix] Keep scheduler.running as deque (#2523)                          2024-01-20 22:36:09 -08:00
engine               Fix https://github.com/vllm-project/vllm/issues/2540 (#2545)           2024-01-22 19:02:38 +01:00
entrypoints          migrate pydantic from v1 to v2 (#2531)                                 2024-01-21 16:05:56 -08:00
model_executor       Add qwen2 (#2495)                                                      2024-01-22 14:34:21 -08:00
transformers_utils   [Minor] Delete Llama tokenizer warnings (#2146)                        2023-12-16 22:05:18 -08:00
worker               [Speculative decoding 2/9] Multi-step worker for draft model (#2424)   2024-01-21 16:31:47 -08:00
__init__.py          Bump up to v0.2.7 (#2337)                                              2024-01-03 17:35:56 -08:00
block.py             [Experimental] Prefix Caching Support (#1669)                          2024-01-17 16:32:10 -08:00
config.py            Enable CUDA graph for GPTQ & SqueezeLLM (#2318)                        2024-01-03 09:52:29 -08:00
logger.py            [Fix] Fix duplicated logging messages (#1524)                          2023-10-31 09:04:47 -07:00
outputs.py           docs: add description (#1553)                                          2023-11-03 09:14:52 -07:00
prefix.py            fix: fix some args desc (#2487)                                        2024-01-18 09:41:44 -08:00
py.typed             Add py.typed so consumers of vLLM can get type checking (#1509)        2023-10-30 14:50:47 -07:00
sampling_params.py   [Minor] Fix typo and remove unused code (#2305)                        2024-01-02 19:23:15 -08:00
sequence.py          fix: fix some args desc (#2487)                                        2024-01-18 09:41:44 -08:00
utils.py             [Speculative decoding 2/9] Multi-step worker for draft model (#2424)   2024-01-21 16:31:47 -08:00