vllm/vllm
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| core/ | fix: deque mutated during iteration in abort_seq_group (#2371) | 2024-01-12 17:44:18 +01:00 |
| engine/ | [DOC] Add additional comments for LLMEngine and AsyncLLMEngine (#1011) | 2024-01-11 19:26:49 -08:00 |
| entrypoints/ | Allow setting fastapi root_path argument (#2341) | 2024-01-12 10:59:59 -08:00 |
| model_executor/ | fix weight loading for GQA with TP (#2379) | 2024-01-15 15:43:59 -08:00 |
| transformers_utils/ | [Minor] Delete Llama tokenizer warnings (#2146) | 2023-12-16 22:05:18 -08:00 |
| worker/ | [Minor] Optimize cuda graph memory usage (#2437) | 2024-01-14 18:40:51 +01:00 |
| __init__.py | Bump up to v0.2.7 (#2337) | 2024-01-03 17:35:56 -08:00 |
| block.py | [Quality] Add code formatter and linter (#326) | 2023-07-03 11:31:55 -07:00 |
| config.py | Enable CUDA graph for GPTQ & SqueezeLLM (#2318) | 2024-01-03 09:52:29 -08:00 |
| logger.py | [Fix] Fix duplicated logging messages (#1524) | 2023-10-31 09:04:47 -07:00 |
| outputs.py | docs: add description (#1553) | 2023-11-03 09:14:52 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Minor] Fix typo and remove unused code (#2305) | 2024-01-02 19:23:15 -08:00 |
| sequence.py | [FIX] Fix class naming (#1803) | 2023-11-28 14:08:01 -08:00 |
| utils.py | get_ip(): Fix ipv4 ipv6 dualstack (#2408) | 2024-01-10 11:39:58 -08:00 |
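For orientation, several of the entries above surface in vLLM's public Python API: engine/ provides LLMEngine and AsyncLLMEngine, sampling_params.py defines SamplingParams, and outputs.py defines the RequestOutput objects returned from generation. A minimal offline-inference sketch against the v0.2.x API follows; the model name is only an example, any Hugging Face causal LM supported by vLLM works.

```python
from vllm import LLM, SamplingParams

# LLM wraps LLMEngine (vllm/engine/) for offline batch inference;
# SamplingParams comes from vllm/sampling_params.py.
llm = LLM(model="facebook/opt-125m")  # example model
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() returns RequestOutput objects defined in vllm/outputs.py.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```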