vllm/vllm — directory listing (last commit: 2023-12-17)

Directories:
  core/               [FIX] Fix formatting error (2023-11-29)
  engine/             [Minor] Add more detailed explanation on quantization argument (#2145) (2023-12-17)
  entrypoints/        [Minor] Add more detailed explanation on quantization argument (#2145) (2023-12-17)
  model_executor/     Remove dependency on CuPy (#2152) (2023-12-17)
  transformers_utils/ [Minor] Delete Llama tokenizer warnings (#2146) (2023-12-16)
  worker/             Remove dependency on CuPy (#2152) (2023-12-17)

Files:
  __init__.py         Bump up to v0.2.5 (#2095) (2023-12-13)
  block.py            [Quality] Add code formatter and linter (#326) (2023-07-03)
  config.py           Temporarily enforce eager mode for GPTQ models (#2154) (2023-12-17)
  logger.py           [Fix] Fix duplicated logging messages (#1524) (2023-10-31)
  outputs.py          docs: add description (#1553) (2023-11-03)
  py.typed            Add py.typed so consumers of vLLM can get type checking (#1509) (2023-10-30)
  sampling_params.py  Add a flag to include stop string in output text (#1976) (2023-12-15)
  sequence.py         [FIX] Fix class naming (#1803) (2023-11-28)
  utils.py            Optimize model execution with CUDA graph (#1926) (2023-12-16)
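Taken together, the listing mirrors the package's layered design: entrypoints/ exposes the user-facing LLM class and API servers, engine/ turns requests into scheduled work (core/) that worker/ and model_executor/ execute on the GPU, while sampling_params.py and outputs.py define the request and result types re-exported from __init__.py. A minimal offline-inference sketch against this layout, roughly as of v0.2.5 (the model name and parameter values below are illustrative, not taken from the listing):

    from vllm import LLM, SamplingParams  # re-exports from entrypoints/ and sampling_params.py

    # Sampling behaviour is defined in sampling_params.py; include_stop_str_in_output
    # is the flag referenced by commit #1976 above.
    params = SamplingParams(
        temperature=0.8,
        top_p=0.95,
        max_tokens=64,
        stop=["\n"],
        include_stop_str_in_output=True,
    )

    # LLM (entrypoints/) builds an LLMEngine (engine/), which schedules requests
    # (core/) and runs the model via worker/ and model_executor/.
    llm = LLM(model="facebook/opt-125m")  # illustrative checkpoint

    outputs = llm.generate(["Hello, my name is"], params)
    for out in outputs:
        print(out.prompt, out.outputs[0].text)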