vllm/vllm
Latest commit bbe4466fd9: [Minor] Fix typo (#2166) by JohnSaxon, co-authored by John-Saxon <zhang.xiangxuan@oushu.com>, 2023-12-17 23:28:49 -08:00
| Name                | Last commit                                                    | Date                      |
|---------------------|----------------------------------------------------------------|---------------------------|
| core/               | [FIX] Fix formatting error                                     | 2023-11-29 00:40:19 +00:00 |
| engine/             | [Minor] Fix typo (#2166)                                       | 2023-12-17 23:28:49 -08:00 |
| entrypoints/        | Add SSL arguments to API servers (#2109)                       | 2023-12-18 10:56:23 +08:00 |
| model_executor/     | [Minor] Fix a typo in .pt weight support (#2160)               | 2023-12-17 10:12:44 -08:00 |
| transformers_utils/ | [Minor] Delete Llama tokenizer warnings (#2146)                | 2023-12-16 22:05:18 -08:00 |
| worker/             | Remove dependency on CuPy (#2152)                              | 2023-12-17 01:49:07 -08:00 |
| __init__.py         | Bump up to v0.2.6 (#2157)                                      | 2023-12-17 10:34:56 -08:00 |
| block.py            | [Quality] Add code formatter and linter (#326)                 | 2023-07-03 11:31:55 -07:00 |
| config.py           | Disable CUDA graph for SqueezeLLM (#2161)                      | 2023-12-17 10:24:25 -08:00 |
| logger.py           | [Fix] Fix duplicated logging messages (#1524)                  | 2023-10-31 09:04:47 -07:00 |
| outputs.py          | docs: add description (#1553)                                  | 2023-11-03 09:14:52 -07:00 |
| py.typed            | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py  | Add a flag to include stop string in output text (#1976)       | 2023-12-15 00:45:58 -08:00 |
| sequence.py         | [FIX] Fix class naming (#1803)                                 | 2023-11-28 14:08:01 -08:00 |
| utils.py            | Optimize model execution with CUDA graph (#1926)               | 2023-12-16 21:12:08 -08:00 |