vllm/vllm
| Name | Last commit | Date |
|---|---|---|
| core | [FIX] Fix formatting error | 2023-11-29 00:40:19 +00:00 |
| engine | Update Help Text for --gpu-memory-utilization Argument (#2183) | 2023-12-18 11:33:24 -08:00 |
| entrypoints | Add SSL arguments to API servers (#2109) | 2023-12-18 10:56:23 +08:00 |
| model_executor | Remove Sampler copy stream (#2209) | 2023-12-20 00:04:33 -08:00 |
| transformers_utils | [Minor] Delete Llama tokenizer warnings (#2146) | 2023-12-16 22:05:18 -08:00 |
| worker | Make _prepare_sample non-blocking and use pinned memory for input buffers (#2207) | 2023-12-19 16:52:46 -08:00 |
| __init__.py | Bump up to v0.2.6 (#2157) | 2023-12-17 10:34:56 -08:00 |
| block.py | [Quality] Add code formatter and linter (#326) | 2023-07-03 11:31:55 -07:00 |
| config.py | [ROCm] Fixes for GPTQ on ROCm (#2180) | 2023-12-18 10:41:04 -08:00 |
| logger.py | [Fix] Fix duplicated logging messages (#1524) | 2023-10-31 09:04:47 -07:00 |
| outputs.py | docs: add description (#1553) | 2023-11-03 09:14:52 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | Add a flag to include stop string in output text (#1976) | 2023-12-15 00:45:58 -08:00 |
| sequence.py | [FIX] Fix class naming (#1803) | 2023-11-28 14:08:01 -08:00 |
| utils.py | Optimize model execution with CUDA graph (#1926) | 2023-12-16 21:12:08 -08:00 |
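For orientation, the sketch below shows how the modules listed above surface through the package's public API for offline inference. It is a minimal example, not taken from the listing itself: the model name and sampling values are placeholders.

```python
# Minimal sketch of vLLM offline inference (circa v0.2.6).
# LLM and SamplingParams are re-exported from __init__.py;
# sampling_params.py defines SamplingParams and outputs.py defines RequestOutput.
from vllm import LLM, SamplingParams

# Sampling settings are illustrative, not prescriptive.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# engine/ and worker/ drive execution behind the LLM entrypoint;
# model_executor/ holds the model implementations, core/ the block-based scheduler.
llm = LLM(model="facebook/opt-125m")  # placeholder model

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    # Each item is a RequestOutput; print the first generated completion.
    print(output.outputs[0].text)
```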