vllm/vllm (directory listing; latest commit 2024-04-08 18:28:36 +08:00)
Name | Latest commit | Date
---- | ------------- | ----
attention/ | [Bugfix] Add kv_scale input parameter to CPU backend (#3840) | 2024-04-04 04:33:08 +00:00
core/ | [Core] latency optimization (#3890) | 2024-04-06 19:14:06 -07:00
engine/ | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00
entrypoints/ | Add option to completion API to truncate prompt tokens (#3144) | 2024-04-05 10:15:42 -07:00
executor/ | [Bugfix] Fix Llava inference with Tensor Parallelism. (#3883) | 2024-04-07 22:54:13 +08:00
lora/ | [BugFix] Use consistent logger everywhere (#3738) | 2024-03-29 23:26:44 +00:00
model_executor/ | [Model] add minicpm (#3893) | 2024-04-08 18:28:36 +08:00
spec_decode/ | [Bugfix] Add __init__.py files for vllm/core/block/ and vllm/spec_decode/ (#3798) | 2024-04-02 12:35:31 -07:00
transformers_utils/ | [BugFix] Pass tokenizer_config to local_tokenizer_group (#3754) | 2024-04-03 20:31:46 -07:00
usage/ | usage lib get version another way (#3735) | 2024-03-29 15:57:08 -07:00
worker/ | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00
__init__.py | [Core] enable out-of-tree model register (#3871) (sketch 1 below) | 2024-04-06 17:11:41 -07:00
block.py | Add Automatic Prefix Caching (#2762) (sketch 2 below) | 2024-03-02 00:50:01 -08:00
config.py | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00
logger.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00
outputs.py | [BugFix][Frontend] Fix completion logprobs=0 error (#3731) | 2024-03-29 09:38:21 -07:00
py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00
sampling_params.py | Add option to completion API to truncate prompt tokens (#3144) (sketch 3 below) | 2024-04-05 10:15:42 -07:00
sequence.py | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00
test_utils.py | [Core][Test] move local_rank to the last arg with default value (#3711) | 2024-03-28 21:19:45 -07:00
utils.py | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00
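
Sketch 1: the __init__.py entry tracks #3871, which enabled registering model implementations that live outside the vLLM source tree. A minimal sketch, assuming the ModelRegistry.register_model API that PR exposes; MyLlamaForCausalLM is a hypothetical placeholder for a user-defined model class.

```python
# Hypothetical out-of-tree model registration (see #3871).
from vllm import ModelRegistry


class MyLlamaForCausalLM:
    """Stand-in for a real vLLM-compatible model implementation."""


# Map the architecture name (as it appears in a HF config's
# `architectures` list) to the class defined outside the vLLM tree.
ModelRegistry.register_model("MyLlamaForCausalLM", MyLlamaForCausalLM)
```

Once registered, the engine can resolve that architecture name without the model being merged into vllm/model_executor/models.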
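Sketch 2: the block.py entry tracks #2762, Automatic Prefix Caching. A hedged sketch of how it is typically switched on; the enable_prefix_caching flag and the model name are illustrative assumptions.

```python
# Prefix-caching sketch: requests that share a prompt prefix can reuse the
# KV-cache blocks computed for that prefix instead of recomputing them.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m", enable_prefix_caching=True)

shared_prefix = "Summarize the following support ticket in one sentence:\n"
prompts = [
    shared_prefix + "Ticket 1: my order arrived two weeks late.",
    shared_prefix + "Ticket 2: I was charged twice for one item.",
]
# The second prompt should hit the cached KV blocks of shared_prefix.
outputs = llm.generate(prompts, SamplingParams(max_tokens=32))
for out in outputs:
    print(out.outputs[0].text)
```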
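Sketch 3: the sampling_params.py and entrypoints/ entries both track #3144, which lets a request keep only the last k prompt tokens. A sketch under the assumption that the option surfaces as truncate_prompt_tokens on SamplingParams, as the PR title suggests.

```python
# Prompt-truncation sketch: truncate_prompt_tokens=k drops all but the
# last k tokens of the prompt before generation.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=16, truncate_prompt_tokens=128)
long_prompt = "token " * 1000  # deliberately longer than 128 tokens
outputs = llm.generate([long_prompt], params)
print(outputs[0].outputs[0].text)
```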