vllm/vllm (vLLM Python package source directory; listing as of 2024-02-06)
Name                  Last change  Last commit
core/                 2024-01-23   [Experimental] Add multi-LoRA support (#1804)
engine/               2024-02-04   Remove eos tokens from output by default (#2611)
entrypoints/          2024-02-04   set&get llm internal tokenizer instead of the TokenizerGroup (#2741)
lora/                 2024-01-26   Don't build punica kernels by default (#2605)
model_executor/       2024-02-05   Add fused top-K softmax kernel for MoE (#2769)
transformers_utils/   2024-01-23   [Experimental] Add multi-LoRA support (#1804)
worker/               2024-02-01   Remove hardcoded device="cuda" to support more devices (#2503)
__init__.py           2024-01-31   Bump up version to v0.3.0 (#2656)
block.py              2024-01-17   [Experimental] Prefix Caching Support (#1669)
config.py             2024-02-06   modelscope: fix issue when model parameter is not a model id but path of the model. (#2489)
logger.py             2024-02-05   Set local logging level via env variable (#2774)
outputs.py            2024-01-23   [Experimental] Add multi-LoRA support (#1804)
prefix.py             2024-01-23   [Experimental] Add multi-LoRA support (#1804)
py.typed              2023-10-30   Add py.typed so consumers of vLLM can get type checking (#1509)
sampling_params.py    2024-01-23   [Bugfix] fix crash if max_tokens=None (#2570)
sequence.py           2024-02-01   Fix default length_penalty to 1.0 (#2667)
test_utils.py         2024-01-27   Implement custom all reduce kernels (#2192)
utils.py              2024-02-06   [Minor] More fix of test_cache.py CI test failure (#2750)
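
The files above make up the installable vllm package: __init__.py re-exports the user-facing classes (the LLM class from entrypoints/, SamplingParams from sampling_params.py, RequestOutput from outputs.py), while engine/, core/, worker/, and model_executor/ implement the serving engine underneath. As a rough illustration of how those pieces fit together, here is a minimal offline-generation sketch in the style of the v0.3.0-era API; the model name and sampling values are arbitrary examples, not anything prescribed by this listing.

# Minimal sketch against the public API re-exported by vllm/__init__.py.
# Assumptions: v0.3.0-era signatures; "facebook/opt-125m" is just a small
# example model, and the sampling values are placeholders.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")              # engine/ and worker/ are set up here
outputs = llm.generate(prompts, sampling_params)  # returns RequestOutput objects (outputs.py)

for output in outputs:
    # Each RequestOutput carries the prompt and one or more completions.
    print(output.prompt, "->", output.outputs[0].text)

The lora/ package and prefix.py reflect the experimental multi-LoRA (#1804) and prefix-caching (#1669) work noted in the listing; both layer onto this same generate() path rather than replacing it.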