Directory listing for vllm/vllm (latest commit: 2024-02-17 22:36:53 -08:00)
| Name | Latest commit | Date |
|------|---------------|------|
| core/ | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| engine/ | Add code-revision config argument for Hugging Face Hub (#2892) | 2024-02-17 22:36:53 -08:00 |
| entrypoints/ | multi-LoRA as extra models in OpenAI server (#2775) | 2024-02-17 12:00:48 -08:00 |
| lora/ | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| model_executor/ | Prefix Caching- fix t4 triton error (#2517) | 2024-02-16 14:17:55 -08:00 |
| transformers_utils/ | Add code-revision config argument for Hugging Face Hub (#2892) | 2024-02-17 22:36:53 -08:00 |
| worker/ | Don't use cupy NCCL for AMD backends (#2855) | 2024-02-14 12:30:44 -08:00 |
| __init__.py | Bump up to v0.3.1 (#2887) | 2024-02-16 15:05:18 -08:00 |
| block.py | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| config.py | Add code-revision config argument for Hugging Face Hub (#2892) | 2024-02-17 22:36:53 -08:00 |
| logger.py | Set local logging level via env variable (#2774) | 2024-02-05 14:26:50 -08:00 |
| outputs.py | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| prefix.py | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| sequence.py | Fix default length_penalty to 1.0 (#2667) | 2024-02-01 15:59:39 -08:00 |
| test_utils.py | Use CuPy for CUDA graphs (#2811) | 2024-02-13 11:32:06 -08:00 |
| utils.py | [Minor] More fix of test_cache.py CI test failure (#2750) | 2024-02-06 11:38:38 -08:00 |
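For orientation, here is a minimal sketch of how a few of the files listed above surface in the public API: `__init__.py` re-exports the `LLM` and `SamplingParams` entry points, `sampling_params.py` validates sampling options (including the `max_tokens=None` case fixed in #2570), and `outputs.py` defines the objects returned by generation. The model name below is only a placeholder, and the snippet assumes the standard vLLM quickstart API rather than anything specific to this commit.

```python
from vllm import LLM, SamplingParams  # re-exported via __init__.py

# Placeholder model; any Hugging Face model supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# max_tokens=None lets generation run until EOS or the model's length limit
# (handled in sampling_params.py since #2570).
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=None)

# generate() returns RequestOutput objects defined in outputs.py.
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```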