vllm/vllm

Latest commit: 7eacffd951 "Migrate InternLMForCausalLM to LlamaForCausalLM" (#2860) by Philipp Moritz, co-authored by Roy <jasonailu87@gmail.com>, 2024-02-13 17:12:05 -08:00
| Name                | Latest commit message                                                     | Commit date                |
|---------------------|---------------------------------------------------------------------------|----------------------------|
| core/               | [Experimental] Add multi-LoRA support (#1804)                             | 2024-01-23 15:26:37 -08:00 |
| engine/             | Use CuPy for CUDA graphs (#2811)                                          | 2024-02-13 11:32:06 -08:00 |
| entrypoints/        | set&get llm internal tokenizer instead of the TokenizerGroup (#2741)      | 2024-02-04 14:25:36 -08:00 |
| lora/               | Add LoRA support for Mixtral (#2831)                                      | 2024-02-14 00:55:45 +01:00 |
| model_executor/     | Migrate InternLMForCausalLM to LlamaForCausalLM (#2860)                   | 2024-02-13 17:12:05 -08:00 |
| transformers_utils/ | Remove Yi model definition, please use LlamaForCausalLM instead (#2854)   | 2024-02-13 14:22:22 -08:00 |
| worker/             | Add LoRA support for Mixtral (#2831)                                      | 2024-02-14 00:55:45 +01:00 |
| __init__.py         | Bump up version to v0.3.0 (#2656)                                         | 2024-01-31 00:07:07 -08:00 |
| block.py            | [Experimental] Prefix Caching Support (#1669)                             | 2024-01-17 16:32:10 -08:00 |
| config.py           | Disable custom all reduce by default (#2808)                              | 2024-02-08 09:58:03 -08:00 |
| logger.py           | Set local logging level via env variable (#2774)                          | 2024-02-05 14:26:50 -08:00 |
| outputs.py          | [Experimental] Add multi-LoRA support (#1804)                             | 2024-01-23 15:26:37 -08:00 |
| prefix.py           | [Experimental] Add multi-LoRA support (#1804)                             | 2024-01-23 15:26:37 -08:00 |
| py.typed            | Add py.typed so consumers of vLLM can get type checking (#1509)           | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py  | [Bugfix] fix crash if max_tokens=None (#2570)                             | 2024-01-23 22:38:55 -08:00 |
| sequence.py         | Fix default length_penalty to 1.0 (#2667)                                 | 2024-02-01 15:59:39 -08:00 |
| test_utils.py       | Use CuPy for CUDA graphs (#2811)                                          | 2024-02-13 11:32:06 -08:00 |
| utils.py            | [Minor] More fix of test_cache.py CI test failure (#2750)                 | 2024-02-06 11:38:38 -08:00 |
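Several of the modules listed above make up the package's public offline-inference surface: the `LLM` class in entrypoints/, `SamplingParams` in sampling_params.py, and the `RequestOutput` objects defined in outputs.py. The sketch below shows how they fit together, assuming the documented vLLM v0.3.0 API; the model name is only an illustrative placeholder.

```python
# Minimal offline-inference sketch against vLLM's public API (as of v0.3.0).
# LLM is re-exported from vllm/entrypoints, SamplingParams from
# vllm/sampling_params.py, and generate() returns RequestOutput objects
# defined in vllm/outputs.py.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]

# max_tokens can also be left as None to generate until EOS or the context
# limit (the case fixed by #2570 above).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # placeholder Hugging Face model id
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput carries the prompt plus one or more completions.
    print(output.prompt, output.outputs[0].text)
```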