vllm/tests/lora
Name                      Last commit date            Last commit message
__init__.py               2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
conftest.py               2024-03-20 23:25:01 +00:00  Migrate logits computation and gather to model_runner (#3233)
test_gemma.py             2024-02-28 13:03:28 -08:00  Add LoRA support for Gemma (#3050)
test_layer_variation.py   2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_layers.py            2024-03-20 23:25:01 +00:00  Migrate logits computation and gather to model_runner (#3233)
test_llama.py             2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_lora.py              2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_lora_manager.py      2024-02-14 00:55:45 +01:00  Add LoRA support for Mixtral (#2831)
test_mixtral.py           2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_punica.py            2024-03-13 12:18:25 -07:00  Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350)
test_tokenizer_group.py   2024-03-15 23:37:01 +00:00  Asynchronous tokenization (#2879)
test_utils.py             2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_worker.py            2024-03-22 01:22:17 +00:00  [Hardware][Neuron] Refactor neuron support (#3471)
utils.py                  2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
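Most of these tests trace back to the multi-LoRA support introduced in #1804. For orientation, below is a minimal sketch of the interface that work added, as shown in vLLM's multi-LoRA examples from this period; the model name and adapter path are placeholders, not taken from this listing.

```python
# Minimal multi-LoRA inference sketch (per the #1804 API); the model and
# adapter path below are placeholders for illustration only.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# enable_lora=True activates the LoRA machinery these tests exercise.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
params = SamplingParams(temperature=0.0, max_tokens=64)

# Each request can carry its own adapter; the integer ID distinguishes
# adapters loaded into the same engine.
outputs = llm.generate(
    ["Translate to SQL: how many rows are in table users?"],
    params,
    lora_request=LoRARequest("sql_adapter", 1, "/path/to/sql_lora"),
)
print(outputs[0].outputs[0].text)
```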