vllm/tests/lora

Latest commit: 2a543d6efe by Terry, "Add LoRA support for Mixtral (#2831)", 2024-02-14 00:55:45 +01:00

Commit message:
* add mixtral lora support
* formatting
* fix incorrectly ported logic
* polish tests
* minor fixes and refactoring
* minor fixes
* formatting
* rename and remove redundant logic
* refactoring
* refactoring
* minor fix
* minor refactoring
* fix code smell
File                   Last commit                                                       Date
__init__.py            [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
conftest.py            Add LoRA support for Mixtral (#2831)                              2024-02-14 00:55:45 +01:00
test_layers.py         Remove hardcoded device="cuda" to support more devices (#2503)    2024-02-01 15:46:39 -08:00
test_llama.py          [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_lora_manager.py   Add LoRA support for Mixtral (#2831)                              2024-02-14 00:55:45 +01:00
test_lora.py           [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_mixtral.py        Add LoRA support for Mixtral (#2831)                              2024-02-14 00:55:45 +01:00
test_punica.py         [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_tokenizer.py      [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_utils.py          [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_worker.py         Remove hardcoded device="cuda" to support more devices (#2503)    2024-02-01 15:46:39 -08:00
utils.py               [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00