vllm/tests/lora
Latest commit: 63e7176f26 by youkaichao, 2024-04-10 15:33:30 -07:00
[Core][Refactor] move parallel_utils into vllm/distributed (#3950)
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| conftest.py | [Core][Refactor] move parallel_utils into vllm/distributed (#3950) | 2024-04-10 15:33:30 -07:00 |
| test_baichuan.py | Enable more models to inference based on LoRA (#3382) | 2024-03-25 18:09:31 -07:00 |
| test_chatglm3.py | Enable more models to inference based on LoRA (#3382) | 2024-03-25 18:09:31 -07:00 |
| test_gemma.py | Add LoRA support for Gemma (#3050) | 2024-02-28 13:03:28 -08:00 |
| test_layer_variation.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_layers.py | Enable more models to inference based on LoRA (#3382) | 2024-03-25 18:09:31 -07:00 |
| test_llama.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_lora_checkpoints.py | [Misc] Avoid loading incorrect LoRA config (#3777) | 2024-04-09 19:47:15 -07:00 |
| test_lora_manager.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_lora.py | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| test_mixtral.py | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| test_punica.py | [Kernel] support non-zero cuda devices in punica kernels (#3636) | 2024-03-27 00:37:42 +00:00 |
| test_tokenizer_group.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_utils.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_worker.py | [Misc] [Core] Implement RFC "Augment BaseExecutor interfaces to enable hardware-agnostic speculative decoding" (#3837) | 2024-04-09 11:44:15 -07:00 |
| utils.py | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |