vllm/vllm/lora
Latest commit 42c7f66a38 (Jiaxin Shan): [Core] Support dynamically loading Lora adapter from HuggingFace (#6234)
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
2024-07-22 15:42:40 -07:00
File                       Last commit                                                                Date
__init__.py                [Experimental] Add multi-LoRA support (#1804)                              2024-01-23 15:26:37 -08:00
fully_sharded_layers.py    [Bugfix] Add fully sharded layer for QKVParallelLinearWithLora (#5665)     2024-06-21 04:46:28 +00:00
layers.py                  [CORE] Adding support for insertion of soft-tuned prompts (#4645)          2024-07-09 13:26:36 -07:00
lora.py                    [Model] Add base class for LoRA-supported models (#5018)                   2024-06-27 16:03:04 +08:00
models.py                  [CORE] Adding support for insertion of soft-tuned prompts (#4645)          2024-07-09 13:26:36 -07:00
punica.py                  [hardware][misc] introduce platform abstraction (#6080)                    2024-07-02 20:12:22 -07:00
request.py                 [Core] Support dynamically loading Lora adapter from HuggingFace (#6234)   2024-07-22 15:42:40 -07:00
utils.py                   [Core] Support dynamically loading Lora adapter from HuggingFace (#6234)   2024-07-22 15:42:40 -07:00
worker_manager.py          [Core] Support dynamically loading Lora adapter from HuggingFace (#6234)   2024-07-22 15:42:40 -07:00