vllm/tests/lora/data
SangBin Cho 2e9a2227ec
[Lora] Support long context lora (#4787)
Currently we need to call the rotary embedding kernel once per LoRA, which makes it hard to serve multiple LoRAs with long context lengths. This change adds a batched rotary embedding kernel and pipes it through.
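A minimal sketch of the batched idea, assuming a flat cos/sin cache in which each scaling factor owns a contiguous block of rows; the per-token `offsets` tensor lets a single call serve tokens that belong to different LoRAs. Function and parameter names here are illustrative, not vLLM's actual kernel signature:

```python
import torch

def batched_rotary_embedding(
    positions: torch.Tensor,      # [num_tokens] position of each token
    offsets: torch.Tensor,        # [num_tokens] base row of each token's cache block
    query: torch.Tensor,          # [num_tokens, num_heads, head_dim]
    key: torch.Tensor,            # [num_tokens, num_kv_heads, head_dim]
    cos_sin_cache: torch.Tensor,  # [total_rows, head_dim], cos || sin per row
):
    # One gather serves every token, regardless of which LoRA (and hence
    # which scaling factor / cache block) it belongs to.
    cos, sin = cos_sin_cache[positions + offsets].chunk(2, dim=-1)
    cos, sin = cos.unsqueeze(-2), sin.unsqueeze(-2)  # broadcast over heads

    def rotate(x: torch.Tensor) -> torch.Tensor:
        # NeoX-style rotation on the two halves of each head dimension.
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1 * cos - x2 * sin, x2 * cos + x1 * sin], dim=-1)

    return rotate(query), rotate(key)
```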

It replaces the rotary embedding layer with one that is aware of multiple cos/sin caches, one per scaling factor.
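A sketch of how such a layer might build one cos/sin cache per scaling factor and flatten them into a single buffer, assuming linear (position-interpolation) scaling; the class and attribute names are hypothetical:

```python
import torch
import torch.nn as nn

class MultiScaleRotaryEmbedding(nn.Module):
    """Hypothetical layer holding one cos/sin cache per scaling factor."""

    def __init__(self, head_dim: int, max_positions: int, base: float,
                 scaling_factors: list[float]):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        caches, self.block_offset = [], {}
        row = 0
        for factor in sorted(scaling_factors):
            # Linear scaling: the supported context grows by `factor`,
            # while positions are compressed by 1 / factor.
            length = int(max_positions * factor)
            t = torch.arange(length).float() / factor
            freqs = torch.outer(t, inv_freq)
            caches.append(torch.cat([freqs.cos(), freqs.sin()], dim=-1))
            self.block_offset[factor] = row  # base row of this factor's block
            row += length
        # Single flat buffer: per-token offsets index into the right block.
        self.register_buffer("cos_sin_cache", torch.cat(caches, dim=0))
```

At request time, each LoRA's scaling factor would map to `block_offset[factor]`, which is broadcast into the per-token `offsets` tensor consumed by the batched kernel sketched above.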

Follow-up to https://github.com/vllm-project/vllm/pull/3095/files
2024-05-18 16:05:23 +09:00
__init__.py [Lora] Support long context lora (#4787) 2024-05-18 16:05:23 +09:00
long_context_test_data.py [Lora] Support long context lora (#4787) 2024-05-18 16:05:23 +09:00