vllm/vllm/model_executor
Latest commit: cfc15a1031 "Optimize Triton MoE Kernel (#2979)" by Philipp Moritz, co-authored by Cade Daniel <edacih@gmail.com>, 2024-02-26 13:48:56 -08:00
Name                   Last commit message                                                     Last commit date
layers/                Optimize Triton MoE Kernel (#2979)                                      2024-02-26 13:48:56 -08:00
models/                Optimize GeGLU layer in Gemma (#2975)                                   2024-02-21 20:17:52 -08:00
parallel_utils/        chore(vllm): codespell for spell checking (#2820)                       2024-02-21 18:56:01 -08:00
__init__.py            Refactor Worker & InputMetadata (#1843)                                 2023-11-29 22:16:37 -08:00
input_metadata.py      Support FP8-E5M2 KV Cache (#2279)                                       2024-01-28 16:43:54 -08:00
model_loader.py        Add LoRA support for Mixtral (#2831)                                    2024-02-14 00:55:45 +01:00
sampling_metadata.py   Support per-request seed (#2514)                                        2024-02-21 11:47:00 -08:00
utils.py               TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)  2023-11-15 22:50:41 -08:00
weight_utils.py        Use revision when downloading the quantization config file (#2697)      2024-02-01 15:41:58 -08:00
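The sampling_metadata.py entry above references per-request seeds (#2514). As a rough illustration of how that feature surfaces through vLLM's public API, here is a minimal sketch using LLM and SamplingParams with a seed; the model name is an arbitrary example, and the expectation that equal seeds reproduce equal samples is an assumption about the feature's intent rather than a documented guarantee here.

```python
# Minimal sketch (assumptions noted above): per-request seeding via SamplingParams.
# "facebook/opt-125m" is just an example model; any model vLLM supports would do.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")

# temperature > 0 makes decoding stochastic; fixing `seed` per request should
# make the two identical prompts sample the same continuation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32, seed=1234)

outputs = llm.generate(["The capital of France is"] * 2, params)
for out in outputs:
    print(out.outputs[0].text)
```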