vllm/vllm/model_executor
Latest commit: 9090bf02e7 by zhaoyang-star, "Support FP8-E5M2 KV Cache (#2279)", 2024-01-28 16:43:54 -08:00
Co-authored-by: zhaoyang <zhao.yang16@zte.com.cn>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
layers/                 Support FP8-E5M2 KV Cache (#2279)  2024-01-28 16:43:54 -08:00
models/                 Support for Stable LM 2 (#2598)  2024-01-26 12:45:19 -08:00
parallel_utils/         Implement custom all reduce kernels (#2192)  2024-01-27 12:46:35 -08:00
__init__.py             Refactor Worker & InputMetadata (#1843)  2023-11-29 22:16:37 -08:00
input_metadata.py       Support FP8-E5M2 KV Cache (#2279)  2024-01-28 16:43:54 -08:00
model_loader.py         [Experimental] Add multi-LoRA support (#1804)  2024-01-23 15:26:37 -08:00
sampling_metadata.py    Use NCCL instead of ray for control-plane communication to remove serialization overhead (#2221)  2024-01-03 11:30:22 -08:00
utils.py                TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)  2023-11-15 22:50:41 -08:00
weight_utils.py         [Bugfix] fix load local safetensors model (#2512)  2024-01-19 16:26:16 -08:00
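Since the latest commit in this directory adds FP8-E5M2 KV cache support, a minimal sketch of enabling that feature from vLLM's offline API follows. It assumes the `kv_cache_dtype` engine argument introduced by #2279 is passed through `LLM`; the model name is a placeholder, so check your vLLM version's docs for the exact option.

```python
# Minimal sketch: enabling the FP8-E5M2 KV cache from commit #2279.
# `kv_cache_dtype="fp8_e5m2"` is assumed to be the option that PR adds;
# the model below is only a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",     # placeholder; any supported model
    kv_cache_dtype="fp8_e5m2",     # store KV cache entries in FP8 (E5M2)
)

params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```

E5M2 packs each KV cache element into one byte (5 exponent bits, 2 mantissa bits), roughly halving KV cache memory versus FP16 and allowing larger batches or longer contexts, at the cost of reduced mantissa precision.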