vllm/csrc
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention/ | Support Roberta embedding models (#9387) | 2024-11-14 21:23:29 +00:00 |
| core/ | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| cpu/ | Fix: Build error seen on Power Architecture (#10421) | 2024-11-19 09:34:57 -08:00 |
| cutlass_extensions/ | [Kernel] Initial Machete W4A8 support + Refactors (#9855) | 2024-11-18 12:59:29 -07:00 |
| mamba/ | [BugFix][Kernel] Fix Illegal memory access in causal_conv1d in H100 (#9838) | 2024-10-31 20:06:25 +00:00 |
| moe/ | [Performance][Kernel] Fused_moe Performance Improvement (#9384) | 2024-10-24 15:37:52 -07:00 |
| prepare_inputs/ | [Core] Flashinfer - Remove advance step size restriction (#10282) | 2024-11-13 16:29:32 +08:00 |
| quantization/ | [AMD] Add support for GGUF quantization on ROCm (#10254) | 2024-11-22 21:14:49 -08:00 |
| rocm/ | [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577) | 2024-09-19 17:37:57 +00:00 |
| activation_kernels.cu | [Kernel] add kernel for FATReLU (#9610) | 2024-10-24 16:18:27 +08:00 |
| cache_kernels.cu | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| cache.h | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| cuda_compat.h | [Kernel][ROCm][AMD] enable fused topk_softmax kernel for moe layer (#4927) | 2024-06-02 14:13:26 -07:00 |
| cuda_utils_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| cuda_utils.h | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |
| custom_all_reduce_test.cu | [Core][Distributed] Refactor ipc buffer init in CustomAllreduce (#10030) | 2024-11-06 23:50:47 -08:00 |
| custom_all_reduce.cu | [Core][Distributed] Refactor ipc buffer init in CustomAllreduce (#10030) | 2024-11-06 23:50:47 -08:00 |
| custom_all_reduce.cuh | [Core][Distributed] Refactor ipc buffer init in CustomAllreduce (#10030) | 2024-11-06 23:50:47 -08:00 |
| dispatch_utils.h | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| layernorm_kernels.cu | [torch.compile] Fuse RMSNorm with quant (#9138) | 2024-11-08 21:20:08 +00:00 |
| layernorm_quant_kernels.cu | [torch.compile] Fuse RMSNorm with quant (#9138) | 2024-11-08 21:20:08 +00:00 |
| ops.h | [AMD] Add support for GGUF quantization on ROCm (#10254) | 2024-11-22 21:14:49 -08:00 |
| permute_cols.cu | [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701) | 2024-09-23 13:46:26 -04:00 |
| pos_encoding_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| torch_bindings.cpp | [AMD] Add support for GGUF quantization on ROCm (#10254) | 2024-11-22 21:14:49 -08:00 |
| type_convert.cuh | [torch.compile] Fuse RMSNorm with quant (#9138) | 2024-11-08 21:20:08 +00:00 |