vllm/csrc
Latest commit: e3e79e9e8a "Implement AWQ quantization support for LLaMA (#1032)" by Woosuk Kwon, 2023-09-16 00:03:37 -07:00
Co-authored-by: Robert Irvine <robert@seamlessml.com>
Co-authored-by: root <rirv938@gmail.com>
Co-authored-by: Casper <casperbh.96@gmail.com>
Co-authored-by: julian-q <julianhquevedo@gmail.com>
Name                     Last commit                                                          Last commit date
attention                [FIX] Fix Alibi implementation in PagedAttention kernel (#945)  2023-09-07 15:53:14 -07:00
quantization/awq         Implement AWQ quantization support for LLaMA (#1032)             2023-09-16 00:03:37 -07:00
activation_kernels.cu    Avoid compiling kernels for double data type (#933)              2023-09-02 14:59:47 +09:00
activation.cpp           Implement approximate GELU kernels (#828)                        2023-08-23 07:43:21 +09:00
attention.cpp            Optimize MQA Kernel (#452)                                       2023-07-14 20:06:40 -04:00
cache_kernels.cu         Avoid compiling kernels for double data type (#933)              2023-09-02 14:59:47 +09:00
cache.cpp                Memcpy kernel for flash attention (#29)                          2023-04-10 18:22:49 -07:00
dispatch_utils.h         Avoid compiling kernels for double data type (#933)              2023-09-02 14:59:47 +09:00
layernorm_kernels.cu     Avoid compiling kernels for double data type (#933)              2023-09-02 14:59:47 +09:00
layernorm.cpp            Add custom kernel for RMS normalization (#16)                    2023-04-01 00:51:22 +08:00
pos_encoding_kernels.cu  [BugFix] Implement RoPE for GPT-J (#941)                         2023-09-06 11:54:33 +09:00
pos_encoding.cpp         [BugFix] Implement RoPE for GPT-J (#941)                         2023-09-06 11:54:33 +09:00
quantization.cpp         Implement AWQ quantization support for LLaMA (#1032)             2023-09-16 00:03:37 -07:00
reduction_utils.cuh      Change the name to vLLM (#150)                                   2023-06-17 03:07:40 -07:00