vllm/csrc/quantization
Directory     Last commit                                                              Date
aqlm          AQLM CUDA support (#3287)                                                2024-04-23 13:59:33 -04:00
awq           Refactor 2 awq gemm kernels into m16nXk32 (#2723)                        2024-02-12 11:02:17 -08:00
fp8           [Kernel] Refactor FP8 kv-cache with NVIDIA float8_e4m3 support (#4535)   2024-05-09 18:04:17 -06:00
gptq          [Core] Set linear_weights directly on the layer (#3977)                  2024-04-11 16:35:51 -04:00
gptq_marlin   [Kernel] add bfloat16 support for gptq marlin kernel (#4788)             2024-05-16 09:55:29 -04:00
marlin        [Bugfix] Fix marlin kernel crash on H100 (#4218)                         2024-04-24 10:35:01 -07:00
squeezellm    Enable CUDA graph for GPTQ & SqueezeLLM (#2318)                          2024-01-03 09:52:29 -08:00