Repository: squall/vllm
Path: vllm/csrc/quantization (branch: main)
Latest commit: 7c25fe45a6 by kliuae, [AMD] Add support for GGUF quantization on ROCm (#10254), 2024-11-22 21:14:49 -08:00
Directory            Last commit                                                                           Date
aqlm                 [Kernel] fix types used in aqlm and ggml kernels to support dynamo (#7596)            2024-08-16 14:00:11 -07:00
awq                  [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001)      2024-08-05 16:00:01 -04:00
compressed_tensors   [BugFix] [Kernel] Fix GPU SEGV occurring in int8 kernels (#9391)                      2024-10-17 01:34:06 +00:00
cutlass_w8a8         [Kernel] Initial Machete W4A8 support + Refactors (#9855)                             2024-11-18 12:59:29 -07:00
fp8                  [torch.compile] Fuse RMSNorm with quant (#9138)                                       2024-11-08 21:20:08 +00:00
gguf                 [AMD] Add support for GGUF quantization on ROCm (#10254)                              2024-11-22 21:14:49 -08:00
gptq                 [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047)    2024-06-09 16:23:30 -04:00
gptq_marlin          [Model][Quantization] HQQ support through Marlin kernel expansion (#9766)             2024-11-19 13:31:12 -08:00
machete              [Kernel] Initial Machete W4A8 support + Refactors (#9855)                             2024-11-18 12:59:29 -07:00
marlin               [Bugfix] Marlin 2:4 temp fix for large M dim (>256) (#10464)                          2024-11-19 19:40:33 -08:00