squall/vllm, directory csrc/quantization at commit 47f0954af0

Latest commit: [Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin (#5975), Michael Goin, 2024-07-03 17:38:00 +00:00
| Directory | Last commit | Date |
| --- | --- | --- |
| aqlm | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| awq | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| compressed_tensors | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| cutlass_w8a8 | [Bugfix] Fix compute datatype for cutlass 3.x epilogues (#5931) | 2024-06-28 17:10:34 +00:00 |
| fp8 | [Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin (#5975) | 2024-07-03 17:38:00 +00:00 |
| gptq | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| gptq_marlin | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| marlin | [Bugfix] Fix CUDA version check for mma warning suppression (#5642) | 2024-06-18 23:48:49 +00:00 |
| squeezellm | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
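Each subdirectory holds the CUDA/C++ kernels for one quantization backend (AQLM, AWQ, compressed-tensors, CUTLASS W8A8, FP8, GPTQ, GPTQ-Marlin, Marlin, SqueezeLLM). As an illustrative sketch only, and not part of this listing, these backends are normally reached from the Python side through the `quantization` argument of vLLM's `LLM` entry point; the model name below is a placeholder chosen for the example.

```python
# Sketch: selecting a quantization backend whose kernels live under
# csrc/quantization. The model id is a placeholder, not an endorsement.
from vllm import LLM, SamplingParams

# "awq" corresponds to the kernels in csrc/quantization/awq; other values
# such as "gptq", "fp8", "aqlm", and "squeezellm" map to their directories.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```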