squall / vllm — tests/quantization (at commit 7836fdcc11)

Latest commit: dd248f7675 by Dipika Sikka: [Misc] Update w4a16 compressed-tensors support to include w8a16 (#5794), 2024-06-25 19:23:35 +00:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| test_bitsandbytes.py | [CI/Build][REDO] Add is_quant_method_supported to control quantization test configurations (#5466) | 2024-06-13 15:18:08 +00:00 |
| test_compressed_tensors.py | [Misc] Update w4a16 compressed-tensors support to include w8a16 (#5794) | 2024-06-25 19:23:35 +00:00 |
| test_configs.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_fp8.py | [CI/Build][REDO] Add is_quant_method_supported to control quantization test configurations (#5466) | 2024-06-13 15:18:08 +00:00 |
| utils.py | [CI][BugFix] Flip is_quant_method_supported condition (#5577) | 2024-06-16 14:07:34 +00:00 |