squall/vllm
tests/quantization @ 83bdcb6ac3
Latest commit: 614aa51203 by youkaichao — [misc][cuda] use nvml to avoid accidentally cuda initialization (#6007) — 2024-06-30 20:07:34 -07:00
..
__init__.py                  [CI/Build] Move test_utils.py to tests/utils.py (#4425)                                             2024-05-13 23:50:09 +09:00
test_bitsandbytes.py         [CI/Build][REDO] Add is_quant_method_supported to control quantization test configurations (#5466)  2024-06-13 15:18:08 +00:00
test_compressed_tensors.py   [ Misc ] Refactor w8a8 to use process_weights_after_load (Simplify Weight Loading) (#5940)          2024-06-30 23:06:27 +00:00
test_configs.py              [mypy] Enable type checking for test directory (#5017)                                               2024-06-15 04:45:31 +00:00
test_fp8.py                  [ Misc ] Refactor w8a8 to use process_weights_after_load (Simplify Weight Loading) (#5940)          2024-06-30 23:06:27 +00:00
utils.py                     [misc][cuda] use nvml to avoid accidentally cuda initialization (#6007)                              2024-06-30 20:07:34 -07:00