Inspired by #5146, this PR improves the FP8 quantize kernel by vectorizing data transfers to better utilize memory bandwidth. Microbenchmarks show that the improved kernel achieves a 1.0x-1.5x speedup, especially when the hidden size is large. In detail, we applied 3 optimizations:

- Use the inverted scale so that most divisions become multiplications.
- Unroll the loop 4 times to improve ILP.
- Use vectorized loads/stores of width 4 to transfer data between HBM and SRAM.
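The three optimizations can be illustrated with a minimal host-side C++ sketch. This is not the PR's CUDA kernel: `quantize_one`, `quantize_fp8`, and the float-typed output are hypothetical stand-ins (real FP8 conversion produces an 8-bit value, and the real kernel uses `float4`-style vector loads on the GPU), but the inverted scale, the 4x unroll, and the 4-wide processing pattern mirror the changes described above.

```cpp
#include <algorithm>
#include <vector>

// Max representable magnitude of FP8 E4M3.
constexpr float kFp8E4m3Max = 448.0f;

// Scalar stand-in for float -> FP8 conversion: scale and clamp to the FP8
// range. Note the multiply by a precomputed 1/scale instead of a division.
inline float quantize_one(float x, float inv_scale) {
    float v = x * inv_scale;
    return std::max(-kFp8E4m3Max, std::min(v, kFp8E4m3Max));
}

void quantize_fp8(const float* in, float* out, int n, float scale) {
    const float inv_scale = 1.0f / scale;  // invert once, reuse everywhere
    int i = 0;
    // Process 4 elements per iteration: mirrors the 4x unroll plus the
    // 4-wide vectorized loads/stores of the CUDA kernel.
    for (; i + 4 <= n; i += 4) {
        out[i + 0] = quantize_one(in[i + 0], inv_scale);
        out[i + 1] = quantize_one(in[i + 1], inv_scale);
        out[i + 2] = quantize_one(in[i + 2], inv_scale);
        out[i + 3] = quantize_one(in[i + 3], inv_scale);
    }
    // Scalar tail for sizes not divisible by 4.
    for (; i < n; ++i) out[i] = quantize_one(in[i], inv_scale);
}
```

The unrolled body gives the compiler independent operations to schedule (better ILP), and on the GPU the 4-wide accesses coalesce into wider memory transactions, which is where the bandwidth win comes from.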