vllm/vllm/model_executor/layers

Latest commit: e3e79e9e8a by Woosuk Kwon, 2023-09-16 00:03:37 -07:00
    Implement AWQ quantization support for LLaMA (#1032)
    Co-authored-by: Robert Irvine <robert@seamlessml.com>
    Co-authored-by: root <rirv938@gmail.com>
    Co-authored-by: Casper <casperbh.96@gmail.com>
    Co-authored-by: julian-q <julianhquevedo@gmail.com>
Name                Last commit message                                    Last commit date
quantized_linear/   Implement AWQ quantization support for LLaMA (#1032)   2023-09-16 00:03:37 -07:00
__init__.py         Change the name to vLLM (#150)                         2023-06-17 03:07:40 -07:00
activation.py       Implement approximate GELU kernels (#828)              2023-08-23 07:43:21 +09:00
attention.py        Use FP32 in RoPE initialization (#1004)                2023-09-11 00:26:35 -07:00
layernorm.py        Change the name to vLLM (#150)                         2023-06-17 03:07:40 -07:00
sampler.py          [FIX] Minor bug fixes (#1035)                          2023-09-13 16:38:12 -07:00