vllm/vllm/model_executor/parallel_utils
Contents of this folder:

- tensor_parallel/
- __init__.py
- parallel_state.py
- README.md

The files in this folder are ported from Megatron-LM. We keep only the code that is used for inference.
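For context, below is a minimal sketch of the Megatron-LM-style tensor-parallel linear layers that the `tensor_parallel` directory implements: `ColumnParallelLinear` splits the weight along the output dimension, and `RowParallelLinear` splits it along the input dimension, with an all-reduce to combine the partial results. The class names mirror Megatron-LM's, but the constructor signatures and initialization here are illustrative assumptions, not vLLM's actual API.

```python
# Illustrative sketch only: simplified versions of the Megatron-LM-style
# layers ported here. Signatures are assumptions, not vLLM's actual API.
import torch
import torch.nn as nn
import torch.distributed as dist


class ColumnParallelLinear(nn.Module):
    """Weight is split column-wise (along out_features) across TP ranks.

    The input is replicated on every rank; each rank produces its own
    slice of the output features.
    """

    def __init__(self, in_features: int, out_features: int, tp_size: int):
        super().__init__()
        assert out_features % tp_size == 0
        # Each rank stores only out_features // tp_size rows of the weight.
        self.weight = nn.Parameter(
            torch.empty(out_features // tp_size, in_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output is partitioned along the last dimension; a following
        # RowParallelLinear can consume it without any communication.
        return x @ self.weight.t()


class RowParallelLinear(nn.Module):
    """Weight is split row-wise (along in_features) across TP ranks.

    Each rank computes a partial matmul over its input slice; an
    all-reduce sums the partial results into the full output.
    """

    def __init__(self, in_features: int, out_features: int, tp_size: int):
        super().__init__()
        assert in_features % tp_size == 0
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features // tp_size))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x_partitioned: torch.Tensor) -> torch.Tensor:
        y = x_partitioned @ self.weight.t()
        # Sum the partial outputs from all tensor-parallel ranks.
        dist.all_reduce(y)
        return y
```

Pairing the two this way (for example, column-parallel for an MLP's first projection and row-parallel for the second) keeps the intermediate activations partitioned and requires only one all-reduce per layer at inference time.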