vllm/cacheflow/models
__init__.py           Support tensor parallel (#2)                        2023-03-21 13:45:42 -07:00
activation.py         Optimize data movement (#20)                        2023-04-02 00:30:17 -07:00
attention.py          Replace FlashAttention with xformers (#70)          2023-05-05 02:01:08 -07:00
gpt2.py               Add support for GPT-2 (#60)                         2023-05-04 02:59:56 -07:00
gpt_neox.py           Add support for GPT-2 (#60)                         2023-05-04 02:59:56 -07:00
input_metadata.py     Replace FlashAttention with xformers (#70)          2023-05-05 02:01:08 -07:00
layernorm.py          Add custom kernel for RMS normalization (#16)       2023-04-01 00:51:22 +08:00
llama.py              Replace FlashAttention with xformers (#70)          2023-05-05 02:01:08 -07:00
memory_analyzer.py    Replace FlashAttention with xformers (#70)          2023-05-05 02:01:08 -07:00
model_utils.py        Use dtype from model config & Add Dolly V2 (#63)    2023-05-04 03:05:37 -07:00
opt.py                Replace FlashAttention with xformers (#70)          2023-05-05 02:01:08 -07:00
sample.py             Add support for GPT-2 (#60)                         2023-05-04 02:59:56 -07:00
utils.py              Support bfloat16 data type (#54)                    2023-05-03 14:09:44 -07:00