
# CacheFlow

## Installation

```bash
pip install psutil numpy torch transformers
pip install flash-attn  # This may take up to 10 mins.
pip install -e .
```
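To confirm the install worked, a minimal sanity check is shown below. It assumes the editable install exposes the `cacheflow` package and that a CUDA-capable GPU is visible to PyTorch:

```bash
# Verify the core dependencies import cleanly and a GPU is available.
python -c "import torch, flash_attn, cacheflow; print(torch.cuda.is_available())"
```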

## Run

```bash
ray start --head
python server.py [--tensor-parallel-size <N>]
```
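
With `--tensor-parallel-size <N>`, the model is sharded across `N` GPUs, so the Ray cluster must have at least that many GPUs available. To span multiple machines, start Ray on each node before launching the server. A sketch using Ray's standard CLI, where `<head-node-ip>` is a placeholder for your head node's address:

```bash
# On the head node (prints the address workers should connect to):
ray start --head

# On each worker node, join the cluster at Ray's default port:
ray start --address=<head-node-ip>:6379
```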