vllm/cacheflow (directory listing, last commit 2023-03-29 16:38:48 -07:00)
| Name | Last commit message | Last commit date |
|---|---|---|
| http_frontend/ | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| master/ | Add cache watermark to avoid frequent cache eviction (#11) | 2023-03-29 16:38:48 -07:00 |
| models/ | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| parallel_utils/ | Support tensor parallel (#2) | 2023-03-21 13:45:42 -07:00 |
| worker/ | Support tensor parallel (#2) | 2023-03-21 13:45:42 -07:00 |
| block.py | Support beam search & parallel generation (#7) | 2023-03-10 09:58:21 -08:00 |
| sampling_params.py | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| sequence.py | Minor | 2023-03-26 08:00:39 +00:00 |
| utils.py | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |