
# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
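
Once downloaded, the dataset can be fed to the benchmark scripts in this directory. As a minimal sketch (flags may differ across vLLM versions, and `facebook/opt-125m` is only an example model, not a recommendation), a throughput run might look like:

```bash
# Sample prompts from the ShareGPT dataset and measure offline throughput.
python benchmarks/benchmark_throughput.py \
    --backend vllm \
    --dataset ShareGPT_V3_unfiltered_cleaned_split.json \
    --model facebook/opt-125m \
    --num-prompts 1000
```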