
# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
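After downloading, it can be useful to sanity-check the file before running a benchmark. The sketch below assumes the dataset is a JSON list of records, each with an `id` and a `conversations` list of turns (`{"from": "human"/"gpt", "value": ...}`), which is the usual ShareGPT_V3 layout; the field names and the `count_turns` helper are illustrative, not part of vLLM's benchmark scripts. It parses a small inline sample in that format rather than the full download:

```python
import json

# Inline sample mimicking the assumed ShareGPT_V3 record layout.
# To check the real download, replace this with:
#   records = json.load(open("ShareGPT_V3_unfiltered_cleaned_split.json"))
sample = json.loads("""
[
  {"id": "example_0",
   "conversations": [
     {"from": "human", "value": "Hello"},
     {"from": "gpt", "value": "Hi! How can I help?"}
   ]}
]
""")


def count_turns(records):
    """Total number of conversation turns across all records."""
    return sum(len(r.get("conversations", [])) for r in records)


print(f"{len(sample)} records, {count_turns(sample)} turns")
```

A quick check like this catches a truncated or partially downloaded JSON file early, before a long benchmark run fails on it.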