# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
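After downloading, you may want to sanity-check the file before benchmarking. The sketch below is an assumption-based example, not part of vLLM: it assumes the ShareGPT schema of a top-level JSON list whose entries each carry a `conversations` list, and the filename produced by the `wget` command above.

```python
import json

def count_sharegpt_entries(path):
    # Assumed ShareGPT schema: a JSON list of entries,
    # each a dict containing a "conversations" list of turns.
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Count only entries that actually contain conversations.
    return sum(1 for entry in data if entry.get("conversations"))

# Example usage (after the download completes):
# print(count_sharegpt_entries("ShareGPT_V3_unfiltered_cleaned_split.json"))
```

If the count looks reasonable and no `JSONDecodeError` is raised, the download is likely intact.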