
# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
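Once downloaded, the file is a JSON list of conversations. As a minimal sketch of how you might pull benchmark prompts out of it (the schema assumed here — a `conversations` list of turns with `from` and `value` keys per entry — is the common ShareGPT layout, not something this README specifies, and `load_prompts` is a hypothetical helper, not part of the benchmark scripts):

```python
import json


def load_prompts(path, max_prompts=None):
    """Collect the first human turn of each conversation as a prompt.

    Assumes a ShareGPT-style schema: a JSON list of objects, each with a
    "conversations" list of {"from": ..., "value": ...} turns.
    """
    with open(path) as f:
        data = json.load(f)
    prompts = []
    for entry in data:
        turns = entry.get("conversations", [])
        # Skip conversations that do not start with a human turn.
        if turns and turns[0].get("from") == "human":
            prompts.append(turns[0]["value"])
        if max_prompts is not None and len(prompts) >= max_prompts:
            break
    return prompts
```

For example, `load_prompts("ShareGPT_V3_unfiltered_cleaned_split.json", max_prompts=1000)` would yield up to 1000 prompts to feed a benchmark run.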