
# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
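Once downloaded, the file can be inspected before running a benchmark. Below is a minimal sketch, assuming the common ShareGPT layout: a JSON list of records, each with a `"conversations"` list of `{"from", "value"}` turns (the `load_prompts` helper and the sample file path are illustrative, not part of vLLM):

```python
import json

def load_prompts(path):
    """Return the first human turn of each conversation as a prompt.

    Assumes each record looks like:
    {"id": ..., "conversations": [{"from": "human", "value": ...}, ...]}
    """
    with open(path) as f:
        dataset = json.load(f)
    prompts = []
    for record in dataset:
        turns = record.get("conversations", [])
        if turns and turns[0].get("from") == "human":
            prompts.append(turns[0]["value"])
    return prompts

# Tiny in-memory sample mimicking the assumed schema, so the sketch
# runs without the full 600+ MB download:
sample = [{"id": "demo-1",
           "conversations": [{"from": "human", "value": "Hello!"},
                             {"from": "gpt", "value": "Hi there."}]}]
with open("/tmp/sharegpt_sample.json", "w") as f:
    json.dump(sample, f)

print(load_prompts("/tmp/sharegpt_sample.json"))  # -> ['Hello!']
```

For the real benchmark, point the helper (or the benchmark scripts' `--dataset` flag) at the downloaded `ShareGPT_V3_unfiltered_cleaned_split.json` instead of the sample file.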