
# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
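After downloading, it can help to sanity-check the file before pointing a benchmark script at it. Below is a minimal sketch, assuming the common ShareGPT schema: a JSON list of entries, each with a `conversations` list of `{"from", "value"}` turns. The sample data and the two-turn filter are illustrative assumptions, not part of the benchmark scripts themselves.

```python
import json

# Hypothetical sample mimicking the assumed ShareGPT schema:
# a JSON list of entries, each holding a "conversations" list of turns.
sample = [
    {"conversations": [
        {"from": "human", "value": "Hello"},
        {"from": "gpt", "value": "Hi there!"},
    ]},
    {"conversations": []},  # an empty entry, to show the filter below
]

# To load the real file instead, you would use:
# with open("ShareGPT_V3_unfiltered_cleaned_split.json") as f:
#     sample = json.load(f)

# Keep only entries with at least one prompt/response pair, since a
# serving benchmark needs both a prompt and a reference completion.
usable = [e for e in sample if len(e["conversations"]) >= 2]
print(len(usable))  # → 1
```

A quick check like this catches truncated downloads or schema surprises before a long benchmark run fails partway through.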