Benchmarking vLLM

Downloading the ShareGPT dataset

You can download the dataset by running:

wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
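Once downloaded, the dataset file can be passed to the benchmark scripts in this directory. The invocations below are a rough sketch only: the flag names (--model, --dataset, --dataset-name, --dataset-path, --num-prompts, --backend) are assumptions based on common usage and may differ between vLLM versions, so run each script with --help to confirm the exact arguments. The model name is a placeholder.

# Offline throughput benchmark against the downloaded ShareGPT file (hypothetical flags)
python benchmark_throughput.py \
    --backend vllm \
    --model <your-model> \
    --dataset ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 1000

# Online serving benchmark; assumes an OpenAI-compatible vLLM server is already running (hypothetical flags)
python benchmark_serving.py \
    --backend vllm \
    --model <your-model> \
    --dataset-name sharegpt \
    --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 1000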