# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
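
Once downloaded, the dataset file is typically passed to the benchmark scripts in this directory. The commands below are a minimal sketch of a serving benchmark run using the dataset; the model name is illustrative, and flag names can differ between vLLM versions, so check `python benchmark_serving.py --help` for the exact options:

```bash
# Terminal 1: start an OpenAI-compatible vLLM server (model is an example placeholder)
python -m vllm.entrypoints.openai.api_server \
    --model NousResearch/Meta-Llama-3-8B-Instruct

# Terminal 2: run the serving benchmark against the downloaded ShareGPT dataset
python benchmark_serving.py \
    --backend vllm \
    --model NousResearch/Meta-Llama-3-8B-Instruct \
    --dataset-name sharegpt \
    --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 1000
```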