# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
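
After the download finishes, a quick way to sanity-check the file is to load it with Python's standard `json` module. The snippet below is a minimal sketch; it assumes the export is a JSON list of records, each with a `conversations` list of `{"from": ..., "value": ...}` turns (an assumption about the dataset layout, not something documented here).

```python
# sanity_check_sharegpt.py -- minimal sketch for inspecting the downloaded dataset.
# Assumes: a JSON list of records, each with a "conversations" list of
# {"from": ..., "value": ...} turns. Adjust the path/fields if your copy differs.
import json

with open("ShareGPT_V3_unfiltered_cleaned_split.json") as f:
    dataset = json.load(f)

print(f"Total records: {len(dataset)}")

# Show the first record with at least two turns (a prompt and a reply).
for record in dataset:
    turns = record.get("conversations", [])
    if len(turns) >= 2:
        print("Prompt:", turns[0]["value"][:200])
        print("Reply: ", turns[1]["value"][:200])
        break
```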