
# Benchmarking vLLM

This directory contains the benchmarking scripts `benchmark_latency.py`, `benchmark_serving.py`, and `benchmark_throughput.py`, along with `launch_tgi_server.sh`.

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
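
After downloading, a quick sanity check can confirm the file parsed correctly. The sketch below is a minimal standalone Python snippet, not part of this repository; it assumes the public ShareGPT layout (a JSON list of records, each holding a `conversations` list of turns with `"from"` and `"value"` keys), which is the structure the serving and throughput benchmarks sample prompts from.

```python
import json

# Assumes the wget command above saved the file in the current directory.
DATASET_PATH = "ShareGPT_V3_unfiltered_cleaned_split.json"

with open(DATASET_PATH) as f:
    data = json.load(f)

print(f"Loaded {len(data)} conversations")

# Each record is expected to contain a "conversations" list of turns,
# each turn a dict with "from" (speaker) and "value" (text) keys.
sample = data[0].get("conversations", [])
for turn in sample[:2]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```

If the script prints a conversation count and a couple of truncated turns, the dataset is ready to pass to the benchmark scripts.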