# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
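
Once downloaded, the dataset can be passed to the benchmark scripts in this directory. As a minimal sketch of a throughput run (the model name below is only an example, and flag names may differ between versions, so verify them against `python benchmark_throughput.py --help`):

```bash
# Hypothetical invocation: the model is an example placeholder, and the
# flags should be checked against the script's --help output for your
# version of vLLM.
python benchmark_throughput.py \
    --backend vllm \
    --dataset ShareGPT_V3_unfiltered_cleaned_split.json \
    --model meta-llama/Llama-2-7b-hf \
    --num-prompts 1000
```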