# Benchmarking vLLM
## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
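Once downloaded, it can help to sanity-check the file before benchmarking. The sketch below assumes the common ShareGPT schema (a JSON list of records, each with an `"id"` and a `"conversations"` list of `{"from", "value"}` turns) and uses a tiny inline sample rather than the real file; the filtering rule (at least one prompt/response pair) is illustrative, not necessarily what every benchmark script does.

```python
import json

# Tiny inline sample mimicking the assumed ShareGPT schema:
# a list of {"id": ..., "conversations": [{"from": ..., "value": ...}]}.
sample = json.loads("""
[
  {"id": "a1", "conversations": [
     {"from": "human", "value": "Hello"},
     {"from": "gpt", "value": "Hi there"}]},
  {"id": "a2", "conversations": []}
]
""")

# Keep only records with at least one prompt/response pair
# (two turns); empty or single-turn records are not useful
# as benchmark requests.
usable = [rec for rec in sample if len(rec["conversations"]) >= 2]
print(len(usable))  # 1
```

Swapping the inline `sample` for `json.load(open("ShareGPT_V3_unfiltered_cleaned_split.json"))` gives a quick count of usable conversations in the real dataset.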
## Downloading the ShareGPT4V dataset

The JSON file references several image datasets (COCO, LLaVA, etc.). The benchmark scripts skip any datapoint whose referenced image is missing.

```bash
wget https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/resolve/main/sharegpt4v_instruct_gpt4-vision_cap100k.json

mkdir -p coco
wget http://images.cocodataset.org/zips/train2017.zip -O coco/train2017.zip
unzip coco/train2017.zip -d coco/
```
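The skip-if-missing behavior described above amounts to an existence check on each referenced image path. The sketch below illustrates that idea with made-up record fields and paths (they are not vLLM's exact schema), using a temporary directory in place of an extracted dataset:

```python
import os
import tempfile

# Illustrative records: each datapoint names an image file.
records = [
    {"id": "r1", "image": "coco/train2017/000000000009.jpg"},
    {"id": "r2", "image": "coco/train2017/does_not_exist.jpg"},
]

with tempfile.TemporaryDirectory() as root:
    # Simulate an extracted dataset: create only the first image.
    img_path = os.path.join(root, records[0]["image"])
    os.makedirs(os.path.dirname(img_path), exist_ok=True)
    open(img_path, "wb").close()

    # Keep only datapoints whose referenced image exists on disk,
    # mirroring how missing images are ignored.
    kept = [r for r in records
            if os.path.exists(os.path.join(root, r["image"]))]
    print([r["id"] for r in kept])  # ['r1']
```

Running a similar check against your extracted `coco/` directory before benchmarking shows how many ShareGPT4V datapoints will actually be used.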