Commit Graph

863 Commits

Author SHA1 Message Date
Kunshang Ji
96b6f475dd
Remove hardcoded device="cuda" to support more devices (#2503)
Co-authored-by: Jiang Li <jiang1.li@intel.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
2024-02-01 15:46:39 -08:00
Pernekhan Utemuratov
c410f5d020
Use revision when downloading the quantization config file (#2697)
Co-authored-by: Pernekhan Utemuratov <pernekhan@deepinfra.com>
2024-02-01 15:41:58 -08:00
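Illustrative note for #2697: the fix makes the quantization-config download honor the same revision used for the model weights. A minimal sketch of pinning a revision through the Python entrypoint; the model name and revision string are placeholder assumptions, not taken from the commit.

```python
from vllm import LLM

# Sketch only: pin both the weights and the quantization config to one revision.
# Model name and revision value are illustrative placeholders.
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",   # an AWQ-quantized checkpoint (assumed)
    quantization="awq",
    revision="main",                   # after #2697, this revision is also respected
                                       # when fetching the quantization config file
)
```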
Simon Mo
bb8c697ee0
Update README for meetup slides (#2718) 2024-02-01 14:56:53 -08:00
Simon Mo
b9e96b17de
fix python 3.8 syntax (#2716) 2024-02-01 14:00:58 -08:00
zhaoyang-star
923797fea4
Fix compile error when using rocm (#2648) 2024-02-01 09:35:09 -08:00
Fengzhe Zhou
cd9e60c76c
Add Internlm2 (#2666) 2024-02-01 09:27:40 -08:00
Robert Shaw
93b38bea5d
Refactor Prometheus and Add Request Level Metrics (#2316) 2024-01-31 14:58:07 -08:00
Philipp Moritz
d0d93b92b1
Add unit test for Mixtral MoE layer (#2677) 2024-01-31 14:34:17 -08:00
Philipp Moritz
89efcf1ce5
[Minor] Fix test_cache.py CI test failure (#2684) 2024-01-31 10:12:11 -08:00
zspo
c664b0e683
fix some bugs (#2689) 2024-01-31 10:09:23 -08:00
Tao He
d69ff0cbbb
Fixes assertion failure in prefix caching: the lora index mapping should respect prefix_len (#2688)
Signed-off-by: Tao He <sighingnow@gmail.com>
2024-01-31 18:00:13 +01:00
Zhuohan Li
1af090b57d
Bump up version to v0.3.0 (#2656) 2024-01-31 00:07:07 -08:00
Woosuk Kwon
3dad944485
Add quantized mixtral support (#2673) 2024-01-30 16:34:10 -08:00
Woosuk Kwon
105a40f53a
[Minor] Fix false warning when TP=1 (#2674) 2024-01-30 14:39:40 -08:00
Philipp Moritz
bbe9bd9684
[Minor] Fix a small typo (#2672) 2024-01-30 13:40:37 -08:00
Vladimir
4f65af0e25
Add swap_blocks unit tests (#2616) 2024-01-30 09:30:50 -08:00
Wen Sun
d79ced3292
Fix 'Actor methods cannot be called directly' when using --engine-use-ray (#2664)
* fix: engine-use-ray complaint

* fix: typo
2024-01-30 17:17:05 +01:00
Philipp Moritz
ab40644669
Fused MOE for Mixtral (#2542)
Co-authored-by: chen shen <scv119@gmail.com>
2024-01-29 22:43:37 -08:00
wangding zeng
5d60def02c
DeepseekMoE support with Fused MoE kernel (#2453)
Co-authored-by: roy <jasonailu87@gmail.com>
2024-01-29 21:19:48 -08:00
Rasmus Larsen
ea8489fce2
ROCm: Allow setting compilation target (#2581) 2024-01-29 10:52:31 -08:00
Hanzhi Zhou
1b20639a43
No repeated IPC open (#2642) 2024-01-29 10:46:29 -08:00
zhaoyang-star
b72af8f1ed
Fix error when tp > 1 (#2644)
Co-authored-by: zhaoyang-star <zhao.yang16@zte.com.cn>
2024-01-28 22:47:39 -08:00
zhaoyang-star
9090bf02e7
Support FP8-E5M2 KV Cache (#2279)
Co-authored-by: zhaoyang <zhao.yang16@zte.com.cn>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-01-28 16:43:54 -08:00
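For context on #2279, the feature is exposed as a KV-cache dtype option. A hedged sketch of enabling it from the Python API; the model name is a placeholder and the exact flag value is assumed from the commit title.

```python
from vllm import LLM, SamplingParams

# Sketch only: enable the FP8-E5M2 KV cache added in #2279.
# The value "fp8_e5m2" is assumed from the commit title, not verified against the diff.
llm = LLM(model="facebook/opt-125m", kv_cache_dtype="fp8_e5m2")
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```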
Simon Mo
7d648418b8
Update Ray version requirements (#2636) 2024-01-28 14:27:22 -08:00
Murali Andoorveedu
89be30fa7d
Small async_llm_engine refactor (#2618) 2024-01-27 23:28:37 -08:00
Woosuk Kwon
f8ecb84c02
Speed up Punica compilation (#2632) 2024-01-27 17:46:56 -08:00
Woosuk Kwon
5f036d2bcc
[Minor] Fix warning on Ray dependencies (#2630) 2024-01-27 15:43:40 -08:00
Hanzhi Zhou
380170038e
Implement custom all reduce kernels (#2192) 2024-01-27 12:46:35 -08:00
Xiang Xu
220a47627b
Use head_dim in config if exists (#2622) 2024-01-27 10:30:49 -08:00
Casper
beb89f68b4
AWQ: Up to 2.66x higher throughput (#2566) 2024-01-26 23:53:17 -08:00
Philipp Moritz
390b495ff3
Don't build punica kernels by default (#2605) 2024-01-26 15:19:19 -08:00
dakotamahan-stability
3a0e1fc070
Support for Stable LM 2 (#2598)
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-01-26 12:45:19 -08:00
Hongxia Yang
6b7de1a030
[ROCm] add support to ROCm 6.0 and MI300 (#2274) 2024-01-26 12:41:10 -08:00
Vladimir
5265631d15
use a correct device when creating OptionalCUDAGuard (#2583) 2024-01-25 23:48:17 -08:00
Junyang Lin
2832e7b9f9
fix names and license for Qwen2 (#2589) 2024-01-24 22:37:51 -08:00
Simon Mo
3a7dd7e367
Support Batch Completion in Server (#2529) 2024-01-24 17:11:07 -08:00
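Usage note for #2529: the OpenAI-compatible completions endpoint can now take a list of prompts in a single request. A sketch with the `openai` Python client; the base URL, API key, and model name are assumptions for a locally running server.

```python
from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server is running locally (placeholder URL/key).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Batch completion (#2529): several prompts in one request.
resp = client.completions.create(
    model="meta-llama/Llama-2-7b-hf",   # placeholder model name
    prompt=["Hello, my name is", "The capital of France is"],
    max_tokens=16,
)
for choice in resp.choices:
    print(choice.text)
```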
LastWhisper
223c19224b
Fix the syntax error in the doc of supported_models (#2584) 2024-01-24 11:22:51 -08:00
Federico Galatolo
f1f6cc10c7
Added include_stop_str_in_output and length_penalty parameters to OpenAI API (#2562) 2024-01-24 10:21:56 -08:00
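The two parameters added in #2562 can be forwarded to the OpenAI-compatible server as extra request fields. A sketch using the `openai` client's `extra_body`; the endpoint and model name are assumptions.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Sketch: pass the new sampling options from #2562 as extra body fields.
resp = client.completions.create(
    model="meta-llama/Llama-2-7b-hf",        # placeholder
    prompt="List three prime numbers:",
    max_tokens=32,
    extra_body={
        "include_stop_str_in_output": True,  # keep the matched stop string in the output text
        "length_penalty": 1.2,               # applies to beam-search-style length scoring
    },
)
print(resp.choices[0].text)
```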
Nikola Borisov
3209b49033
[Bugfix] fix crash if max_tokens=None (#2570) 2024-01-23 22:38:55 -08:00
Simon Mo
1e4277d2d1
lint: format all python file instead of just source code (#2567) 2024-01-23 15:53:06 -08:00
Antoni Baum
9b945daaf1
[Experimental] Add multi-LoRA support (#1804)
Co-authored-by: Chen Shen <scv119@gmail.com>
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
Co-authored-by: Avnish Narayan <avnish@anyscale.com>
2024-01-23 15:26:37 -08:00
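A minimal sketch of the multi-LoRA flow introduced by #1804, assuming the `LoRARequest` entrypoint; the adapter name, ID, and path are illustrative placeholders.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Sketch of #1804: serve one base model and attach a LoRA adapter per request.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)   # placeholder model

outputs = llm.generate(
    ["Translate to SQL: how many users signed up last week?"],
    SamplingParams(max_tokens=64),
    # Adapter name, integer ID, and local path are assumptions for illustration.
    lora_request=LoRARequest("sql-lora", 1, "/path/to/sql-lora-adapter"),
)
print(outputs[0].outputs[0].text)
```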
Erfan Al-Hossami
9c1352eb57
[Feature] Simple API token authentication and pluggable middlewares (#1106) 2024-01-23 15:13:00 -08:00
Jason Zhu
7a0b011dd5
Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553) 2024-01-22 14:47:25 -08:00
Harry Mellor
63e835cbcc
Fix progress bar and allow HTTPS in benchmark_serving.py (#2552) 2024-01-22 14:40:31 -08:00
Junyang Lin
94b5edeb53
Add qwen2 (#2495) 2024-01-22 14:34:21 -08:00
Philipp Moritz
ab7e6006d6
Fix https://github.com/vllm-project/vllm/issues/2540 (#2545) 2024-01-22 19:02:38 +01:00
Cade Daniel
18bfcdd05c
[Speculative decoding 2/9] Multi-step worker for draft model (#2424) 2024-01-21 16:31:47 -08:00
Jannis Schönleber
71d63ed72e
migrate pydantic from v1 to v2 (#2531) 2024-01-21 16:05:56 -08:00
Nick Hill
d75c40734a
[Fix] Keep scheduler.running as deque (#2523) 2024-01-20 22:36:09 -08:00
Junda Chen
5b23c3f26f
Add group as an argument in broadcast ops (#2522) 2024-01-20 16:00:26 -08:00
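For reference on #2522: the change threads an optional process group through vLLM's broadcast helpers; the underlying primitive is `torch.distributed.broadcast`. A standalone hedged sketch of that pattern, where the backend choice and group setup are assumptions and not taken from the commit.

```python
import torch
import torch.distributed as dist

# Standalone sketch of broadcasting within an explicit group, the pattern that
# #2522 exposes in vLLM's communication ops. Initialization details are placeholders.
dist.init_process_group(backend="gloo", init_method="env://")
group = dist.new_group(ranks=list(range(dist.get_world_size())))

tensor = torch.zeros(4)
if dist.get_rank() == 0:
    tensor = torch.arange(4, dtype=torch.float32)

# Passing `group=` restricts the collective to that subset of ranks.
dist.broadcast(tensor, src=0, group=group)
print(dist.get_rank(), tensor)
```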