vllm/vllm

Latest commit dacaf5a400 by wbn
Replace head_mapping params with num_kv_heads to attention kernel. (#1997)
Co-authored-by: wangguoya <wangguoya@baidu.com>
Co-authored-by: Yang Zhao <zhaoyangstar@foxmail.com>
2023-12-10 10:12:53 -08:00
Name                Last commit message                                                         Last commit date
core/               [FIX] Fix formatting error                                                  2023-11-29 00:40:19 +00:00
engine/             Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                          2023-12-07 23:16:52 -08:00
entrypoints/        Fix OpenAI server completion_tokens referenced before assignment (#1996)    2023-12-09 21:01:21 -08:00
model_executor/     Replace head_mapping params with num_kv_heads to attention kernel. (#1997)  2023-12-10 10:12:53 -08:00
transformers_utils/ Fix Baichuan tokenizer error (#1874)                                        2023-11-30 18:35:50 -08:00
worker/             Fix broken sampler tests (#1896)                                            2023-12-02 16:06:17 -08:00
__init__.py         Bump up to v0.2.3 (#1903)                                                   2023-12-03 12:27:47 -08:00
block.py            [Quality] Add code formatter and linter (#326)                              2023-07-03 11:31:55 -07:00
config.py           Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                          2023-12-07 23:16:52 -08:00
logger.py           [Fix] Fix duplicated logging messages (#1524)                               2023-10-31 09:04:47 -07:00
outputs.py          docs: add description (#1553)                                               2023-11-03 09:14:52 -07:00
py.typed            Add py.typed so consumers of vLLM can get type checking (#1509)             2023-10-30 14:50:47 -07:00
sampling_params.py  add custom server params (#1868)                                            2023-12-03 12:59:18 -08:00
sequence.py         [FIX] Fix class naming (#1803)                                              2023-11-28 14:08:01 -08:00
utils.py            Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                          2023-12-07 23:16:52 -08:00