vllm/vllm/entrypoints
Latest commit: 770ec6024f by Chen Zhang, 2024-09-25 13:29:32 -07:00
[Model] Add support for the multi-modal Llama 3.2 model (#8811)
Co-authored-by: simon-mo <xmo@berkeley.edu>
Co-authored-by: Chang Su <chang.s.su@oracle.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
Name           | Last commit                                                                          | Last commit date
openai/        | [Model] Add support for the multi-modal Llama 3.2 model (#8811)                      | 2024-09-25 13:29:32 -07:00
__init__.py    | Change the name to vLLM (#150)                                                       | 2023-06-17 03:07:40 -07:00
api_server.py  | [Bugfix] Config got an unexpected keyword argument 'engine' (#8556)                  | 2024-09-20 14:00:45 -07:00
chat_utils.py  | [Model] Add support for the multi-modal Llama 3.2 model (#8811)                      | 2024-09-25 13:29:32 -07:00
launcher.py    | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157)               | 2024-09-18 13:56:58 +00:00
llm.py         | Revert "rename PromptInputs and inputs with backward compatibility (#8760)" (#8810)  | 2024-09-25 10:36:26 -07:00
logger.py      | [Frontend] Refactor prompt processing (#4028)                                        | 2024-07-22 10:13:53 -07:00
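
For orientation: llm.py defines the offline LLM entrypoint, while the openai/ directory contains the OpenAI-compatible API server. Below is a minimal sketch of the offline entrypoint; the model name and sampling settings are illustrative examples only, not taken from this listing.

    # Minimal sketch of the offline entrypoint provided by llm.py.
    # "facebook/opt-125m" is an arbitrary example model, not an assumption
    # about what this directory requires.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # generate() returns one RequestOutput per prompt.
    outputs = llm.generate(["Hello, my name is"], params)
    for out in outputs:
        print(out.outputs[0].text)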