Contents of the `vllm/vllm` source directory (directory names end with `/`), with each entry's last commit:
| Name | Last commit | Date |
|------|-------------|------|
| `adapter_commons/` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `assets/` | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| `attention/` | [Bugfix] Fix block table for seqs that have prefix cache hits (#7018) | 2024-08-02 22:38:15 -07:00 |
| `core/` | [Performance] Optimize get_seqs (#7051) | 2024-08-01 18:29:52 -07:00 |
| `distributed/` | PP comm optimization: replace send with partial send + allgather (#6695) | 2024-07-31 20:15:42 -07:00 |
| `engine/` | [ Frontend ] Multiprocessing for OpenAI Server with zeromq (#6883) | 2024-08-02 18:27:28 -07:00 |
| `entrypoints/` | [Frontend] Factor out chat message parsing (#7055) | 2024-08-02 21:31:27 -07:00 |
| `executor/` | [Misc] Revive to use loopback address for driver IP (#7091) | 2024-08-02 15:50:00 -07:00 |
| `inputs/` | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| `logging/` | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| `lora/` | [LoRA] ReplicatedLinear support LoRA (#7081) | 2024-08-02 22:40:19 -07:00 |
| `model_executor/` | [Model] Refactor and decouple weight loading logic for InternVL2 model (#7067) | 2024-08-02 22:36:14 -07:00 |
| `multimodal/` | [CI/Build] Fix mypy errors (#6968) | 2024-07-30 19:49:48 -07:00 |
| `platforms/` | [Misc] Add a wrapper for torch.inference_mode (#6618) | 2024-07-21 18:43:11 -07:00 |
| `prompt_adapter/` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `spec_decode/` | [Bugfix] Fix broadcasting logic for multi_modal_kwargs (#6836) | 2024-07-31 10:38:45 +08:00 |
| `transformers_utils/` | [ Frontend ] Multiprocessing for OpenAI Server with zeromq (#6883) | 2024-08-02 18:27:28 -07:00 |
| `triton_utils/` | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036) | 2024-07-31 17:12:24 -07:00 |
| `usage/` | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| `worker/` | [misc] add a flag to enable compile (#7092) | 2024-08-02 16:18:45 -07:00 |
| `__init__.py` | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| `_core_ext.py` | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| `_custom_ops.py` | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| `_ipex_ops.py` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `block.py` | [core][misc] remove logical block (#5882) | 2024-06-27 13:34:55 -07:00 |
| `config.py` | [Models] Support Qwen model with PP (#6974) | 2024-08-01 12:40:43 -07:00 |
| `connections.py` | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| `envs.py` | [ Frontend ] Multiprocessing for OpenAI Server with zeromq (#6883) | 2024-08-02 18:27:28 -07:00 |
| `logger.py` | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| `outputs.py` | [Core] Reduce unnecessary compute when logprobs=None (#6532) | 2024-07-29 16:47:31 +00:00 |
| `pooling_params.py` | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | [Core] Reduce unnecessary compute when logprobs=None (#6532) | 2024-07-29 16:47:31 +00:00 |
| `scalar_type.py` | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| `scripts.py` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `sequence.py` | [Performance] Optimize get_seqs (#7051) | 2024-08-01 18:29:52 -07:00 |
| `tracing.py` | [ Frontend ] Multiprocessing for OpenAI Server with zeromq (#6883) | 2024-08-02 18:27:28 -07:00 |
| `utils.py` | [ Frontend ] Multiprocessing for OpenAI Server with zeromq (#6883) | 2024-08-02 18:27:28 -07:00 |
| `version.py` | Bump version to 0.5.3.post1 (#6696) | 2024-07-23 10:08:59 -07:00 |
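Several of these modules back vLLM's documented offline-inference API: `entrypoints/` exposes the `LLM` class, `sampling_params.py` defines `SamplingParams`, and `outputs.py` defines the `RequestOutput` objects returned by generation. As a rough orientation, here is a minimal sketch of that flow following the project's quickstart; the model name is a placeholder and is assumed to be available locally or via the Hugging Face Hub.

```python
# Minimal sketch of vLLM's offline-inference flow (per the project quickstart).
from vllm import LLM, SamplingParams  # re-exported via __init__.py above

llm = LLM(model="facebook/opt-125m")  # engine/ and executor/ spin up the workers
params = SamplingParams(temperature=0.8, max_tokens=32)  # sampling_params.py

# generate() returns RequestOutput objects (outputs.py), one per prompt.
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```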