vllm/vllm
Last commit: 2024-08-01 18:44:16 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| `adapter_commons/` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `assets/` | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| `attention/` | [Kernel] Fix input for flashinfer prefill wrapper. (#7008) | 2024-08-01 18:44:16 -07:00 |
| `core/` | [Performance] Optimize get_seqs (#7051) | 2024-08-01 18:29:52 -07:00 |
| `distributed/` | PP comm optimization: replace send with partial send + allgather (#6695) | 2024-07-31 20:15:42 -07:00 |
| `engine/` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `entrypoints/` | [Bugfix] Set SamplingParams.max_tokens for OpenAI requests if not provided by user (#6954) | 2024-07-31 21:13:34 -07:00 |
| `executor/` | [Bugfix] torch.set_num_threads() in multiproc_gpu_executor (#6802) | 2024-07-26 22:15:20 -07:00 |
| `inputs/` | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| `logging/` | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| `lora/` | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036) | 2024-07-31 17:12:24 -07:00 |
| `model_executor/` | [Misc] Support attention logits soft-capping with flash-attn (#7022) | 2024-08-01 13:14:37 -07:00 |
| `multimodal/` | [CI/Build] Fix mypy errors (#6968) | 2024-07-30 19:49:48 -07:00 |
| `platforms/` | [Misc] Add a wrapper for torch.inference_mode (#6618) | 2024-07-21 18:43:11 -07:00 |
| `prompt_adapter/` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `spec_decode/` | [Bugfix] Fix broadcasting logic for multi_modal_kwargs (#6836) | 2024-07-31 10:38:45 +08:00 |
| `transformers_utils/` | [Performance] Optimize get_seqs (#7051) | 2024-08-01 18:29:52 -07:00 |
| `triton_utils/` | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036) | 2024-07-31 17:12:24 -07:00 |
| `usage/` | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| `worker/` | PP comm optimization: replace send with partial send + allgather (#6695) | 2024-07-31 20:15:42 -07:00 |
| `__init__.py` | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| `_custom_ops.py` | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036) | 2024-07-31 17:12:24 -07:00 |
| `_ipex_ops.py` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `block.py` | [core][misc] remove logical block (#5882) | 2024-06-27 13:34:55 -07:00 |
| `config.py` | [Models] Support Qwen model with PP (#6974) | 2024-08-01 12:40:43 -07:00 |
| `connections.py` | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| `envs.py` | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036) | 2024-07-31 17:12:24 -07:00 |
| `logger.py` | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| `outputs.py` | [Core] Reduce unnecessary compute when logprobs=None (#6532) | 2024-07-29 16:47:31 +00:00 |
| `pooling_params.py` | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | [Core] Reduce unnecessary compute when logprobs=None (#6532) | 2024-07-29 16:47:31 +00:00 |
| `scripts.py` | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| `sequence.py` | [Performance] Optimize get_seqs (#7051) | 2024-08-01 18:29:52 -07:00 |
| `tracing.py` | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| `utils.py` | [Bugfix] Fix broadcasting logic for multi_modal_kwargs (#6836) | 2024-07-31 10:38:45 +08:00 |
| `version.py` | Bump version to 0.5.3.post1 (#6696) | 2024-07-23 10:08:59 -07:00 |
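Several of the entries above (`entrypoints/`, `sampling_params.py`, `outputs.py`, and the re-exports in `__init__.py`) make up vLLM's offline-inference surface. A minimal usage sketch of how they fit together, assuming the package is installed and that the illustrative model name `facebook/opt-125m` is downloadable in your environment:

```python
# Minimal offline-inference sketch (assumes `pip install vllm`; the model name
# "facebook/opt-125m" is only an example and can be swapped for any local model).
from vllm import LLM, SamplingParams  # both re-exported via vllm/__init__.py

# SamplingParams (sampling_params.py) controls decoding behavior per request.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM is the offline entrypoint (vllm/entrypoints); it builds an engine from config.py.
llm = LLM(model="facebook/opt-125m")

# generate() returns RequestOutput objects defined in outputs.py.
outputs = llm.generate(["The capital of France is"], sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```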