vllm/vllm (last commit: 2024-11-09 10:09:48 +00:00)
adapter_commons [CI/Build] Update Ruff version (#8469) 2024-09-18 11:00:56 +00:00
assets [CI/Build] Update CPU tests to include all "standard" tests (#5481) 2024-11-08 23:30:04 +08:00
attention [Feature] [Spec decode]: Combine chunked prefill with speculative decoding (#9291) 2024-11-07 08:15:14 -08:00
compilation [Bugfix] SymIntArrayRef expected to contain concrete integers (#10170) 2024-11-08 14:44:18 -08:00
core [Feature] [Spec decode]: Combine chunked prefill with speculative decoding (#9291) 2024-11-07 08:15:14 -08:00
distributed [Bugfix][XPU] Fix xpu tp by introducing XpuCommunicator (#10144) 2024-11-08 09:41:03 +00:00
engine [Feature] [Spec decode]: Combine chunked prefill with speculative decoding (#9291) 2024-11-07 08:15:14 -08:00
entrypoints bugfix: fix the bug that stream generate not work (#2756) 2024-11-09 10:09:48 +00:00
executor [Misc] Fix ImportError causing by triton (#9493) 2024-11-08 05:08:51 +00:00
inputs [Misc] Consolidate ModelConfig code related to HF config (#10104) 2024-11-07 06:00:21 +00:00
logging_utils Rename vllm.logging to vllm.logging_utils (#10134) 2024-11-08 20:53:24 +00:00
lora [CI/Build] drop support for Python 3.8 EOL (#8464) 2024-11-06 07:11:55 +00:00
model_executor [bugfix] fix broken tests of mlp speculator (#10177) 2024-11-09 00:04:50 -08:00
multimodal [0/N] Rename MultiModalInputs to MultiModalKwargs (#10040) 2024-11-09 11:31:02 +08:00
platforms [Hardware][Intel-Gaudi] Add Intel Gaudi (HPU) inference backend (#6143) 2024-11-06 01:09:10 -08:00
plugins [5/N] pass the whole config to model (#9983) 2024-11-09 14:17:28 +08:00
profiler [misc] CUDA Time Layerwise Profiler (#8337) 2024-10-17 10:36:09 -04:00
prompt_adapter [CI/Build] drop support for Python 3.8 EOL (#8464) 2024-11-06 07:11:55 +00:00
spec_decode [0/N] Rename MultiModalInputs to MultiModalKwargs (#10040) 2024-11-09 11:31:02 +08:00
transformers_utils Fix edge case Mistral tokenizer (#10152) 2024-11-08 15:42:27 +00:00
triton_utils [XPU] avoid triton import for xpu (#9440) 2024-10-24 05:14:00 +00:00
usage mypy: check additional directories (#9162) 2024-10-08 22:08:22 +00:00
v1 [V1] Fix non-cudagraph op name (#10166) 2024-11-08 10:23:04 -08:00
vllm_flash_attn [ci][build] fix vllm-flash-attn (#8699) 2024-09-21 23:24:58 -07:00
worker [0/N] Rename MultiModalInputs to MultiModalKwargs (#10040) 2024-11-09 11:31:02 +08:00
__init__.py [Core] rename PromptInputs and inputs (#8876) 2024-09-26 20:35:15 -07:00
_custom_ops.py [Kernel][Triton] Add Triton implementation for scaled_mm_triton to support fp8 and int8 SmoothQuant, symmetric case (#9857) 2024-11-08 19:59:22 -05:00
_ipex_ops.py [Misc][XPU] Upgrade to Pytorch 2.5 for xpu backend (#9823) 2024-11-06 17:29:03 -08:00
beam_search.py [Frontend] re-enable multi-modality input in the new beam search implementation (#9427) 2024-10-29 11:49:47 +00:00
block.py [mypy] Enable mypy type checking for vllm/core (#7229) 2024-08-28 07:11:14 +08:00
config.py Disable spec-decode + chunked-prefill for draft models with tensor parallelism > 1 (#10136) 2024-11-08 15:56:18 +00:00
connections.py [core][distributed] fix zmq hang (#6759) 2024-07-24 17:37:12 -07:00
envs.py [torch.compile] Fuse RMSNorm with quant (#9138) 2024-11-08 21:20:08 +00:00
forward_context.py [misc] add forward context for attention (#9029) 2024-10-03 12:09:42 -07:00
logger.py Rename vllm.logging to vllm.logging_utils (#10134) 2024-11-08 20:53:24 +00:00
logits_process.py [Frontend] Bad words sampling parameter (#9717) 2024-10-26 16:29:38 +00:00
outputs.py [core] move parallel sampling out from vllm core (#9302) 2024-10-22 00:31:44 +00:00
pooling_params.py [Frontend] Chat-based Embeddings API (#9759) 2024-11-01 08:13:35 +00:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Bugfix][Frontend] Reject guided decoding in multistep mode (#9892) 2024-11-01 01:09:46 +00:00
scalar_type.py [Bugfix] Fix support for dimension like integers and ScalarType (#9299) 2024-10-17 19:08:34 +00:00
scripts.py [Frontend] Add Early Validation For Chat Template / Tool Call Parser (#9151) 2024-10-08 14:31:26 +00:00
sequence.py [Core] Make encoder-decoder inputs a nested structure to be more composable (#9604) 2024-11-05 10:07:31 +08:00
tracing.py [misc] hide best_of from engine (#9261) 2024-10-10 21:30:44 -07:00
utils.py [Misc] Consolidate ModelConfig code related to HF config (#10104) 2024-11-07 06:00:21 +00:00
version.py [CI/Build] use setuptools-scm to set __version__ (#4738) 2024-09-23 09:44:26 -07:00