vllm/vllm
Latest commit af7380d83b by youkaichao: [torch.compile] fix cpu broken code (#9947)
Signed-off-by: youkaichao <youkaichao@gmail.com>
2024-11-01 23:35:47 -07:00
adapter_commons [CI/Build] Update Ruff version (#8469) 2024-09-18 11:00:56 +00:00
assets [Model][VLM] Add LLaVA-Onevision model support (#8486) 2024-09-22 10:51:44 -07:00
attention [Encoder Decoder] Add flash_attn kernel support for encoder-decoder models (#9559) 2024-11-01 23:22:49 -07:00
compilation [torch.compile] use interpreter with stable api from pytorch (#9889) 2024-11-01 11:50:37 -07:00
core [Core][VLM] Add precise multi-modal placeholder tracking (#8346) 2024-11-01 16:21:10 -07:00
distributed [torch.compile] directly register custom op (#9896) 2024-10-31 21:56:09 -07:00
engine [Bugfix] PicklingError on RayTaskError (#9934) 2024-11-01 22:08:23 +00:00
entrypoints [Frontend] Use a proper chat template for VLM2Vec (#9912) 2024-11-01 14:09:07 +00:00
executor [1/N] pass the complete config from engine to executor (#9933) 2024-11-01 13:51:57 -07:00
inputs [Core][VLM] Add precise multi-modal placeholder tracking (#8346) 2024-11-01 16:21:10 -07:00
logging [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) 2024-05-01 17:34:40 -07:00
lora [Model][LoRA]LoRA support added for Qwen (#9622) 2024-10-29 04:14:07 +00:00
model_executor [Encoder Decoder] Add flash_attn kernel support for encoder-decoder models (#9559) 2024-11-01 23:22:49 -07:00
multimodal [Core][VLM] Add precise multi-modal placeholder tracking (#8346) 2024-11-01 16:21:10 -07:00
platforms [torch.compile] rework compile control with piecewise cudagraph (#9715) 2024-10-29 23:03:49 -07:00
plugins [torch.compile] rework compile control with piecewise cudagraph (#9715) 2024-10-29 23:03:49 -07:00
profiler [misc] CUDA Time Layerwise Profiler (#8337) 2024-10-17 10:36:09 -04:00
prompt_adapter [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
spec_decode [Misc] SpecDecodeWorker supports profiling (#9719) 2024-10-27 04:18:03 +00:00
transformers_utils [Bugfix] Fix edge cases for MistralTokenizer (#9625) 2024-11-01 10:33:15 -07:00
triton_utils [XPU] avoid triton import for xpu (#9440) 2024-10-24 05:14:00 +00:00
usage mypy: check additional directories (#9162) 2024-10-08 22:08:22 +00:00
v1 [1/N] pass the complete config from engine to executor (#9933) 2024-11-01 13:51:57 -07:00
vllm_flash_attn [ci][build] fix vllm-flash-attn (#8699) 2024-09-21 23:24:58 -07:00
worker [Encoder Decoder] Add flash_attn kernel support for encoder-decoder models (#9559) 2024-11-01 23:22:49 -07:00
__init__.py [Core] rename `PromptInputs` and `inputs` (#8876) 2024-09-26 20:35:15 -07:00
_custom_ops.py [Hardware][ROCM] using current_platform.is_rocm (#9642) 2024-10-28 04:07:00 +00:00
_ipex_ops.py [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) 2024-09-13 16:54:34 -07:00
beam_search.py [Frontend] re-enable multi-modality input in the new beam search implementation (#9427) 2024-10-29 11:49:47 +00:00
block.py [mypy] Enable mypy type checking for vllm/core (#7229) 2024-08-28 07:11:14 +08:00
config.py [Misc] Remove deprecated arg for cuda graph capture (#9864) 2024-10-31 07:22:07 +00:00
connections.py [core][distributed] fix zmq hang (#6759) 2024-07-24 17:37:12 -07:00
envs.py [torch.compile] rework compile control with piecewise cudagraph (#9715) 2024-10-29 23:03:49 -07:00
forward_context.py [misc] add forward context for attention (#9029) 2024-10-03 12:09:42 -07:00
logger.py [Misc] Add an env var VLLM_LOGGING_PREFIX; if set, it will be prepended to all logging messages (#9590) 2024-10-23 11:17:28 +08:00
logits_process.py [Frontend] Bad words sampling parameter (#9717) 2024-10-26 16:29:38 +00:00
outputs.py [core] move parallel sampling out from vllm core (#9302) 2024-10-22 00:31:44 +00:00
pooling_params.py [Frontend] Chat-based Embeddings API (#9759) 2024-11-01 08:13:35 +00:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Bugfix][Frontend] Reject guided decoding in multistep mode (#9892) 2024-11-01 01:09:46 +00:00
scalar_type.py [Bugfix] Fix support for dimension like integers and ScalarType (#9299) 2024-10-17 19:08:34 +00:00
scripts.py [Frontend] Add Early Validation For Chat Template / Tool Call Parser (#9151) 2024-10-08 14:31:26 +00:00
sequence.py [Core][VLM] Add precise multi-modal placeholder tracking (#8346) 2024-11-01 16:21:10 -07:00
tracing.py [misc] hide best_of from engine (#9261) 2024-10-10 21:30:44 -07:00
utils.py [torch.compile] fix cpu broken code (#9947) 2024-11-01 23:35:47 -07:00
version.py [CI/Build] use setuptools-scm to set __version__ (#4738) 2024-09-23 09:44:26 -07:00