vllm/vllm (latest commit 2024-07-08 10:32:57 +08:00)
| Name | Last commit | Last commit date |
| --- | --- | --- |
| attention | [Bugfix] Add verbose error if scipy is missing for blocksparse attention (#5695) | 2024-07-05 10:41:01 -07:00 |
| core | [Model] Jamba support (#4115) | 2024-07-02 23:11:29 +00:00 |
| distributed | [core][distributed] support n layers % pp size != 0 (#6115) | 2024-07-03 16:40:31 -07:00 |
| engine | [vlm] Remove vision language config. (#6089) | 2024-07-03 22:14:16 +00:00 |
| entrypoints | do not exclude object field in CompletionStreamResponse (#6196) | 2024-07-08 10:32:57 +08:00 |
| executor | [Distributed][Core] Support Py39 and Py38 for PP (#6120) | 2024-07-03 17:52:29 -07:00 |
| inputs | [Doc] Move guide for multimodal model and other improvements (#6168) | 2024-07-06 17:18:59 +08:00 |
| logging | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora | [hardware][misc] introduce platform abstraction (#6080) | 2024-07-02 20:12:22 -07:00 |
| model_executor | [ Misc ] Support Fp8 via llm-compressor (#6110) | 2024-07-07 20:42:11 +00:00 |
| multimodal | [Doc] Move guide for multimodal model and other improvements (#6168) | 2024-07-06 17:18:59 +08:00 |
| platforms | [hardware][misc] introduce platform abstraction (#6080) | 2024-07-02 20:12:22 -07:00 |
| spec_decode | [vlm] Remove vision language config. (#6089) | 2024-07-03 22:14:16 +00:00 |
| transformers_utils | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| usage | [Misc] Add vLLM version getter to utils (#5098) | 2024-06-13 11:21:39 -07:00 |
| worker | [VLM] Calculate maximum number of multi-modal tokens by model (#6121) | 2024-07-04 16:37:23 -07:00 |
| __init__.py | [Misc] Add vLLM version getter to utils (#5098) | 2024-06-13 11:21:39 -07:00 |
| _custom_ops.py | [Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin (#5975) | 2024-07-03 17:38:00 +00:00 |
| _ipex_ops.py | [Kernel][CPU] Add Quick gelu to CPU (#5717) | 2024-06-21 06:39:40 +00:00 |
| block.py | [core][misc] remove logical block (#5882) | 2024-06-27 13:34:55 -07:00 |
| config.py | [core][distributed] support n layers % pp size != 0 (#6115) | 2024-07-03 16:40:31 -07:00 |
| envs.py | [Bugfix] adding chunking mechanism to fused_moe to handle large inputs (#6029) | 2024-07-01 21:08:29 +00:00 |
| logger.py | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| outputs.py | [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) | 2024-07-01 20:10:37 -07:00 |
| pooling_params.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [BugFix] Fix min_tokens behaviour for multiple eos tokens (#5849) | 2024-06-27 11:31:11 -07:00 |
| sequence.py | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| tracing.py | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| utils.py | [Hardware][Intel CPU] Adding intel openmp tunings in Docker file (#6008) | 2024-07-04 15:22:12 -07:00 |
| version.py | bump version to v0.5.1 (#6157) | 2024-07-05 12:04:51 -07:00 |
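The version.py and usage/utils entries above track the package release (bumped to v0.5.1 in #6157) and a version getter added to utils (#5098). A minimal sketch of how a consumer might confirm which release is installed, assuming the version is exposed as `vllm.__version__`:

```python
# Minimal sketch: checking the installed vLLM release.
# The listing above shows version.py bumped to v0.5.1 (#6157) and a version
# getter added to utils (#5098); the exact attribute used here
# (vllm.__version__) is an assumption about the public API surface.
import vllm

print(vllm.__version__)  # expected to print "0.5.1" for this snapshot
```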