vllm/vllm

Latest commit 540c0368b1 by Isotr0py: [Model] Initialize Fuyu-8B support (#3924)
Co-authored-by: Roger Wang <ywang@roblox.com>
Committed 2024-07-14 05:27:14 +00:00
adapter_commons [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
attention [Bugfix] Fix hard-coded value of x in context_attention_fwd (#6373) 2024-07-12 18:30:54 -07:00
core [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
distributed [distributed][misc] be consistent with pytorch for libcudart.so (#6346) 2024-07-11 19:35:17 -07:00
engine [Bugfix] Fix usage stats logging exception warning with OpenVINO (#6349) 2024-07-12 10:47:00 +08:00
entrypoints [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
executor [ci] try to add multi-node tests (#6280) 2024-07-12 21:51:48 -07:00
inputs [Doc] Move guide for multimodal model and other improvements (#6168) 2024-07-06 17:18:59 +08:00
logging [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) 2024-05-01 17:34:40 -07:00
lora [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
model_executor [Model] Initialize Fuyu-8B support (#3924) 2024-07-14 05:27:14 +00:00
multimodal [Doc] Guide for adding multi-modal plugins (#6205) 2024-07-10 14:55:34 +08:00
platforms [CI/Build] Enable mypy typing for remaining folders (#6268) 2024-07-10 22:15:55 +08:00
prompt_adapter [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
spec_decode [Speculative Decoding] Enabling bonus token in speculative decoding for KV cache based models (#5765) 2024-07-10 16:02:47 -07:00
transformers_utils [ BugFix ] Prompt Logprobs Detokenization (#6223) 2024-07-11 22:02:29 +00:00
usage [Misc] Add vLLM version getter to utils (#5098) 2024-06-13 11:21:39 -07:00
worker [ROCm][AMD] unify CUDA_VISIBLE_DEVICES usage in cuda/rocm (#6352) 2024-07-11 21:30:46 -07:00
__init__.py [Misc] Add generated git commit hash as vllm.__commit__ (#6386) 2024-07-12 22:52:15 +00:00
_custom_ops.py [Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin (#5975) 2024-07-03 17:38:00 +00:00
_ipex_ops.py [Kernel][CPU] Add Quick gelu to CPU (#5717) 2024-06-21 06:39:40 +00:00
block.py [core][misc] remove logical block (#5882) 2024-06-27 13:34:55 -07:00
config.py [ROCm][AMD] unify CUDA_VISIBLE_DEVICES usage in cuda/rocm (#6352) 2024-07-11 21:30:46 -07:00
envs.py [Misc] Add deprecation warning for beam search (#6402) 2024-07-13 11:52:22 -07:00
logger.py [Misc] add logging level env var (#5045) 2024-05-24 23:49:49 -07:00
outputs.py [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) 2024-07-01 20:10:37 -07:00
pooling_params.py [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) 2024-05-11 11:30:37 -07:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Misc] Add deprecation warning for beam search (#6402) 2024-07-13 11:52:22 -07:00
sequence.py [Speculative Decoding] Enabling bonus token in speculative decoding for KV cache based models (#5765) 2024-07-10 16:02:47 -07:00
tracing.py [Misc] Add OpenTelemetry support (#4687) 2024-06-19 01:17:03 +09:00
utils.py [ROCm][AMD] unify CUDA_VISIBLE_DEVICES usage in cuda/rocm (#6352) 2024-07-11 21:30:46 -07:00
version.py [Misc] Add generated git commit hash as vllm.__commit__ (#6386) 2024-07-12 22:52:15 +00:00
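The `version.py` and `__init__.py` entries above note that the build embeds the current git commit hash as `vllm.__commit__` (#6386). A common pattern for obtaining such a hash at build time is to shell out to `git rev-parse`; the sketch below illustrates that generic pattern only and is not vLLM's actual build script (the function name `get_commit_hash` is hypothetical):

```python
import subprocess


def get_commit_hash(default: str = "unknown") -> str:
    """Return the current git commit hash, or `default` outside a repo."""
    try:
        # `git rev-parse HEAD` prints the full 40-character commit SHA.
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Not inside a git checkout, or git is not installed.
        return default
```

A build script would typically write this value into a generated module so that installed packages report the commit they were built from, even without a git checkout present.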