vllm/vllm (latest commit: 2024-08-15 22:48:07 -07:00)
| Name | Latest commit | Last updated |
| --- | --- | --- |
| adapter_commons | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| assets | [Core][VLM] Support image embeddings as input (#6613) | 2024-08-12 16:16:06 +08:00 |
| attention | register custom op for flash attn and use from torch.ops (#7536) | 2024-08-15 22:38:56 -07:00 |
| core | [core] [3/N] multi-step args and sequence.py (#7452) | 2024-08-14 12:32:45 -07:00 |
| distributed | [Bugfix][CI] Import ray under guard (#7486) | 2024-08-13 17:12:58 -07:00 |
| engine | [Misc] Add quantization config support for speculative model. (#7343) | 2024-08-15 19:34:28 -07:00 |
| entrypoints | [Core] Use uvloop with zmq-decoupled front-end (#7570) | 2024-08-15 22:48:07 -07:00 |
| executor | [Bugfix] update neuron for version > 0.5.0 (#7175) | 2024-08-15 09:44:14 -07:00 |
| inputs | [VLM][Core] Support profiling with multiple multi-modal inputs per prompt (#7126) | 2024-08-14 17:55:42 +00:00 |
| logging | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora | [Speculative decoding] [Multi-Step] decouple should_modify_greedy_probs_inplace (#6971) | 2024-08-09 05:42:45 +00:00 |
| model_executor | [Bugfix] Fix default weight loading for scalars (#7534) | 2024-08-15 13:10:22 -07:00 |
| multimodal | [VLM][Core] Support profiling with multiple multi-modal inputs per prompt (#7126) | 2024-08-14 17:55:42 +00:00 |
| platforms | [hardware] unify usage of is_tpu to current_platform.is_tpu() (#7102) | 2024-08-13 00:16:42 -07:00 |
| plugins | [misc][plugin] add plugin system implementation (#7426) | 2024-08-13 16:24:17 -07:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| spec_decode | [Core] Add span metrics for model_forward, scheduler and sampler time (#7089) | 2024-08-09 13:55:13 -07:00 |
| transformers_utils | [mypy] Misc. typing improvements (#7417) | 2024-08-13 09:20:20 +08:00 |
| triton_utils | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036) | 2024-07-31 17:12:24 -07:00 |
| usage | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| worker | [Bugfix] update neuron for version > 0.5.0 (#7175) | 2024-08-15 09:44:14 -07:00 |
| __init__.py | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| _core_ext.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| _custom_ops.py | [TPU] Suppress import custom_ops warning (#7458) | 2024-08-13 00:30:30 -07:00 |
| _ipex_ops.py | [mypy] Enable following imports for some directories (#6681) | 2024-07-31 10:38:03 +08:00 |
| block.py | [Performance] Optimize e2e overheads: Reduce python allocations (#7162) | 2024-08-08 21:34:28 -07:00 |
| config.py | [Misc] Add quantization config support for speculative model. (#7343) | 2024-08-15 19:34:28 -07:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [Bugfix][TPU] Correct env variable for XLA cache path (#7544) | 2024-08-15 00:02:29 -07:00 |
| logger.py | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| outputs.py | [Bugfix] Fix weight loading for Chameleon when TP>1 (#7410) | 2024-08-13 05:33:41 +00:00 |
| pooling_params.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | Fix empty output when temp is too low (#2937) | 2024-08-14 05:31:44 +00:00 |
| scalar_type.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| scripts.py | [Frontend] Disallow passing model as both argument and option (#7347) | 2024-08-12 12:58:34 +00:00 |
| sequence.py | [core] [3/N] multi-step args and sequence.py (#7452) | 2024-08-14 12:32:45 -07:00 |
| tracing.py | [Core] Add span metrics for model_forward, scheduler and sampler time (#7089) | 2024-08-09 13:55:13 -07:00 |
| utils.py | [VLM][Core] Support profiling with multiple multi-modal inputs per prompt (#7126) | 2024-08-14 17:55:42 +00:00 |
| version.py | bump version to v0.5.4 (#7139) | 2024-08-05 14:39:48 -07:00 |
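For orientation, the sketch below shows how a few of the listed modules surface in vLLM's offline-inference API: `SamplingParams` comes from `sampling_params.py`, the `LLM` class lives under `entrypoints` and drives the engine code in `engine`, and `generate()` returns `RequestOutput` objects from `outputs.py`. This is a minimal sketch, not the package's internals; the model name is only a placeholder example.

```python
# Minimal offline-inference sketch using the public API assembled from the
# modules listed above: LLM (entrypoints), SamplingParams (sampling_params.py),
# RequestOutput (outputs.py). The model name is just an illustrative example.
from vllm import LLM, SamplingParams

# SamplingParams controls decoding; see sampling_params.py for the full set of fields.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM wraps the engine defined under engine/ and loads weights via model_executor/.
llm = LLM(model="facebook/opt-125m")

# generate() returns a list of RequestOutput objects (outputs.py).
outputs = llm.generate(["The capital of France is"], sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```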