| Name | Latest commit | Committed |
| --- | --- | --- |
| adapter_commons/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| assets/ | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| attention/ | [Bugfix] Fix decode tokens w. CUDA graph (#6757) | 2024-07-24 22:33:56 -07:00 |
| core/ | [Misc] Small perf improvements (#6520) | 2024-07-19 12:10:56 -07:00 |
| distributed/ | [Bugfix] Add synchronize to prevent possible data race (#6788) | 2024-07-25 10:40:01 -07:00 |
| engine/ | [Bugfix] Miscalculated latency lead to time_to_first_token_seconds inaccurate. (#6686) | 2024-07-24 08:58:42 -07:00 |
| entrypoints/ | [Bugfix] Add image placeholder for OpenAI Compatible Server of MiniCPM-V (#6787) | 2024-07-25 09:42:49 -07:00 |
| executor/ | [Core] Fix ray forward_dag error mssg (#6792) | 2024-07-25 16:53:25 -07:00 |
| inputs/ | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| logging/ | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora/ | [Core] Support dynamically loading Lora adapter from HuggingFace (#6234) | 2024-07-22 15:42:40 -07:00 |
| model_executor/ | [Bugfix] Fix empty (nullptr) channelwise scales when loading wNa16 using compressed tensors (#6798) | 2024-07-25 15:05:09 -07:00 |
| multimodal/ | [Model] Adding support for MiniCPM-V (#4087) | 2024-07-24 20:59:30 -07:00 |
| platforms/ | [Misc] Add a wrapper for torch.inference_mode (#6618) | 2024-07-21 18:43:11 -07:00 |
| prompt_adapter/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| spec_decode/ | [Bugfix] Miscalculated latency lead to time_to_first_token_seconds inaccurate. (#6686) | 2024-07-24 08:58:42 -07:00 |
| transformers_utils/ | Bump transformers version for Llama 3.1 hotfix and patch Chameleon (#6690) | 2024-07-23 13:47:48 -07:00 |
| triton_utils/ | [Bugfix] Add custom Triton cache manager to resolve MoE MP issue (#6140) | 2024-07-15 10:12:47 -07:00 |
| usage/ | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| worker/ | [Core] Tweaks to model runner/input builder developer APIs (#6712) | 2024-07-24 12:17:12 -07:00 |
| __init__.py | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| _custom_ops.py | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| _ipex_ops.py | [Kernel][Attention] Separate Attention.kv_scale into k_scale and v_scale (#6081) | 2024-07-16 15:31:32 -07:00 |
| block.py | [core][misc] remove logical block (#5882) | 2024-06-27 13:34:55 -07:00 |
| config.py | [bitsandbytes]: support read bnb pre-quantized model (#5753) | 2024-07-23 23:45:09 +00:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [Core] Introduce SPMD worker execution using Ray accelerated DAG (#6032) | 2024-07-17 22:27:09 -07:00 |
| logger.py | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| outputs.py | [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) | 2024-07-01 20:10:37 -07:00 |
| pooling_params.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Misc] Remove deprecation warning for beam search (#6659) | 2024-07-23 00:21:58 +00:00 |
| scripts.py | [Frontend] split run_server into build_server and run_server (#6740) | 2024-07-24 10:36:04 -07:00 |
| sequence.py | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| tracing.py | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| utils.py | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| version.py | Bump version to 0.5.3.post1 (#6696) | 2024-07-23 10:08:59 -07:00 |