| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `adapter_commons` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `attention` | [Bugfix] Fix hard-coded value of x in context_attention_fwd (#6373) | 2024-07-12 18:30:54 -07:00 |
| `core` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `distributed` | [distributed][misc] be consistent with pytorch for libcudart.so (#6346) | 2024-07-11 19:35:17 -07:00 |
| `engine` | [Bugfix] Fix usage stats logging exception warning with OpenVINO (#6349) | 2024-07-12 10:47:00 +08:00 |
| `entrypoints` | [BugFix] BatchResponseData body should be optional (#6345) | 2024-07-15 04:06:09 +00:00 |
| `executor` | [ci] try to add multi-node tests (#6280) | 2024-07-12 21:51:48 -07:00 |
| `inputs` | [Doc] Move guide for multimodal model and other improvements (#6168) | 2024-07-06 17:18:59 +08:00 |
| `logging` | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| `lora` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `model_executor` | [core][distributed] simplify code to support pipeline parallel (#6406) | 2024-07-14 21:20:51 -07:00 |
| `multimodal` | [Doc] Guide for adding multi-modal plugins (#6205) | 2024-07-10 14:55:34 +08:00 |
| `platforms` | [CI/Build] Enable mypy typing for remaining folders (#6268) | 2024-07-10 22:15:55 +08:00 |
| `prompt_adapter` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `spec_decode` | [Speculative Decoding] Enabling bonus token in speculative decoding for KV cache based models (#5765) | 2024-07-10 16:02:47 -07:00 |
| `transformers_utils` | [ BugFix ] Prompt Logprobs Detokenization (#6223) | 2024-07-11 22:02:29 +00:00 |
| `usage` | Report usage for beam search (#6404) | 2024-07-14 19:37:35 -07:00 |
| `worker` | [ROCm][AMD] unify CUDA_VISIBLE_DEVICES usage in cuda/rocm (#6352) | 2024-07-11 21:30:46 -07:00 |
| `__init__.py` | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
| `_custom_ops.py` | [Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin (#5975) | 2024-07-03 17:38:00 +00:00 |
| `_ipex_ops.py` | [Kernel][CPU] Add Quick gelu to CPU (#5717) | 2024-06-21 06:39:40 +00:00 |
| `block.py` | [core][misc] remove logical block (#5882) | 2024-06-27 13:34:55 -07:00 |
| `config.py` | [ROCm][AMD] unify CUDA_VISIBLE_DEVICES usage in cuda/rocm (#6352) | 2024-07-11 21:30:46 -07:00 |
| `envs.py` | [Doc] add env docs for flashinfer backend (#6437) | 2024-07-14 21:16:51 -07:00 |
| `logger.py` | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| `outputs.py` | [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) | 2024-07-01 20:10:37 -07:00 |
| `pooling_params.py` | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | Report usage for beam search (#6404) | 2024-07-14 19:37:35 -07:00 |
| `scripts.py` | [Feature] vLLM CLI (#5090) | 2024-07-14 15:36:43 -07:00 |
| `sequence.py` | [Speculative Decoding] Enabling bonus token in speculative decoding for KV cache based models (#5765) | 2024-07-10 16:02:47 -07:00 |
| `tracing.py` | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| `utils.py` | [ROCm][AMD] unify CUDA_VISIBLE_DEVICES usage in cuda/rocm (#6352) | 2024-07-11 21:30:46 -07:00 |
| `version.py` | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
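
One entry above lends itself to a concrete illustration: per #6386 (touching `__init__.py` and `version.py`), vLLM exposes the generated git commit hash as `vllm.__commit__` alongside the existing `vllm.__version__`. A minimal sketch of how a consumer might read both, assuming an installed build that includes that change (the attribute name is taken from the commit title, not from separate API docs):

```python
# Minimal sketch, assuming a vLLM build that includes #6386, which adds
# vllm.__commit__ next to the long-standing vllm.__version__.
import vllm

print("version:", vllm.__version__)  # package version from vllm/version.py
print("commit:", vllm.__commit__)    # git commit hash generated at build time
```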