| Name | Latest commit message | Commit date |
| --- | --- | --- |
| attention | [Core] Sliding window for block manager v2 (#4545) | 2024-05-28 11:07:07 +09:00 |
| core | [Core] Cross-attention KV caching and memory-management (towards eventual encoder/decoder model support) (#4837) | 2024-05-29 16:09:13 +00:00 |
| distributed | [Core][Distributed] improve p2p access check (#4992) | 2024-05-29 11:29:07 +00:00 |
| engine | [Bugfix] Remove the last EOS token unless explicitly specified (#5077) | 2024-05-28 17:15:35 -07:00 |
| entrypoints | [Bugfix] logprobs is not compatible with the OpenAI spec #4795 (#5031) | 2024-05-29 16:13:22 -07:00 |
| executor | [Core] Eliminate parallel worker per-step task scheduling overhead (#4894) | 2024-05-23 06:17:27 +09:00 |
| logging | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora | [Model] LoRA gptbigcode implementation (#3949) | 2024-05-22 13:58:59 -07:00 |
| model_executor | [Bugfix] gptq_marlin: Ensure g_idx_sort_indices is not a Parameter (#5108) | 2024-05-30 00:30:18 +00:00 |
| spec_decode | [Dynamic Spec Decoding] Minor fix for disabling speculative decoding (#5000) | 2024-05-25 10:00:14 -07:00 |
| transformers_utils | [Kernel][Backend][Model] Blocksparse flash attention kernel and Phi-3-Small model (#4799) | 2024-05-24 22:00:52 -07:00 |
| usage | [Frontend] Separate OpenAI Batch Runner usage from API Server (#4851) | 2024-05-17 00:42:41 +09:00 |
| worker | [Core][Optimization] remove vllm-nccl (#5091) | 2024-05-29 05:13:52 +00:00 |
| __init__.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00 |
| _custom_ops.py | [Kernel][Backend][Model] Blocksparse flash attention kernel and Phi-3-Small model (#4799) | 2024-05-24 22:00:52 -07:00 |
| block.py | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00 |
| config.py | [Bugfix / Core] Prefix Caching Guards (merged with main) (#4846) | 2024-05-27 15:18:17 -07:00 |
| envs.py | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| inputs.py | [Core] Avoid the need to pass None values to Sequence.inputs (#5099) | 2024-05-29 16:05:01 -07:00 |
| logger.py | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| outputs.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00 |
| pooling_params.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Core]: Option To Use Prompt Token Ids Inside Logits Processor (#4985) | 2024-05-23 22:04:24 +00:00 |
| sequence.py | [Core] Avoid the need to pass None values to Sequence.inputs (#5099) | 2024-05-29 16:05:01 -07:00 |
| utils.py | [Bugfix][CI/Build] Fix test and improve code for merge_async_iterators (#5096) | 2024-05-29 16:02:25 -07:00 |