vllm/vllm
Latest commit a22dea54d3 by SnowDist: [Model] Support MAP-NEO model (#5081)
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-05-30 19:24:41 -07:00
attention/ [Model] Support MAP-NEO model (#5081) 2024-05-30 19:24:41 -07:00
core/ [Core] Cross-attention KV caching and memory-management (towards eventual encoder/decoder model support) (#4837) 2024-05-29 16:09:13 +00:00
distributed/ [Core][Distributed] improve p2p access check (#4992) 2024-05-29 11:29:07 +00:00
engine/ [Doc] Use intersphinx and update entrypoints docs (#5125) 2024-05-30 09:59:23 -07:00
entrypoints/ [Doc] Use intersphinx and update entrypoints docs (#5125) 2024-05-30 09:59:23 -07:00
executor/ [Core] Eliminate parallel worker per-step task scheduling overhead (#4894) 2024-05-23 06:17:27 +09:00
logging/ [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) 2024-05-01 17:34:40 -07:00
lora/ [Model] LoRA gptbigcode implementation (#3949) 2024-05-22 13:58:59 -07:00
model_executor/ [Bugfix] Avoid Warnings in SparseML Activation Quantization (#5120) 2024-05-30 17:04:37 -07:00
spec_decode/ [Dynamic Spec Decoding] Minor fix for disabling speculative decoding (#5000) 2024-05-25 10:00:14 -07:00
transformers_utils/ [Kernel][Backend][Model] Blocksparse flash attention kernel and Phi-3-Small model (#4799) 2024-05-24 22:00:52 -07:00
usage/ [Frontend] Separate OpenAI Batch Runner usage from API Server (#4851) 2024-05-17 00:42:41 +09:00
worker/ [Misc] remove duplicate definition of seq_lens_tensor in model_runner.py (#5129) 2024-05-30 06:56:19 -07:00
__init__.py Bump version to v0.4.3 (#5046) 2024-05-30 11:13:46 -07:00
_custom_ops.py [Kernel][Backend][Model] Blocksparse flash attention kernel and Phi-3-Small model (#4799) 2024-05-24 22:00:52 -07:00
block.py Add Automatic Prefix Caching (#2762) 2024-03-02 00:50:01 -08:00
config.py [Bugfix] Automatically Detect SparseML models (#5119) 2024-05-30 12:58:37 +00:00
envs.py [Misc] add logging level env var (#5045) 2024-05-24 23:49:49 -07:00
inputs.py [Core] Avoid the need to pass None values to Sequence.inputs (#5099) 2024-05-29 16:05:01 -07:00
logger.py [Misc] add logging level env var (#5045) 2024-05-24 23:49:49 -07:00
outputs.py [Core] Consolidate prompt arguments to LLM engines (#4328) 2024-05-28 13:29:31 -07:00
pooling_params.py [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) 2024-05-11 11:30:37 -07:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Core]: Option To Use Prompt Token Ids Inside Logits Processor (#4985) 2024-05-23 22:04:24 +00:00
sequence.py [Core] Avoid the need to pass None values to Sequence.inputs (#5099) 2024-05-29 16:05:01 -07:00
utils.py [Bugfix][CI/Build] Fix test and improve code for merge_async_iterators (#5096) 2024-05-29 16:02:25 -07:00