vllm/vllm (last commit: 2024-05-16 22:42:29 +00:00)
Name               | Last commit                                                                                       | Date
attention          | [Bugfix] Fix FP8 KV cache support (#4869) | 2024-05-16 22:42:29 +00:00
core               | [Scheduler] Warning upon preemption and Swapping (#4647) | 2024-05-13 23:50:44 +09:00
distributed        | [Core][Distributed] remove graph mode function (#4818) | 2024-05-16 10:59:52 -07:00
engine             | [Bugfix] Properly set distributed_executor_backend in ParallelConfig (#4816) | 2024-05-15 07:22:09 -07:00
entrypoints        | [Bugfix] Bypass authorization API token for preflight requests (#4862) | 2024-05-16 09:42:21 -07:00
executor           | [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840) | 2024-05-16 00:53:51 -07:00
logging            | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00
lora               | [Core] Faster startup for LoRA enabled models (#4634) | 2024-05-08 10:33:18 -07:00
model_executor     | Add GPTQ Marlin 2:4 sparse structured support (#4790) | 2024-05-16 12:56:15 -04:00
spec_decode        | [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840) | 2024-05-16 00:53:51 -07:00
transformers_utils | [Model] Snowflake arctic model implementation (#4652) | 2024-05-09 22:37:14 +00:00
usage              | [Frontend] Separate OpenAI Batch Runner usage from API Server (#4851) | 2024-05-17 00:42:41 +09:00
worker             | [Misc] remove old comments (#4866) | 2024-05-16 11:07:41 -07:00
__init__.py        | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00
_custom_ops.py     | [Kernel] Add w8a8 CUTLASS kernels (#4749) | 2024-05-16 18:32:50 -04:00
block.py           | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00
config.py          | Add GPTQ Marlin 2:4 sparse structured support (#4790) | 2024-05-16 12:56:15 -04:00
envs.py            | [Frontend] [Core] perf: Automatically detect vLLM-tensorized model, update tensorizer to version 2.9.0 (#4208) | 2024-05-13 14:57:07 -07:00
logger.py          | [Misc] centralize all usage of environment variables (#4548) | 2024-05-02 11:13:25 -07:00
outputs.py         | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00
pooling_params.py  | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00
py.typed           | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00
sampling_params.py | [Bugfix] Use random seed if seed is -1 (#4531) | 2024-05-01 10:41:17 -07:00
sequence.py        | [Core][2/N] Model runner refactoring part 2. Combine prepare prefill / decode to a single API (#4681) | 2024-05-15 14:00:10 +09:00
utils.py           | [Kernel] Refactor FP8 kv-cache with NVIDIA float8_e4m3 support (#4535) | 2024-05-09 18:04:17 -06:00