Directory listing of `vllm/vllm` (latest commit: 2024-06-21 22:25:14 -07:00):
| Name | Latest commit | Date |
|------|---------------|------|
| `attention/` | [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend (#3814) | 2024-06-17 11:01:25 -07:00 |
| `core/` | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| `distributed/` | [Core][Distributed] add shm broadcast (#5399) | 2024-06-21 05:12:35 +00:00 |
| `engine/` | [LoRA] Add support for pinning lora adapters in the LRU cache (#5603) | 2024-06-21 15:42:46 -07:00 |
| `entrypoints/` | [Misc] Remove #4789 workaround left in vllm/entrypoints/openai/run_batch.py (#5756) | 2024-06-22 03:33:12 +00:00 |
| `executor/` | [Bugfix] Fix pin_lora error in TPU executor (#5760) | 2024-06-21 22:25:14 -07:00 |
| `logging/` | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| `lora/` | [LoRA] Add support for pinning lora adapters in the LRU cache (#5603) | 2024-06-21 15:42:46 -07:00 |
| `model_executor/` | [Model] Support Qwen-VL and Qwen-VL-Chat models with text-only inputs (#5710) | 2024-06-22 02:07:08 +00:00 |
| `multimodal/` | [Model] Initialize Phi-3-vision support (#4986) | 2024-06-17 19:34:33 -07:00 |
| `spec_decode/` | [Model] MLPSpeculator speculative decoding support (#4947) | 2024-06-20 20:23:12 -04:00 |
| `transformers_utils/` | [Model] MLPSpeculator speculative decoding support (#4947) | 2024-06-20 20:23:12 -04:00 |
| `usage/` | [Misc] Add vLLM version getter to utils (#5098) | 2024-06-13 11:21:39 -07:00 |
| `worker/` | [LoRA] Add support for pinning lora adapters in the LRU cache (#5603) | 2024-06-21 15:42:46 -07:00 |
| `__init__.py` | [Misc] Add vLLM version getter to utils (#5098) | 2024-06-13 11:21:39 -07:00 |
| `_custom_ops.py` | [Bugfix] Fix the CUDA version check for FP8 support in the CUTLASS kernels (#5715) | 2024-06-20 18:36:10 +00:00 |
| `_ipex_ops.py` | [Kernel][CPU] Add Quick gelu to CPU (#5717) | 2024-06-21 06:39:40 +00:00 |
| `block.py` | [misc][typo] fix typo (#5620) | 2024-06-17 20:54:57 -07:00 |
| `config.py` | [Model] MLPSpeculator speculative decoding support (#4947) | 2024-06-20 20:23:12 -04:00 |
| `envs.py` | [Core][Distributed] add shm broadcast (#5399) | 2024-06-21 05:12:35 +00:00 |
| `inputs.py` | [Bugfix] TYPE_CHECKING for MultiModalData (#5444) | 2024-06-12 14:08:52 -07:00 |
| `logger.py` | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| `outputs.py` | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00 |
| `pooling_params.py` | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | [Core]: Option To Use Prompt Token Ids Inside Logits Processor (#4985) | 2024-05-23 22:04:24 +00:00 |
| `sequence.py` | [Model] MLPSpeculator speculative decoding support (#4947) | 2024-06-20 20:23:12 -04:00 |
| `tracing.py` | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| `utils.py` | [LoRA] Add support for pinning lora adapters in the LRU cache (#5603) | 2024-06-21 15:42:46 -07:00 |
| `version.py` | bump version to v0.5.0.post1 (#5522) | 2024-06-13 19:42:06 -07:00 |
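The package root ties these modules together: `vllm/__init__.py` re-exports the `LLM` entrypoint and `SamplingParams` (defined in `sampling_params.py` above). A minimal offline-inference sketch against this layout, as of v0.5.0.post1; the model name is illustrative, not an assertion about what the tree ships with:

```python
from vllm import LLM, SamplingParams

# LLM wraps the engine/, executor/, and worker/ layers listed above.
llm = LLM(model="facebook/opt-125m")  # illustrative model choice

# Sampling knobs live in sampling_params.py.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() returns RequestOutput objects (see outputs.py).
for output in llm.generate(["Hello, my name is"], params):
    print(output.prompt, "->", output.outputs[0].text)
```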