| Name | Last commit | Last commit date |
|---|---|---|
| attention | [Bugfix][Kernel] Add head size check for attention backend selection (#4944) | 2024-05-21 15:33:25 -04:00 |
| core | [Core] Fix scheduler considering "no LoRA" as "LoRA" (#4897) | 2024-05-20 17:48:32 -07:00 |
| distributed | [Core][Distributed] remove graph mode function (#4818) | 2024-05-16 10:59:52 -07:00 |
| engine | [Bugfix] Fix flag name for max_seq_len_to_capture (#4935) | 2024-05-21 09:30:52 -07:00 |
| entrypoints | [Frontend] OpenAI API server: Do not add bos token by default when encoding (#4688) | 2024-05-16 18:47:22 -07:00 |
| executor | [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840) | 2024-05-16 00:53:51 -07:00 |
| logging | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora | [Lora] Support long context lora (#4787) | 2024-05-18 16:05:23 +09:00 |
| model_executor | [Model] Add Phi-2 LoRA support (#4886) | 2024-05-21 14:24:17 +09:00 |
| spec_decode | [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840) | 2024-05-16 00:53:51 -07:00 |
| transformers_utils | [Lora] Support long context lora (#4787) | 2024-05-18 16:05:23 +09:00 |
| usage | [Frontend] Separate OpenAI Batch Runner usage from API Server (#4851) | 2024-05-17 00:42:41 +09:00 |
| worker | [Lora] Support long context lora (#4787) | 2024-05-18 16:05:23 +09:00 |
| __init__.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| _custom_ops.py | [Kernel] Add w8a8 CUTLASS kernels (#4749) | 2024-05-16 18:32:50 -04:00 |
| block.py | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00 |
| config.py | [Lora] Support long context lora (#4787) | 2024-05-18 16:05:23 +09:00 |
| envs.py | [Misc]: allow user to specify port in distributed setting (#4914) | 2024-05-20 17:45:06 +00:00 |
| logger.py | [Misc] centralize all usage of environment variables (#4548) | 2024-05-02 11:13:25 -07:00 |
| outputs.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| pooling_params.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Bugfix] Use random seed if seed is -1 (#4531) | 2024-05-01 10:41:17 -07:00 |
| sequence.py | [Core][2/N] Model runner refactoring part 2. Combine prepare prefill / decode to a single API (#4681) | 2024-05-15 14:00:10 +09:00 |
| utils.py | [Misc]: allow user to specify port in distributed setting (#4914) | 2024-05-20 17:45:06 +00:00 |