| Name | Latest commit | Last commit date |
| --- | --- | --- |
| `core` | chore(vllm): codespell for spell checking (#2820) | 2024-02-21 18:56:01 -08:00 |
| `engine` | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
| `entrypoints` | Support logit bias for OpenAI API (#3027) (see sketch below) | 2024-02-27 11:51:53 +08:00 |
| `lora` | [Neuron] Support inference with transformers-neuronx (#2569) (see LoRA sketch below) | 2024-02-28 09:34:34 -08:00 |
| `model_executor` | Add LoRA support for Gemma (#3050) | 2024-02-28 13:03:28 -08:00 |
| `transformers_utils` | [Minor] Remove unused config files (#3039) | 2024-02-26 17:25:22 -08:00 |
| `worker` | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
| `__init__.py` | Bump up version to v0.3.2 (#2968) | 2024-02-21 11:47:25 -08:00 |
| `block.py` | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| `config.py` | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
| `logger.py` | Make vLLM logging formatting optional (#2877) | 2024-02-20 14:38:55 -08:00 |
| `outputs.py` | Add metrics to RequestOutput (#2876) | 2024-02-20 21:55:57 -08:00 |
| `prefix.py` | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | Support per-request seed (#2514) (see sketch below) | 2024-02-21 11:47:00 -08:00 |
| `sequence.py` | Support per-request seed (#2514) | 2024-02-21 11:47:00 -08:00 |
| `test_utils.py` | Use CuPy for CUDA graphs (#2811) | 2024-02-13 11:32:06 -08:00 |
| `utils.py` | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
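The logit-bias support (#3027) under `entrypoints` applies to vLLM's OpenAI-compatible server (`python -m vllm.entrypoints.openai.api_server`). A minimal sketch using the `openai` Python client against a locally served model; the base URL, model name, and token id here are assumptions for illustration:

```python
from openai import OpenAI

# Point the client at a locally running vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="facebook/opt-125m",   # whatever model the server was started with
    prompt="My favorite color is",
    max_tokens=5,
    # Map of token id -> bias in [-100, 100]; -100 effectively bans the token.
    # The token id 1234 is a placeholder, not a meaningful vocabulary entry.
    logit_bias={1234: -100},
)
print(completion.choices[0].text)
```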
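The LoRA-related entries (`lora`, multi-LoRA in #1804, Gemma LoRA in #3050) surface as per-request adapters in the offline API. A minimal sketch, assuming vLLM at this snapshot; the base model and adapter path are illustrative, not taken from the listing:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# LoRA serving must be enabled when the engine is constructed.
llm = LLM(model="google/gemma-2b", enable_lora=True)

# Adapter name, integer id, and local path are all hypothetical.
lora = LoRARequest("my-adapter", 1, "/path/to/lora_adapter")

outputs = llm.generate(
    ["Summarize: vLLM is a fast LLM serving engine."],
    SamplingParams(max_tokens=64),
    lora_request=lora,  # apply the adapter to this request only
)
print(outputs[0].outputs[0].text)
```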
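The per-request seed feature (#2514) listed against `sampling_params.py` and `sequence.py` is exposed through `SamplingParams`. A minimal sketch, assuming vLLM v0.3.2+; the model choice is purely illustrative:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")

# Each request carries its own seed: resubmitting the same prompt with
# the same seed should reproduce the same sampled output.
params = SamplingParams(temperature=0.8, seed=42)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```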