vllm/vllm

Latest commit: c0c2335ce0 "Integrate Marlin Kernels for Int4 GPTQ inference (#2497)" by Robert Shaw, 2024-03-01 12:47:51 -08:00
Co-authored-by: Robert Shaw <114415538+rib-2@users.noreply.github.com>
Co-authored-by: alexm <alexm@neuralmagic.com>
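The featured commit adds Marlin int4 kernels as a serving path for GPTQ-quantized models. As a minimal sketch of how this surfaces to users, assuming a checkpoint already repacked into Marlin's int4 weight format (the model id below is hypothetical):

```python
from vllm import LLM

# Hypothetical model id, shown for illustration only; a GPTQ checkpoint
# serialized in Marlin's int4 layout is required for the "marlin" backend.
llm = LLM(model="some-org/llama-2-7b-marlin", quantization="marlin")
print(llm.generate("Hello, my name is")[0].outputs[0].text)
```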
| Name | Last commit | Date |
| --- | --- | --- |
| core/ | chore(vllm): codespell for spell checking (#2820) | 2024-02-21 18:56:01 -08:00 |
| engine/ | Fix: Output text is always truncated in some models (#3016) | 2024-03-01 07:52:22 +00:00 |
| entrypoints/ | fix relative import path of protocol.py (#3134) | 2024-03-01 19:42:06 +00:00 |
| lora/ | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
| model_executor/ | Integrate Marlin Kernels for Int4 GPTQ inference (#2497) | 2024-03-01 12:47:51 -08:00 |
| transformers_utils/ | Support starcoder2 architecture (#3089) | 2024-02-29 00:51:48 -08:00 |
| worker/ | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
| __init__.py | Bump up version to v0.3.2 (#2968) | 2024-02-21 11:47:25 -08:00 |
| block.py | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| config.py | Integrate Marlin Kernels for Int4 GPTQ inference (#2497) | 2024-03-01 12:47:51 -08:00 |
| logger.py | Make vLLM logging formatting optional (#2877) | 2024-02-20 14:38:55 -08:00 |
| outputs.py | Add metrics to RequestOutput (#2876) | 2024-02-20 21:55:57 -08:00 |
| prefix.py | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Fix] Don't deep-copy LogitsProcessors when copying SamplingParams (#3099) | 2024-02-29 19:20:42 +00:00 |
| sequence.py | Support per-request seed (#2514) | 2024-02-21 11:47:00 -08:00 |
| test_utils.py | Use CuPy for CUDA graphs (#2811) | 2024-02-13 11:32:06 -08:00 |
| utils.py | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00 |
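For orientation, `__init__.py` re-exports the package's public entry points (`LLM`, `SamplingParams`, and related classes), and the latest commit on sequence.py and sampling_params.py (#2514) wires per-request seeds through sampling. A minimal offline-inference sketch, using a small placeholder model id for illustration:

```python
from vllm import LLM, SamplingParams  # public API re-exported in vllm/__init__.py

llm = LLM(model="facebook/opt-125m")  # placeholder model id for illustration

# Per-request seed (#2514): the same seed reproduces this request's samples
# without pinning any global RNG state.
params = SamplingParams(temperature=0.8, top_p=0.95, seed=42)

for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```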