vllm/vllm
Latest commit 17c3103c56 by Philipp Moritz (2024-03-03 16:19:13 -08:00):
Make it easy to profile workers with nsight (#3162)
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Name | Last commit message | Last commit date
core/ | [FIX] Fix styles in automatic prefix caching & add a automatic prefix caching benchmark (#3158) | 2024-03-03 14:37:18 -08:00
engine/ | Make it easy to profile workers with nsight (#3162) | 2024-03-03 16:19:13 -08:00
entrypoints/ | Add vLLM version info to logs and openai API server (#3161) | 2024-03-02 21:00:29 -08:00
lora/ | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00
model_executor/ | Integrate Marlin Kernels for Int4 GPTQ inference (#2497) | 2024-03-01 12:47:51 -08:00
transformers_utils/ | Support starcoder2 architecture (#3089) | 2024-02-29 00:51:48 -08:00
worker/ | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00
__init__.py | Bump up to v0.3.3 (#3129) | 2024-03-01 12:58:06 -08:00
block.py | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00
config.py | Make it easy to profile workers with nsight (#3162) | 2024-03-03 16:19:13 -08:00
logger.py | Make vLLM logging formatting optional (#2877) | 2024-02-20 14:38:55 -08:00
outputs.py | Add metrics to RequestOutput (#2876) | 2024-02-20 21:55:57 -08:00
py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00
sampling_params.py | [Fix] Don't deep-copy LogitsProcessors when copying SamplingParams (#3099) | 2024-02-29 19:20:42 +00:00
sequence.py | [FIX] Fix styles in automatic prefix caching & add a automatic prefix caching benchmark (#3158) | 2024-03-03 14:37:18 -08:00
test_utils.py | Use CuPy for CUDA graphs (#2811) | 2024-02-13 11:32:06 -08:00
utils.py | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00
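The latest commit shown above (#3162) touches config.py, engine/, and the worker path to make Nsight profiling of Ray workers easier to turn on. The snippet below is a minimal sketch of how such an option might be used from the offline API, assuming the commit exposes an engine-level switch named `ray_workers_use_nsight`; that option name, and the model used, are assumptions inferred from the commit title and the touched files, not confirmed by this listing.

```python
# Hedged sketch: enabling Nsight profiling of Ray workers in vLLM.
# Assumption: PR #3162 adds a `ray_workers_use_nsight` engine option that
# launches each Ray worker under `nsys profile`; verify against config.py.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",       # small example model (assumption)
    tensor_parallel_size=2,          # workers run as Ray actors when TP > 1
    ray_workers_use_nsight=True,     # assumed option added by #3162
)

outputs = llm.generate(
    ["vLLM makes LLM serving"],
    SamplingParams(temperature=0.0, max_tokens=16),
)
print(outputs[0].outputs[0].text)
```

If the option exists as assumed, the resulting Nsight reports would be written per worker process and can be opened in Nsight Systems for timeline inspection.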