`vllm/vllm` directory listing (latest commit: 2024-03-13 14:18:40 -07:00)

| Name | Latest commit | Date |
| --- | --- | --- |
| `core/` | Fixes #1556 double free (#3347) | 2024-03-13 00:30:08 +00:00 |
| `engine/` | Add distributed model executor abstraction (#3191) | 2024-03-11 11:03:45 -07:00 |
| `entrypoints/` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `executor/` | [FIX] Simpler fix for async engine running on ray (#3371) | 2024-03-13 14:18:40 -07:00 |
| `lora/` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `model_executor/` | Fix lint (#3388) | 2024-03-13 13:56:49 -07:00 |
| `spec_decode/` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `transformers_utils/` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `worker/` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `__init__.py` | Add distributed model executor abstraction (#3191) | 2024-03-11 11:03:45 -07:00 |
| `block.py` | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00 |
| `config.py` | [Fix] Fix quantization="gptq" when using Marlin (#3319) | 2024-03-12 22:51:42 -07:00 |
| `logger.py` | Make vLLM logging formatting optional (#2877) | 2024-02-20 14:38:55 -08:00 |
| `outputs.py` | [Fix] Fix best_of behavior when n=1 (#3298) | 2024-03-10 19:17:46 -07:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `sequence.py` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| `test_utils.py` | Use CuPy for CUDA graphs (#2811) | 2024-02-13 11:32:06 -08:00 |
| `utils.py` | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
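The listing above shows the package's public surface: `__init__.py` re-exports the main entrypoints, `sampling_params.py` defines decoding options, and `outputs.py` holds the result objects. A minimal usage sketch of that surface follows; the model name is only a placeholder for illustration.

```python
from vllm import LLM, SamplingParams

# LLM is the offline-inference entrypoint re-exported by vllm/__init__.py.
# "facebook/opt-125m" is a small placeholder model, not a requirement.
llm = LLM(model="facebook/opt-125m")

# SamplingParams (defined in sampling_params.py) controls decoding.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() returns RequestOutput objects (defined in outputs.py),
# each holding one or more completions.
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```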