vllm/vllm
Latest commit dfeb2ecc3a by Nick Hill: [Misc] Include matched stop string/token in responses (#2976)
Co-authored-by: Sahil Suneja <sahilsuneja@gmail.com>
2024-03-25 17:31:32 -07:00
attention [CI] Try introducing isort. (#3495) 2024-03-25 07:59:47 -07:00
core [Feature] Add vision language model support. (#3042) 2024-03-25 14:16:30 -07:00
engine [Misc] Include matched stop string/token in responses (#2976) 2024-03-25 17:31:32 -07:00
entrypoints [Misc] Include matched stop string/token in responses (#2976) 2024-03-25 17:31:32 -07:00
executor [Feature] Add vision language model support. (#3042) 2024-03-25 14:16:30 -07:00
lora [CI] Try introducing isort. (#3495) 2024-03-25 07:59:47 -07:00
model_executor Optimize _get_ranks in Sampler (#3623) 2024-03-25 16:03:02 -07:00
spec_decode [CI] Try introducing isort. (#3495) 2024-03-25 07:59:47 -07:00
transformers_utils [Feature] Add vision language model support. (#3042) 2024-03-25 14:16:30 -07:00
worker [Feature] Add vision language model support. (#3042) 2024-03-25 14:16:30 -07:00
__init__.py Add distributed model executor abstraction (#3191) 2024-03-11 11:03:45 -07:00
block.py Add Automatic Prefix Caching (#2762) 2024-03-02 00:50:01 -08:00
config.py [Feature] Add vision language model support. (#3042) 2024-03-25 14:16:30 -07:00
logger.py [CI] Try introducing isort. (#3495) 2024-03-25 07:59:47 -07:00
outputs.py [Misc] Include matched stop string/token in responses (#2976) 2024-03-25 17:31:32 -07:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py feat: implement the min_tokens sampling parameter (#3124) 2024-03-25 10:14:26 -07:00
sequence.py [Misc] Include matched stop string/token in responses (#2976) 2024-03-25 17:31:32 -07:00
test_utils.py Use CuPy for CUDA graphs (#2811) 2024-02-13 11:32:06 -08:00
utils.py [Feature] Add vision language model support. (#3042) 2024-03-25 14:16:30 -07:00
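
Two of the entries above lend themselves to a quick illustration. The sketch below is not taken from this tree: it exercises the min_tokens sampling parameter noted for sampling_params.py (#3124) together with the matched stop string/token surfaced in responses per #2976 (outputs.py, sequence.py, engine, entrypoints). The model name is only an example, and the stop_reason field is an assumption inferred from the commit message rather than verified against this revision.

```python
# Hedged sketch: min_tokens (#3124) plus the matched stop string/token (#2976).
# `stop_reason` is assumed from the commit message, not confirmed in this tree.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example model; any small model works

params = SamplingParams(
    temperature=0.8,
    min_tokens=16,   # force at least 16 tokens before stop/EOS is honored (#3124)
    max_tokens=64,
    stop=["\n\n"],   # stop strings; the matched one is reported back (#2976)
)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    completion = out.outputs[0]
    # finish_reason says *why* generation ended; stop_reason (assumed field)
    # says *which* stop string or stop token id was matched, if any.
    print(completion.text, completion.finish_reason, completion.stop_reason)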
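
The block.py entry references automatic prefix caching (#2762). Below is a minimal, hedged sketch of switching it on; the enable_prefix_caching engine argument is assumed from the feature name and may differ in this exact revision.

```python
# Hedged sketch of automatic prefix caching (#2762): requests sharing a long
# prompt prefix can reuse cached KV blocks instead of recomputing them.
from vllm import LLM, SamplingParams

# `enable_prefix_caching` is assumed to be accepted as an engine argument here.
llm = LLM(model="facebook/opt-125m", enable_prefix_caching=True)

system_prefix = "You are a terse assistant. Answer in one sentence.\n\n"
prompts = [system_prefix + q for q in (
    "What is KV caching?",
    "What is paged attention?",
)]

# The shared prefix tokenizes into identical blocks, so later requests can hit
# the prefix cache rather than recomputing those KV blocks.
for out in llm.generate(prompts, SamplingParams(max_tokens=32)):
    print(out.outputs[0].text)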