vllm/cacheflow (directory listing as of 2023-05-21 17:04:18 -07:00)
Name                 Last commit                                         Date
core                 Implement stop strings and best_of (#114)           2023-05-21 11:18:00 -07:00
entrypoints          Introduce LLM class for offline inference (#115)    2023-05-21 17:04:18 -07:00
model_executor       Implement stop strings and best_of (#114)           2023-05-21 11:18:00 -07:00
server               Introduce LLM class for offline inference (#115)    2023-05-21 17:04:18 -07:00
worker               Refactor system architecture (#109)                 2023-05-20 13:06:59 -07:00
__init__.py          Introduce LLM class for offline inference (#115)    2023-05-21 17:04:18 -07:00
block.py             Add docstrings to some modules and classes (#100)   2023-05-14 22:32:38 -07:00
config.py            Introduce LLM class for offline inference (#115)    2023-05-21 17:04:18 -07:00
logger.py            Add a system logger (#85)                           2023-05-08 23:03:35 -07:00
outputs.py           Introduce LLM class for offline inference (#115)    2023-05-21 17:04:18 -07:00
sampling_params.py   Implement stop strings and best_of (#114)           2023-05-21 11:18:00 -07:00
sequence.py          Implement stop strings and best_of (#114)           2023-05-21 11:18:00 -07:00
utils.py             Refactor system architecture (#82)                  2023-05-09 15:30:12 -07:00