vllm/vllm/v1 (last updated 2024-11-04 00:24:40 +00:00)
| Name | Last commit | Last commit date |
| --- | --- | --- |
| attention/ | [torch.compile] directly register custom op (#9896) | 2024-10-31 21:56:09 -07:00 |
| core/ | [V1] Implement vLLM V1 [1/N] (#9289) | 2024-10-22 01:24:07 -07:00 |
| engine/ | [2/N] executor pass the complete config to worker/modelrunner (#9938) | 2024-11-02 07:35:05 -07:00 |
| executor/ | [V1] Fix Configs (#9971) | 2024-11-04 00:24:40 +00:00 |
| sample/ | [V1] Support per-request seed (#9945) | 2024-11-03 09:14:17 -08:00 |
| tokenizer/ | [V1] Implement vLLM V1 [1/N] (#9289) | 2024-10-22 01:24:07 -07:00 |
| worker/ | [V1] Support per-request seed (#9945) | 2024-11-03 09:14:17 -08:00 |
| outputs.py | [V1] Implement vLLM V1 [1/N] (#9289) | 2024-10-22 01:24:07 -07:00 |
| request.py | [V1] Implement vLLM V1 [1/N] (#9289) | 2024-10-22 01:24:07 -07:00 |