vllm/vllm/worker

Latest commit: 973617ae02
    [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840)
    Author: Cody Yu
    Co-authored-by: Cade Daniel <edacih@gmail.com>
    Co-authored-by: Cade Daniel <cade@anyscale.com>
    Date: 2024-05-16 00:53:51 -07:00
__init__.py                Change the name to vLLM (#150)                                                                                2023-06-17 03:07:40 -07:00
cache_engine.py            [Misc] Enhance attention selector (#4751)                                                                     2024-05-13 10:47:25 -07:00
cpu_model_runner.py        [Core][2/N] Model runner refactoring part 2. Combine prepare prefill / decode to a single API (#4681)        2024-05-15 14:00:10 +09:00
cpu_worker.py              [Misc] Enhance attention selector (#4751)                                                                     2024-05-13 10:47:25 -07:00
embedding_model_runner.py  [Core][2/N] Model runner refactoring part 2. Combine prepare prefill / decode to a single API (#4681)        2024-05-15 14:00:10 +09:00
model_runner.py            [Core] Implement sharded state loader (#4690)                                                                 2024-05-15 22:11:54 -07:00
neuron_model_runner.py     [Core][Model runner refactoring 1/N] Refactor attn metadata term (#4518)                                      2024-05-03 10:20:12 -07:00
neuron_worker.py           [Core] RayWorkerVllm --> WorkerWrapper to reduce duplication (#4024)                                          2024-04-17 08:34:33 +00:00
worker_base.py             [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840)                                      2024-05-16 00:53:51 -07:00
worker.py                  [Speculative decoding][Re-take] Enable TP>1 speculative decoding (#4840)                                      2024-05-16 00:53:51 -07:00