vllm/vllm

Latest commit c9d852d601 (Philipp Moritz, 2024-05-01 16:30:52 -07:00):
[Misc] Remove Mixtral device="cuda" declarations (#4543)
Remove the device="cuda" declarations in mixtral as promised in #4343.
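For context on the latest commit, below is a minimal sketch of the pattern it describes: dropping explicit device="cuda" arguments from weight construction so that placement follows the surrounding default-device context instead. The module name, shapes, and structure are illustrative assumptions, not vLLM's actual Mixtral implementation.

```python
import torch
import torch.nn as nn


class ToyExpertWeights(nn.Module):
    """Hypothetical MoE expert weights; illustrative only, not vLLM code."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # Previously this kind of tensor might be created with an explicit
        # device="cuda" (e.g. torch.empty(..., device="cuda")), pinning it to
        # CUDA at construction time. Omitting device= lets the caller decide,
        # for example via a torch.device(...) context during model loading.
        self.w13 = nn.Parameter(
            torch.empty(2 * intermediate_size, hidden_size))
        self.w2 = nn.Parameter(
            torch.empty(hidden_size, intermediate_size))


# Usage sketch: choose the device at construction time from the outside
# instead of hard-coding "cuda" inside the layer definition.
with torch.device("cuda" if torch.cuda.is_available() else "cpu"):
    experts = ToyExpertWeights(hidden_size=4096, intermediate_size=14336)
```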
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention/ | [Misc]Add customized information for models (#4132) | 2024-04-30 21:18:14 -07:00 |
| core/ | [Core] Enable prefix caching with block manager v2 enabled (#4142) | 2024-05-01 11:20:32 -07:00 |
| distributed/ | [Core][Distributed] fix pynccl del error (#4508) | 2024-05-01 15:23:06 -07:00 |
| engine/ | [Bugfix][Core] Fix and refactor logging stats (#4336) | 2024-05-01 20:08:14 +00:00 |
| entrypoints/ | [Bugfix] Add validation for seed (#4529) | 2024-05-01 19:31:22 +00:00 |
| executor/ | [Core] Add multiproc_worker_utils for multiprocessing-based workers (#4357) | 2024-05-01 18:41:59 +00:00 |
| lora/ | [Kernel] Full Tensor Parallelism for LoRA Layers (#3524) | 2024-04-27 00:03:48 -07:00 |
| model_executor/ | [Misc] Remove Mixtral device="cuda" declarations (#4543) | 2024-05-01 16:30:52 -07:00 |
| spec_decode/ | [Speculative decoding] Add ngram prompt lookup decoding (#4237) | 2024-05-01 11:13:03 -07:00 |
| transformers_utils/ | fix_tokenizer_snapshot_download_bug (#4493) | 2024-04-30 16:38:50 -07:00 |
| usage/ | [mypy] Add mypy type annotation part 1 (#4006) | 2024-04-12 14:35:50 -07:00 |
| worker/ | [Core] Refactoring sampler and support prompt logprob for chunked prefill (#4309) | 2024-04-26 13:02:02 +00:00 |
| __init__.py | [Core] Move ray_utils.py from engine to executor package (#4347) | 2024-04-25 06:52:22 +00:00 |
| _custom_ops.py | [Core]Refactor gptq_marlin ops (#4466) | 2024-04-30 08:14:47 -04:00 |
| block.py | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00 |
| config.py | [Speculative decoding] Add ngram prompt lookup decoding (#4237) | 2024-05-01 11:13:03 -07:00 |
| logger.py | [CI] Disable non-lazy string operation on logging (#4326) | 2024-04-26 00:16:58 -07:00 |
| outputs.py | [BugFix] Fix handling of stop strings and stop token ids (#3672) | 2024-04-11 15:34:12 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Bugfix] Use random seed if seed is -1 (#4531) | 2024-05-01 10:41:17 -07:00 |
| sequence.py | Add more Prometheus metrics (#2764) | 2024-04-28 15:59:33 -07:00 |
| test_utils.py | [Core][Refactor] move parallel_utils into vllm/distributed (#3950) | 2024-04-10 15:33:30 -07:00 |
| utils.py | [Bugfix] Abort requests when the connection to /v1/completions is interrupted (#4363) | 2024-04-27 09:48:37 -07:00 |