vllm/vllm/model_executor
Latest commit: d081da0064 — [Bugfix] Fix Marlin MoE act order when is_k_full == False (#8741)
Author: ElizaWszola, Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Date: 2024-09-28 18:19:40 -07:00
guided_decoding Revert "[Misc][Bugfix] Disable guided decoding for mistral tokenizer" (#8593) 2024-09-19 04:14:28 +00:00
layers [Bugfix] Fix Marlin MoE act order when is_k_full == False (#8741) 2024-09-28 18:19:40 -07:00
model_loader [Misc] Upgrade bitsandbytes to the latest version 0.44.0 (#8768) 2024-09-24 17:08:55 -07:00
models [Bugfix][VLM] Fix Fuyu batching inference with max_num_seqs>1 (#8892) 2024-09-27 01:15:58 -07:00
__init__.py [Performance] Optimize e2e overheads: Reduce python allocations (#7162) 2024-08-08 21:34:28 -07:00
custom_op.py [torch.compile] add a flag to disable custom op (#8488) 2024-09-14 13:07:16 -07:00
parameter.py [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701) 2024-09-23 13:46:26 -04:00
pooling_metadata.py [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) 2024-05-11 11:30:37 -07:00
sampling_metadata.py [refactor] remove triton based sampler (#8524) 2024-09-16 20:04:48 -07:00
utils.py [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
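
The subpackages and modules listed above form vLLM's model execution layer: layers and models hold the kernel and architecture implementations, model_loader handles weight loading, and sampling_metadata.py / pooling_metadata.py carry per-request sampling and pooling state. They are normally exercised through vLLM's public LLM entry point rather than imported directly. A minimal usage sketch follows; the model name and sampling values are illustrative assumptions, not taken from the listing above.

# Minimal sketch: driving the model_executor stack through vLLM's public API.
# Model name and sampling values are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                     # weight loading goes through model_loader
params = SamplingParams(temperature=0.8, max_tokens=64)  # turned into per-request sampling state
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)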