vllm/vllm/model_executor

Latest commit: 58170d6503 by Isotr0py
[Hardware][CPU] Add embedding models support for CPU backend (#10193)
Signed-off-by: Isotr0py <2037008807@qq.com>
2024-11-11 08:54:28 +00:00
Directory contents (name, last commit, date):

guided_decoding/      [Frontend] Bad words sampling parameter (#9717)   2024-10-26 16:29:38 +00:00
layers/               [Kernel][Triton] Add Triton implementation for scaled_mm_triton to support fp8 and int8 SmoothQuant, symmetric case (#9857)   2024-11-08 19:59:22 -05:00
model_loader/         [5/N] pass the whole config to model (#9983)   2024-11-09 14:17:28 +08:00
models/               [Hardware][CPU] Add embedding models support for CPU backend (#10193)   2024-11-11 08:54:28 +00:00
__init__.py           [Performance] Optimize e2e overheads: Reduce python allocations (#7162)   2024-08-08 21:34:28 -07:00
custom_op.py          [Hardware][Intel-Gaudi] Add Intel Gaudi (HPU) inference backend (#6143)   2024-11-06 01:09:10 -08:00
parameter.py          [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701)   2024-09-23 13:46:26 -04:00
pooling_metadata.py   [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734)   2024-05-11 11:30:37 -07:00
sampling_metadata.py  [Hardware][Intel-Gaudi] Add Intel Gaudi (HPU) inference backend (#6143)   2024-11-06 01:09:10 -08:00
utils.py              [Hardware] using current_platform.seed_everything (#9785)   2024-10-29 14:47:44 +00:00