Directory: vllm/vllm/model_executor

Latest commit: e2afb03c92 by Thomas Parnell, 2024-06-14 20:28:11 +00:00
  [Bugfix] Enable loading FP8 checkpoints for gpt_bigcode models (#5460)
  Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Contents (name, latest commit, date):

  guided_decoding/       [Frontend][Core] Update Outlines Integration from FSM to Guide (#4109)        2024-06-05 16:49:12 -07:00
  layers/                [Misc] Rs/compressed tensors cleanup (#5432)                                   2024-06-14 10:01:46 -07:00
  model_loader/          [Frontend][Core] Support for sharded tensorized models (#4990)                 2024-06-12 14:13:52 -07:00
  models/                [Bugfix] Enable loading FP8 checkpoints for gpt_bigcode models (#5460)         2024-06-14 20:28:11 +00:00
  __init__.py            [Core] Refactor Attention Take 2 (#3462)                                       2024-03-25 04:39:33 +00:00
  custom_op.py           [Hardware] Initial TPU integration (#5292)                                     2024-06-12 11:53:03 -07:00
  pooling_metadata.py    [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734)             2024-05-11 11:30:37 -07:00
  sampling_metadata.py   [Core] Avoid copying prompt/output tokens if no penalties are used (#5289)     2024-06-06 18:12:00 -07:00
  utils.py               [Hardware][Neuron] Refactor neuron support (#3471)                             2024-03-22 01:22:17 +00:00