add FAQ doc under 'serving' (#5946)
parent 12a59959ed
commit 83bdcb6ac3
@@ -84,6 +84,7 @@ Documentation
    serving/usage_stats
    serving/integrations
    serving/tensorizer
+   serving/faq

 .. toctree::
    :maxdepth: 1
docs/source/serving/faq.rst (new file, 12 lines added)
@@ -0,0 +1,12 @@
+Frequently Asked Questions
+==========================
+
+Q: How can I serve multiple models on a single port using the OpenAI API?
+
+A: Assuming you're referring to the OpenAI-compatible server, serving multiple models at once is not currently supported. You can, however, run multiple instances of the server (each serving a different model) at the same time, and add another layer in front to route each incoming request to the correct server.
+
+----------------------------------------
+
+Q: Which model should I use for offline inference embedding?
+
+A: If you want an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Models such as Llama-3-8b and Mistral-7B-Instruct-v0.3 are generation models rather than embedding models.
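The routing layer mentioned in the first answer can be sketched as follows. Everything here is an illustrative assumption rather than part of vLLM: the model names, the ports, and the `pick_backend` helper are all hypothetical, and a real deployment would forward the HTTP request to the chosen backend rather than just printing its URL.

```python
# Hypothetical sketch of the routing layer from the FAQ answer: each
# vLLM OpenAI-compatible server instance serves one model on its own
# port, and the router dispatches by the "model" field of the request.
import json

# Assumed mapping (not part of vLLM): model name -> backend base URL.
MODEL_BACKENDS = {
    "meta-llama/Meta-Llama-3-8B-Instruct": "http://localhost:8000",
    "intfloat/e5-mistral-7b-instruct": "http://localhost:8001",
}

def pick_backend(request_body: bytes) -> str:
    """Return the backend base URL for the model named in the request."""
    model = json.loads(request_body).get("model")
    try:
        return MODEL_BACKENDS[model]
    except KeyError:
        raise ValueError(f"no backend configured for model {model!r}")

# Example: an OpenAI-style payload is routed by its "model" field.
body = json.dumps({"model": "intfloat/e5-mistral-7b-instruct",
                   "input": "hello"}).encode()
print(pick_backend(body))  # -> http://localhost:8001
```

In practice this dispatch would sit inside a reverse proxy or a small ASGI app that streams the request body to the selected server and relays the response back to the client.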