vllm/tests/entrypoints/openai (latest commit: 2024-07-31 21:13:34 -07:00)
File                           Last commit message                                                                          Date
__init__.py                    [CI/Build] [3/3] Reorganize entrypoints tests (#5966)                                        2024-06-30 12:58:49 +08:00
conftest.py                    [Misc] add fixture to guided processor tests (#6341)                                         2024-07-12 09:55:39 -07:00
test_basic.py                  [Bugfix][Frontend] Fix missing /metrics endpoint (#6463)                                     2024-07-19 03:55:13 +00:00
test_chat.py                   [Frontend] Add Usage data in each chunk for chat_serving. #6540 (#6652)                      2024-07-23 11:41:55 -07:00
test_completion.py             [Frontend] New allowed_token_ids decoding request parameter (#6753)                          2024-07-29 23:37:27 +00:00
test_embedding.py              [Bugfix] Fix encoding_format in examples/openai_embedding_client.py (#6755)                  2024-07-24 22:48:07 -07:00
test_guided_processors.py      [Misc] add fixture to guided processor tests (#6341)                                         2024-07-12 09:55:39 -07:00
test_models.py                 [Doc][CI/Build] Update docs and tests to use vllm serve (#6431)                              2024-07-17 07:43:21 +00:00
test_oot_registration.py       [CI/Build] [3/3] Reorganize entrypoints tests (#5966)                                        2024-06-30 12:58:49 +08:00
test_return_tokens_as_ids.py   [Frontend] Represent tokens with identifiable strings (#6626)                                2024-07-25 09:51:00 +08:00
test_run_batch.py              [BugFix] BatchResponseData body should be optional (#6345)                                   2024-07-15 04:06:09 +00:00
test_serving_chat.py           [Bugfix] Set SamplingParams.max_tokens for OpenAI requests if not provided by user (#6954)   2024-07-31 21:13:34 -07:00
test_tokenization.py           [BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227)                                 2024-07-18 15:13:30 +08:00
test_vision.py                 [Misc] Manage HTTP connections in one place (#6600)                                          2024-07-22 21:32:02 -07:00