vllm/tests/entrypoints/openai
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [CI/Build] [3/3] Reorganize entrypoints tests (#5966) | 2024-06-30 12:58:49 +08:00 |
| test_accuracy.py | Add output streaming support to multi-step + async while ensuring RequestOutput obj reuse (#8335) | 2024-09-23 15:38:04 -07:00 |
| test_audio.py | [Misc][OpenAI] deprecate max_tokens in favor of new max_completion_tokens field for chat completion endpoint (#9837) | 2024-10-30 18:15:56 -07:00 |
| test_basic.py | [Frontend] Chat-based Embeddings API (#9759) | 2024-11-01 08:13:35 +00:00 |
| test_chat_template.py | [Frontend] Added support for HF's new continue_final_message parameter (#8942) | 2024-09-29 17:59:47 +00:00 |
| test_chat.py | [Misc][OpenAI] deprecate max_tokens in favor of new max_completion_tokens field for chat completion endpoint (#9837) | 2024-10-30 18:15:56 -07:00 |
| test_chunked_prompt.py | [Bugfix] Fix vLLM UsageInfo and logprobs None AssertionError with empty token_ids (#9034) | 2024-10-15 15:40:43 -07:00 |
| test_cli_args.py | [Frontend] Add Early Validation For Chat Template / Tool Call Parser (#9151) | 2024-10-08 14:31:26 +00:00 |
| test_completion.py | [Bugfix][Frontend] Guard against bad token ids (#9634) | 2024-10-29 14:13:20 -07:00 |
| test_embedding.py | [Frontend] Chat-based Embeddings API (#9759) | 2024-11-01 08:13:35 +00:00 |
| test_encoder_decoder.py | [Tests] Disable retries and use context manager for openai client (#7565) | 2024-08-26 21:33:17 -07:00 |
| test_lora_lineage.py | [Core] Support Lora lineage and base model metadata management (#6315) | 2024-09-20 06:20:56 +00:00 |
| test_metrics.py | [Frontend] Chat-based Embeddings API (#9759) | 2024-11-01 08:13:35 +00:00 |
| test_models.py | [Core] Support Lora lineage and base model metadata management (#6315) | 2024-09-20 06:20:56 +00:00 |
| test_oot_registration.py | [misc][plugin] add plugin system implementation (#7426) | 2024-08-13 16:24:17 -07:00 |
| test_prompt_validation.py | [Bugfix][Frontend] Reject guided decoding in multistep mode (#9892) | 2024-11-01 01:09:46 +00:00 |
| test_return_tokens_as_ids.py | [Tests] Disable retries and use context manager for openai client (#7565) | 2024-08-26 21:33:17 -07:00 |
| test_run_batch.py | [Frontend] Create ErrorResponse instead of raising exceptions in run_batch (#8347) | 2024-09-11 05:30:11 +00:00 |
| test_serving_chat.py | [Bugfix]: Make chat content text allow type content (#9358) | 2024-10-24 05:05:49 +00:00 |
| test_serving_engine.py | [Core] Support Lora lineage and base model metadata management (#6315) | 2024-09-20 06:20:56 +00:00 |
| test_shutdown.py | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| test_tokenization.py | [Frontend] Chat-based Embeddings API (#9759) | 2024-11-01 08:13:35 +00:00 |
| test_vision_embedding.py | [Frontend] Use a proper chat template for VLM2Vec (#9912) | 2024-11-01 14:09:07 +00:00 |
| test_vision.py | [Misc][OpenAI] deprecate max_tokens in favor of new max_completion_tokens field for chat completion endpoint (#9837) | 2024-10-30 18:15:56 -07:00 |
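
Several commits listed above (e.g. #9837) concern how the chat completion endpoint handles token limits. As a rough illustration only, not code taken from these test files, the following minimal sketch shows the kind of request such tests exercise against a vLLM OpenAI-compatible server, assuming one is already running locally at http://localhost:8000/v1; the model name used here is a placeholder. It uses the standard `openai` Python client and the newer `max_completion_tokens` field rather than the deprecated `max_tokens`.

```python
# Minimal sketch (not from the test files above): query a locally running
# vLLM OpenAI-compatible server with the standard `openai` client.
# Assumes the server was started separately, e.g.:
#   vllm serve <model-name> --port 8000
from openai import OpenAI

# vLLM's OpenAI-compatible server accepts a dummy API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

MODEL = "placeholder-model-name"  # hypothetical; use the model the server was started with

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    # Per #9837, prefer max_completion_tokens over the deprecated max_tokens
    # for the chat completion endpoint.
    max_completion_tokens=32,
)
print(response.choices[0].message.content)
```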