| Name | Last commit message | Last commit date |
| --- | --- | --- |
| async_engine | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
| basic_correctness | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
| core | [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) | 2024-07-01 20:10:37 -07:00 |
| distributed | [vlm] Remove vision language config. (#6089) | 2024-07-03 22:14:16 +00:00 |
| engine | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
| entrypoints | [Frontend] Continuous usage stats in OpenAI completion API (#5742) | 2024-07-05 10:37:09 -07:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Kernel] Correctly invoke prefill & decode kernels for cross-attention (towards eventual encoder/decoder model support) (#4888) | 2024-07-08 17:12:15 +00:00 |
| lora | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| metrics | [Misc] Extend vLLM Metrics logging API (#5925) | 2024-06-29 10:36:06 +08:00 |
| model_executor | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| models | [Bugfix] Mamba cache Cuda Graph padding (#6214) | 2024-07-08 11:25:51 -07:00 |
| multimodal | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| prefix_caching | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [Misc] Support Fp8 via llm-compressor (#6110) | 2024-07-07 20:42:11 +00:00 |
| samplers | [Speculative Decoding 2/2] Integrate typical acceptance sampler into Spec Decode Worker (#5348) | 2024-07-01 00:33:05 -07:00 |
| spec_decode | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| tensorizer_loader | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
| tokenization | [VLM] Remove image_input_type from VLM config (#5852) | 2024-07-02 07:57:09 +00:00 |
| tracing | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| worker | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| test_cache_block_hashing.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_config.py | [Frontend] Customizable RoPE theta (#5197) | 2024-06-11 10:42:26 -07:00 |
| test_inputs.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00 |
| test_logger.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_logits_processor.py | [CORE] Quantized lm-head Framework (#4442) | 2024-07-02 22:25:17 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_sequence.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| test_sharded_state_loader.py | [CI] Upgrade codespell version. (#5381) | 2024-06-12 10:06:14 -07:00 |
| test_utils.py | [CI/Build] Add unit testing for FlexibleArgumentParser (#5798) | 2024-06-25 12:18:03 -07:00 |
| utils.py | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
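A minimal sketch of running one of the listed suites locally via pytest's programmatic entry point, assuming a development checkout with the test dependencies installed; the paths mirror the directory names above, while the flags and keyword filter are illustrative and may differ from what CI actually uses:

```python
# Sketch only: run a subset of the tests/ suites listed above.
# Assumes a vLLM source checkout with test dependencies installed;
# "tests/basic_correctness" comes from the listing, the -q flag and
# -k keyword expression are illustrative choices, not taken from CI.
import sys

import pytest

if __name__ == "__main__":
    exit_code = pytest.main([
        "tests/basic_correctness",  # suite path from the listing above
        "-q",                        # quiet output
    ])
    sys.exit(exit_code)
```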