| Name | Last commit | Last commit date |
| --- | --- | --- |
| async_engine | [Frontend] Move chat utils (#6602) | 2024-07-21 08:38:17 +08:00 |
| basic_correctness | [Bugfix][CI/Build][Hardware][AMD] Fix AMD tests, add HF cache, update CK FA, add partially supported model notes (#6543) | 2024-07-20 09:39:07 -07:00 |
| core | [Core] Support dynamically loading Lora adapter from HuggingFace (#6234) | 2024-07-22 15:42:40 -07:00 |
| distributed | [Bugfix] fix flashinfer cudagraph capture for PP (#6708) | 2024-07-24 01:49:44 +00:00 |
| engine | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| entrypoints | [Bugfix] Fix encoding_format in examples/openai_embedding_client.py (#6755) | 2024-07-24 22:48:07 -07:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| lora | [Core] Support dynamically loading Lora adapter from HuggingFace (#6234) | 2024-07-22 15:42:40 -07:00 |
| metrics | [Bugfix] StatLoggers: cache spec decode metrics when they get collected. (#6645) | 2024-07-23 23:05:05 +00:00 |
| model_executor | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| models | [Model] Adding support for MiniCPM-V (#4087) | 2024-07-24 20:59:30 -07:00 |
| multimodal | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| prefix_caching | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [bitsandbytes]: support read bnb pre-quantized model (#5753) | 2024-07-23 23:45:09 +00:00 |
| samplers | [Bugfix] Make spec. decode respect per-request seed. (#6034) | 2024-07-18 19:22:08 -07:00 |
| spec_decode | [Bugfix] Fix speculative decode seeded test (#6743) | 2024-07-24 08:58:31 -07:00 |
| tensorizer_loader | [Doc][CI/Build] Update docs and tests to use vllm serve (#6431) | 2024-07-17 07:43:21 +00:00 |
| tokenization | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| tracing | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| worker | [Bugfix] Fix decode tokens w. CUDA graph (#6757) | 2024-07-24 22:33:56 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Model] Adding support for MiniCPM-V (#4087) | 2024-07-24 20:59:30 -07:00 |
| test_cache_block_hashing.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_config.py | [Bugfix] Bump transformers to 4.43.2 (#6752) | 2024-07-24 13:22:16 -07:00 |
| test_embedded_commit.py | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
| test_inputs.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00 |
| test_logger.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_logits_processor.py | [CORE] Quantized lm-head Framework (#4442) | 2024-07-02 22:25:17 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_sequence.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| test_sharded_state_loader.py | [CI] Upgrade codespell version. (#5381) | 2024-06-12 10:06:14 -07:00 |
| test_utils.py | [CI/Build] Add unit testing for FlexibleArgumentParser (#5798) | 2024-06-25 12:18:03 -07:00 |
| utils.py | [ci][test] add correctness test for cpu offloading (#6549) | 2024-07-18 23:41:06 +00:00 |