| Name | Last commit message | Last commit date |
| --- | --- | --- |
| async_engine | [Bugfix] Abort requests when the connection to /v1/completions is interrupted (#4363) | 2024-04-27 09:48:37 -07:00 |
| basic_correctness | [Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large & Add tests for preemption (#4451) | 2024-05-01 19:24:13 -07:00 |
| core | [Core] Enable prefix caching with block manager v2 enabled (#4142) | 2024-05-01 11:20:32 -07:00 |
| distributed | [Core][Distributed] enable multiple tp group (#4512) | 2024-05-02 04:28:21 +00:00 |
| engine | [Core] Add multiproc_worker_utils for multiprocessing-based workers (#4357) | 2024-05-01 18:41:59 +00:00 |
| entrypoints | [Bugfix] Add validation for seed (#4529) | 2024-05-01 19:31:22 +00:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Bugfix][Kernel] allow non-power-of-two head sizes in prefix prefill (#4128) | 2024-04-18 00:51:28 -07:00 |
| lora | [Kernel] Full Tensor Parallelism for LoRA Layers (#3524) | 2024-04-27 00:03:48 -07:00 |
| metrics | [CI]Add regression tests to ensure the async engine generates metrics (#4524) | 2024-05-01 19:57:12 -07:00 |
| model_executor | [Core] Support offline use of local cache for models (#4374) | 2024-04-27 09:59:55 -07:00 |
| models | [Misc]Add customized information for models (#4132) | 2024-04-30 21:18:14 -07:00 |
| prefix_caching | [Core][Bugfix]Refactor block manager for better testability (#3492) | 2024-03-27 23:59:28 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [Kernel] Marlin Expansion: Support AutoGPTQ Models with Marlin (#3922) | 2024-04-29 09:35:34 -07:00 |
| samplers | [Test] Add ignore_eos test (#4519) | 2024-05-01 08:45:42 -04:00 |
| spec_decode | [Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large & Add tests for preemption (#4451) | 2024-05-01 19:24:13 -07:00 |
| tensorizer_loader | [Core][Distributed] use cpu group to broadcast metadata in cpu (#4444) | 2024-04-29 13:52:22 -07:00 |
| tokenization | [Bugfix] Fix parameter name in get_tokenizer (#4107) | 2024-04-25 19:10:48 -07:00 |
| worker | [Core][Distributed] use cpu group to broadcast metadata in cpu (#4444) | 2024-04-29 13:52:22 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large & Add tests for preemption (#4451) | 2024-05-01 19:24:13 -07:00 |
| test_cache_block_hashing.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_config.py | [Core] Refactor model loading code (#4097) | 2024-04-16 11:34:39 -07:00 |
| test_logger.py | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| test_logits_processor.py | [Core] Refactoring sampler and support prompt logprob for chunked prefill (#4309) | 2024-04-26 13:02:02 +00:00 |
| test_regression.py | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_sequence.py | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00 |