| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention | [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001) | 2024-08-05 16:00:01 -04:00 |
| core | [Kernel][Misc] dynamo support for ScalarType (#7594) | 2024-08-16 13:59:49 -07:00 |
| cpu | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| cutlass_extensions | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |
| moe | [Kernel] Expand MoE weight loading + Add Fused Marlin MoE Kernel (#7527) | 2024-08-21 16:17:10 -07:00 |
| prepare_inputs | [Core] draft_model_runner: Implement prepare_inputs on GPU for advance_step (#6338) | 2024-07-17 14:30:28 -07:00 |
| quantization | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |
| activation_kernels.cu | [Model] Port over CLIPVisionModel for VLMs (#5591) | 2024-06-20 11:52:09 +00:00 |
| cache_kernels.cu | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| cache.h | Add fp8 support to reshape_and_cache_flash (#6667) | 2024-07-24 18:36:52 +00:00 |
| cuda_compat.h | [Kernel][ROCm][AMD] enable fused topk_softmax kernel for moe layer (#4927) | 2024-06-02 14:13:26 -07:00 |
| cuda_utils_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| cuda_utils.h | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |
| custom_all_reduce_test.cu | [CI/Build] Enforce style for C++ and CUDA code with clang-format (#4722) | 2024-05-22 07:18:41 +00:00 |
| custom_all_reduce.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| custom_all_reduce.cuh | [CI/Build] Enforce style for C++ and CUDA code with clang-format (#4722) | 2024-05-22 07:18:41 +00:00 |
| dispatch_utils.h | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| layernorm_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| moe_align_block_size_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| ops.h | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |
| pos_encoding_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| reduction_utils.cuh | [Kernel] Dynamic Per-Token Activation Quantization (#5037) | 2024-06-07 09:36:26 -07:00 |
| torch_bindings.cpp | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |