cutlass/tools/library/src/reference
Manish Gupta 7d8317a63e
Support for Mixed Input TensorOp (#1084)
* Passing warp-level mixed input F16*(S8/U8) tests

* passing device-level mixed input F16*(S8/U8) tests

* add to profiler - I8 (111 TFLOPs), U8 (123 TFLOPs)

* fast numeric conversions (I8 = 132 TFLOPs, U8 = 148 TFLOPs)

* Speedup reference compilation (REVERT THIS COMMIT)

* wider_add.u32_packed_sub.f16x2 (I8 = 132 TFLOP/s, U8 = 170 TFLOP/s)

* Improve s8->f16 cvt and support bf16*u8 @158 TFLOPs

* BF16 * S8 (142 TFLOPs)

* Handle mixed-input upcast on OperandA (Support [S8|U8]*[F16|BF16])

* rename OpMultiplyAddMixedInput to OpMultiplyAddMixedInputUpcast

* Add device-level test and profiler support for upcast on operand A

* Move shfl before the cvt and reduce #shfls by 1/2

* fix smem_usage calculation for mixed_input types

* uncomment the stuff (getting ready for merge)

* profiler changes and mixed-input reference

* mixed input references are in a new file

* use platform instead of std

* comments and typo only

* Use CreateGemmOperator and delete CreateMixedInputGemmOperator

* copyright for new files

* rebase follow-up
2023-09-27 11:18:30 -04:00
conv2d.cu New updates for 2.11 (#775) 2023-01-20 16:32:57 -05:00
conv3d.cu New updates for 2.11 (#775) 2023-01-20 16:32:57 -05:00
conv_reference_operation.h CUTLASS 3.1 (#915) 2023-04-14 23:19:34 -04:00
gemm_e4m3a_e4m3out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_e4m3a_e5m2out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_e5m2a_e4m3out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_e5m2a_e5m2out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_fp8in_bf16out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_fp8in_fp16out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_fp8in_fp32out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_fp32out.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_fp_mixed_input.cu Support for Mixed Input TensorOp (#1084) 2023-09-27 11:18:30 -04:00
gemm_fp_other.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_int4.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_int8_canonical.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_int8_interleaved_32.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_int8_interleaved_64.cu Shard gemm reference templates into multiple TUs for parallel compilation (#1043) 2023-08-30 16:46:30 -04:00
gemm_reference_operation.h CUTLASS 3.1 (#915) 2023-04-14 23:19:34 -04:00
initialize_reference_operations.cu Support for Mixed Input TensorOp (#1084) 2023-09-27 11:18:30 -04:00