* Split apart gemm reference templates into multiple TUs for parallel compilation
* remove old files
* better balancing of ref kernels across TUs
* remove 3 newly added refcheck kernels and some unnecessary fp8 library instances to reduce lib size
* remove auto fp8 kernels
* remove some redundant kernels
* Correct typos in comments
Correct code comments on the type of generated distribution. Improve the Gaussian RNG to take advantage of the Box-Muller method
* Inline Box Muller
Added an inline function for the Box-Muller algorithm and updated code comments to be more concise
* Update tensor_fill.h
* Update tensor_fill.h
* small changes to pass tests
Co-authored-by: Haicheng Wu <haichengw@nvidia.com>
* Remove redundant <fstream> includes
* Fix fstream in examples/
* Fix <fstream> in test/
* Use consistent order for <fstream> (always after <iostream>)
* Remove an unneeded include in a file where std::ofstream usage is commented out
Co-authored-by: Ivan Komarov <dfyz@yandex-team.ru>
`CUDA_PERROR_EXIT` can lead to incorrect usage (see e.g. [this description](https://www.cs.technion.ac.il/users/yechiel/c++-faq/macros-with-if.html)) because it contains an incomplete `if` statement. Consider:
```c++
if (condition)
CUDA_PERROR_EXIT(cudaFree(x))
else
free(x);
```
The author of the code forgot to add a semicolon after the macro. In that case, the `else` binds to the `if` inside the macro definition, producing code the author did not intend or expect. If the author does use a semicolon, the code will not compile, which is awkward.
The change adds a `do while` around the `if`, which always requires a semicolon.
This PR also adds the text of the failing expression to the printed error message.
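The `do { ... } while (0)` idiom described above can be sketched as follows. This is an illustrative stand-in, not the actual `CUDA_PERROR_EXIT` definition: a plain `int` status replaces the `cudaError_t` check so the pattern is self-contained, and the macro name is hypothetical.

```cpp
#include <cstdio>
#include <cstdlib>

// The do { ... } while (0) wrapper makes the macro a single statement that
// *requires* a trailing semicolon, so it composes safely with if/else.
// #expr stringizes the failing expression into the error message.
#define CHECK_PERROR_EXIT(expr)                                             \
  do {                                                                      \
    int status_ = (expr);                                                   \
    if (status_ != 0) {                                                     \
      std::fprintf(stderr, "'%s' failed with status %d\n", #expr, status_); \
      std::exit(EXIT_FAILURE);                                              \
    }                                                                       \
  } while (0)
```

With this shape, `if (condition) CHECK_PERROR_EXIT(f()); else g();` parses exactly as written, and omitting the semicolon is a compile error rather than a silent rebinding of the `else`.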
* Support parallel split K mode for profiling
Signed-off-by: Peter Han <fujun.han@iluvatar.ai>
* Parallel Split K support
1. find gemm kernel by preference key
2. swap M and N for the reduction kernel
Signed-off-by: Peter Han <fujun.han@iluvatar.ai>
* parallel splitk for fp16 gemm
* add one missing file
Co-authored-by: Haicheng Wu <haichengw@nvidia.com>
CUTLASS 2.7
* Mainloop fusion for GEMM: summation over A or B
* Strided DGRAD (optimized iterators)
* Half-precision GELU_taylor activation functions
  * Use these when accumulation and epilogue compute types are all `cutlass::half_t`
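For reference, the tanh-based GELU approximation commonly used under the "GELU_taylor" name can be sketched in float as below; this is a hedged illustration of the formula, whereas the library instantiates the functor for `cutlass::half_t` in the epilogue.

```cpp
#include <cmath>

// Tanh approximation of GELU:
//   GELU(x) ~= 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
// Shown in float for clarity; constant names are illustrative.
inline float gelu_taylor(float x) {
  const float k0 = 0.7978845608028654f;  // sqrt(2 / pi)
  const float k1 = 0.044715f;
  return 0.5f * x * (1.0f + std::tanh(k0 * (x + k1 * x * x * x)));
}
```

The approximation avoids the error function, which is why it is attractive for reduced-precision epilogues.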
* Tuning and bug fixes to fused GEMM + GEMM example
* Support for smaller-than-128b aligned Convolutions: see examples
* Caching of results to accelerate Convolution unit tests
  * Can be enabled or disabled by running `cmake .. -DCUTLASS_TEST_ENABLE_CACHED_RESULTS=OFF`
* Corrections and bug fixes reported by the CUTLASS community
  * Thank you for filing these issues!
Co-authored-by: Haicheng Wu <haichengw@nvidia.com>, Manish Gupta <manigupta@nvidia.com>, Dustyn Blasig <dblasig@nvidia.com>, Andrew Kerr <akerr@nvidia.com>