
CUTLASS 2.0

Substantially refactored for
- Better performance, particularly for native Turing Tensor Cores
- Robust and durable templates spanning the design space
- Encapsulated functionality embodying modern C++11 programming techniques
- Optimized containers and data types for efficient, generic, portable device code

Updates to:
- Quick start guide
- Documentation
- Utilities
- CUTLASS Profiler

Native Turing Tensor Cores
- Efficient GEMM kernels targeting Turing Tensor Cores
- Mixed-precision floating point, 8-bit integer, 4-bit integer, and binarized operands

Coverage of existing CUTLASS functionality:
- GEMM kernels targeting CUDA and Tensor Cores in NVIDIA GPUs
- Volta Tensor Cores through native mma.sync and through the WMMA API
- Optimizations such as parallel reductions, threadblock rasterization, and intra-threadblock reductions
- Batched GEMM operations
- Complex-valued GEMMs

Note: this commit and all that follow require a host compiler supporting C++11 or greater.
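The refactored GEMM functionality above is exposed through device-level templates such as `cutlass::gemm::device::Gemm`. The sketch below is a minimal single-precision example assuming column-major layouts and default kernel selection, closely following the pattern shown in the quick start guide; the `run_gemm` helper and its parameter names are illustrative, not part of the library.

```cpp
#include "cutlass/gemm/device/gemm.h"

// Single-precision GEMM with column-major A, B, and C/D operands.
// Default template parameters select a generic kernel configuration.
using Gemm = cutlass::gemm::device::Gemm<
    float, cutlass::layout::ColumnMajor,   // A
    float, cutlass::layout::ColumnMajor,   // B
    float, cutlass::layout::ColumnMajor>;  // C and D

// Hypothetical helper: computes D = alpha * A * B + beta * C in place of C.
cutlass::Status run_gemm(int M, int N, int K,
                         float alpha, float const *A, int lda,
                         float const *B, int ldb,
                         float beta, float *C, int ldc) {
  Gemm gemm_op;

  // Arguments: problem size, tensor refs for A, B, C, and D (here D aliases C),
  // and the epilogue scalars {alpha, beta}.
  return gemm_op({{M, N, K},
                  {A, lda},
                  {B, ldb},
                  {C, ldc},
                  {C, ldc},
                  {alpha, beta}});
}
```

Targeting Turing or Volta Tensor Cores follows the same structure but supplies additional template arguments (element types, architecture tag, and tile shapes); see the quick start guide and the CUTLASS Profiler for the supported configurations.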
// Doxygen-generated search index: destructor entries for
// cutlass::device_memory::allocation, cutlass::HostTensor,
// cutlass::library::Operation, and cutlass::platform::unique_ptr.
var searchData=
[
  ['_7eallocation',['~allocation',['../structcutlass_1_1device__memory_1_1allocation.html#af205dd59859566d6fab5ac3eea8de7bf',1,'cutlass::device_memory::allocation']]],
  ['_7ehosttensor',['~HostTensor',['../classcutlass_1_1HostTensor.html#a068d76dabce39c48b617ee7fe8d7edb8',1,'cutlass::HostTensor']]],
  ['_7eoperation',['~Operation',['../classcutlass_1_1library_1_1Operation.html#a45fb566b6e6eb3a91f731188446d48f3',1,'cutlass::library::Operation']]],
  ['_7eunique_5fptr',['~unique_ptr',['../classcutlass_1_1platform_1_1unique__ptr.html#a8902399dac4ab64f08f909f2ad9d4bcf',1,'cutlass::platform::unique_ptr']]]
];