# Installation
## Installing from source
Installing from source requires the latest CUDA Toolkit whose major.minor version matches that of the installed CUDA Python package.
Prior to installing the CUTLASS Python interface, one may optionally set the following environment variables:
* `CUTLASS_PATH`: the path to the cloned CUTLASS repository
* `CUDA_INSTALL_PATH`: the path to the CUDA installation
If these environment variables are not set, the installation process will infer them to be the following:
* `CUTLASS_PATH`: either one directory level above the current directory (i.e., `$(pwd)/..`) if installed locally, or the `source` directory of the location in which `cutlass_library` was installed
* `CUDA_INSTALL_PATH`: the directory holding `bin/nvcc` for the first version of `nvcc` on `$PATH` (i.e., `which nvcc | awk -F'/bin/nvcc' '{print $1}'`)
**NOTE:** The version of `cuda-python` installed must match the CUDA version in `CUDA_INSTALL_PATH`.
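For example, the variables can be set explicitly before installation; the paths below are illustrative assumptions, and the final line demonstrates the inference rule for `CUDA_INSTALL_PATH` on a sample `nvcc` path:

```shell
# Illustrative paths (assumptions, not defaults) -- adjust to your system
export CUTLASS_PATH="$HOME/cutlass"
export CUDA_INSTALL_PATH="/usr/local/cuda"

# The inference strips '/bin/nvcc' from the path of the first nvcc on $PATH;
# shown here with a sample path in place of `which nvcc`:
echo /usr/local/cuda-12.1/bin/nvcc | awk -F'/bin/nvcc' '{print $1}'
# -> /usr/local/cuda-12.1
```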
### Installing a developer-mode package
The CUTLASS Python interface can currently be installed by navigating to the root of the CUTLASS directory and running:
```bash
pip install .
```
If you would like to be able to make changes to the CUTLASS Python interface and have them reflected when using the interface, perform:
```bash
pip install -e .
```
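After either installation method, a quick import check confirms the package is visible to the interpreter; the top-level module name `cutlass` is assumed here based on the interface's package name:

```shell
# Sanity check: confirm the package imports and show where it was installed
# (assumes the interface's top-level module is named 'cutlass')
python -c "import cutlass; print(cutlass.__file__)"
```

With a `pip install -e .` (editable) install, the printed path points back into the cloned repository rather than into `site-packages`, which is how source edits take effect without reinstalling.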
## Docker
We recommend using the CUTLASS Python interface via an [NGC PyTorch Docker container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch):
```bash
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.08-py3
```
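Inside the running container, the interface can then be installed following the steps above; the clone location `/workspace/cutlass` is an assumption, not a path mandated by the container:

```shell
# Inside the container: clone CUTLASS and install the Python interface
# (the clone destination /workspace/cutlass is an illustrative assumption)
git clone https://github.com/NVIDIA/cutlass.git /workspace/cutlass
cd /workspace/cutlass
pip install .
```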