Rapids

https://docs.rapids.ai/ 

https://rapids.ai/ 

Run remotely

https://colab.research.google.com/drive/13sspqiEZwso4NYTbsflpPyNFaVAAxUgr (pip rapids install)

https://colab.research.google.com/drive/1TAAi_szMfWqRfHVfjGSqnGVLr_ztzUM9 (conda rapids install)

https://studiolab.sagemaker.aws/ with the following setup:

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh

$ bash Miniconda3-latest-Linux-x86_64.sh

$ conda create --solver=libmamba -n rapids-24.02 -c rapidsai -c conda-forge -c nvidia rapids=24.02 python=3.10 cuda-version=12.0 --yes
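
After activating the environment (conda activate rapids-24.02), a minimal sanity check that cuDF sees the GPU; this is just a quick sketch, not part of the official setup:

import cudf

# small GPU DataFrame; if this runs, the RAPIDS install and GPU driver are working
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [0.1, 0.2, 0.3]})
print(cudf.__version__, gdf["a"].sum())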

Installation

Rapids local install

Conda

https://rapids.ai/start.html 

conda config --show channel_priority

conda config --set channel_priority flexible

latest stable

conda create --solver=libmamba -n rapids-24.02 -c rapidsai -c conda-forge -c nvidia rapids=24.02 python=3.10 cuda-version=12.0 [--no-channel-priority]


conda create -n rapids-24.02 python=3.10

conda install --solver=libmamba -c rapidsai -c conda-forge -c nvidia rapids=24.02 python=3.10 cuda-version=12.0 [--no-channel-priority]

latest stable with other libraries

conda create --solver=libmamba -n rapids-24.02-ext -c rapidsai -c conda-forge -c nvidia rapids=24.02 python=3.10 cuda-version=12.0 jupyterlab dask-sql dash graphistry xarray-spatial s3fs xarray zarr nx-cugraph --no-channel-priority --yes

mamba create -n rapids-24.02-ext -c rapidsai -c conda-forge -c nvidia rapids=24.02 python=3.10 cuda-version=12.0 jupyterlab dask-sql dash graphistry xarray-spatial s3fs xarray zarr nx-cugraph --no-channel-priority --yes
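
With nx-cugraph in the environment, NetworkX can dispatch supported algorithms to the GPU via the backend keyword (requires networkx >= 3.2); a minimal sketch, where the graph and algorithm choice are just examples:

import networkx as nx

# any NetworkX graph works; karate_club_graph is a tiny built-in example
G = nx.karate_club_graph()

# run PageRank through the nx-cugraph GPU backend
pr = nx.pagerank(G, backend="cugraph")
print(sorted(pr, key=pr.get, reverse=True)[:3])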

cuda11.8 (e.g. DLAMI)

mamba create -n rapids-24.02 -c rapidsai -c conda-forge -c nvidia rapids=24.02 python=3.10 cuda-version=11.8


nightly (older example, rapids 0.15)

conda create -n rapids python=3.8

conda activate rapids 

conda install -c rapidsai-nightly -c nvidia -c conda-forge -c defaults rapids=0.15 python=3.8 cudatoolkit=10.2

Pip

(the pip route is the same --extra-index-url=https://pypi.nvidia.com install shown in the Google Colab and Kaggle sections below)

Rapids cloud install

Google Colab

!pip install uv

!uv pip install --system --extra-index-url=https://pypi.nvidia.com \

    cudf-cu12==24.2.* dask-cudf-cu12==24.2.* cuml-cu12==24.2.* \

    cugraph-cu12==24.2.* cuspatial-cu12==24.2.* cuproj-cu12==24.2.* \

    cuxfilter-cu12==24.2.* cucim-cu12==24.2.* pylibraft-cu12==24.2.* \

    raft-dask-cu12==24.2.*
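
A quick check in a new Colab cell that the cu12 wheels import and run on the Colab GPU; just a sketch:

import cudf, cuml

print(cudf.__version__, cuml.__version__)
print(cudf.Series([1, 2, 3]).mean())   # executes on the GPU if the install succeeded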

https://colab.research.google.com/drive/17ErB0szXa0mn1aGGJQPmInqgCi0Pi7EU?usp=sharing 

Kaggle

remove the preinstalled cuDF/cuML/RAPIDS files and packages from /opt/conda first, then install the pip wheels:

!find /opt/conda \( -name "cudf*" -o -name "libcudf*" -o -name "cuml*" -o -name "libcuml*" \

                   -o -name "cugraph*" -o -name "libcugraph*" -o -name "raft*" -o -name "libraft*" \

                   -o -name "pylibraft*" -o -name "libkvikio*" -o -name "*dask*" -o -name "rmm*"\

                   -o -name "librmm*" \) -exec rm -rf {} \; 2>/dev/null

!pip uninstall -y cudf cuml dask-cudf cugraph cupy cupy-cuda12x

!pip install --extra-index-url=https://pypi.nvidia.com \

    cudf-cu12==24.2.* dask-cudf-cu12==24.2.* cuml-cu12==24.2.* \

    cugraph-cu12==24.2.* cuspatial-cu12==24.2.* cuproj-cu12==24.2.* \

    cuxfilter-cu12==24.2.* cucim-cu12==24.2.* pylibraft-cu12==24.2.* \

    raft-dask-cu12==24.2.*
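
The Kaggle notebook linked below covers cudf.pandas, the accelerator that lets existing pandas code run on the GPU; a minimal sketch for a notebook cell (the DataFrame is just an example):

%load_ext cudf.pandas
import pandas as pd

# plain pandas API, transparently accelerated by cuDF where supported
df = pd.DataFrame({"x": range(1000), "y": range(1000)})
print(df.groupby("x").y.sum().head())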

https://www.kaggle.com/code/premsagar/rapids-cudf-pandas-on-kaggle 

EC2

https://docs.rapids.ai/deployment/stable/cloud/aws/ec2/ 

For the AMI, search for "nvidia" in the AWS Marketplace AMIs and choose "NVIDIA GPU-Optimized AMI"; for the instance type, e.g. "g5.2xlarge"



Cloud multi-node, multi-GPU

from dask_kubernetes import KubeCluster

cluster = KubeCluster.from_yaml("spec.yaml")

helm install rapids ...
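
Once a cluster exists (via either route above), a sketch of attaching a Dask client and running dask-cudf on it; the scale count, bucket path, and column names are hypothetical:

from dask.distributed import Client
import dask_cudf

client = Client(cluster)   # attach to the KubeCluster created above
cluster.scale(4)           # request 4 GPU workers (example count)

# hypothetical data location and columns
ddf = dask_cudf.read_csv("s3://my-bucket/data/*.csv")
print(ddf.groupby("key")["value"].mean().compute())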


Utils

See CUDA version

nvidia-smi
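
The CUDA driver and runtime versions can also be read from Python via CuPy (shipped with RAPIDS); a small sketch:

import cupy

# versions are reported as integers, e.g. 12020 for CUDA 12.2
print("driver:", cupy.cuda.runtime.driverGetVersion())
print("runtime:", cupy.cuda.runtime.runtimeGetVersion())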

See GPUs

from cudf._cuda.gpu import getDeviceCount  # internal cudf module (older releases); see the CuPy alternative below

print(getDeviceCount())
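
The cudf._cuda.gpu path above is internal and may not exist in newer releases; an alternative sketch using CuPy to enumerate the visible GPUs:

import cupy

n = cupy.cuda.runtime.getDeviceCount()
for i in range(n):
    props = cupy.cuda.runtime.getDeviceProperties(i)
    print(i, props["name"].decode())   # device index and name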