CUDA GPU

Install the latest CUDA toolkit

PyTorch

channels

  pytorch (yes it is a channel)

  conda-forge

  

install

  pytorch  (use both the pytorch and conda-forge channels)

  cuda100  (channel: pytorch)

  cudatoolkit
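The channel and package list above can be combined into a single conda command. A sketch, assuming a CUDA 10.0 setup; the exact version pins depend on your driver and card:

```shell
# Optionally create an env first, e.g.: conda create -n torch-gpu python=3.7
# Install PyTorch with CUDA support from the pytorch and conda-forge channels.
# cuda100 / cudatoolkit=10.0 assume CUDA 10.0; adjust to match your driver.
conda install pytorch cuda100 cudatoolkit=10.0 -c pytorch -c conda-forge
```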

NOTE: Conda ships its own torch and cuda packages, which sometimes don't work well. e.g. on an RTX 2080 I hit a weird "RuntimeError: CuDNN error: CUDNN_STATUS_EXECUTION_FAILED"

The community suggests "pip install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp37-cp37m-linux_x86_64.whl", using the torch wheel that matches your graphics card.

  

From Python, CUDA should now be available:

  >>> import torch

  >>> torch.cuda.is_available()

      True

  >>> gpu = torch.device("cuda:0")

  >>> cpu = torch.device("cpu")

  >>> torch.cuda.get_device_name(gpu)  # check that the graphics card shows up correctly

      'GeForce RTX 2080'

  >>> var = torch.randn(2, 2)

  >>> var = var.to(gpu)  # note: to() is not in-place on a tensor; it returns a new tensor, so reassign

  >>> model.to(gpu)      # however, if it's an nn.Module (e.g. a model), to() moves all tensors within it to the GPU in place; no need to do model = model.to(gpu)

  >>> print(var)         # this should show the device as cuda:0

  >>> var.is_cuda        # this should return True

  >>> var.to(cpu).detach().numpy()  # move back to CPU, detach from the graph, and convert to numpy
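Putting the calls above together, a minimal device-agnostic sketch that falls back to the CPU when no GPU is present (the nn.Linear model here is a stand-in for your own):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(2, 2)
model.to(device)                      # nn.Module.to() moves parameters in place

var = torch.randn(2, 2)
var = var.to(device)                  # Tensor.to() is not in-place; reassign

out = model(var)
arr = out.to("cpu").detach().numpy()  # back to CPU, off the graph, into numpy
print(arr.shape)                      # (2, 2)
```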

XGBoost

  The pre-built binary wheel file already has CUDA support.

  Download the wheel file from PyPI:

    * xgboost-{version}-py2.py3-none-manylinux1_x86_64.whl  (Linux)

    * xgboost-{version}-py2.py3-none-win_amd64.whl  (Windows)

  pip install xgboost-{version}-py2.py3-none-win_amd64.whl

  In the xgboost script, set the tree_method parameter to use the GPU:

    tree_method = 'gpu_hist'

    gpu_id = 0