CCE AI Suite (NVIDIA GPU)

Introduction

NVIDIA GPU is a device management add-on that enables containers to use NVIDIA GPUs. This add-on must be installed before GPU nodes in a cluster can be used.

Notes and Constraints

  • The driver to be downloaded must be a .run file.

  • Only NVIDIA Tesla drivers are supported, not GRID drivers.

  • When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the address validity.

  • The gpu-beta add-on only downloads the driver and executes the installation script. The add-on status indicates only how the add-on itself is running, not whether the driver was installed successfully.

  • CCE does not guarantee compatibility between the GPU driver version and the CUDA library version of your application. You need to verify the compatibility yourself.

  • If a custom OS image already has a GPU driver installed, CCE cannot ensure that the driver is compatible with other GPU components, such as the monitoring components used in CCE.
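The constraints above on the driver link (a .run file at an address that CCE does not validate) can be pre-checked before installation. The following is a minimal sketch; the default URL is only an example, and the actual link depends on the driver version you need:

```shell
#!/bin/sh
# Pre-check a driver download link before installing the add-on.
# CCE does not verify the address, so checking it up front avoids a
# failed driver installation later. The default URL is illustrative.
DRIVER_URL="${1:-https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run}"

# The driver to be downloaded must be a .run file.
case "$DRIVER_URL" in
  *.run) echo "OK: link points to a .run file" ;;
  *)     echo "ERROR: the driver link must end in .run" >&2
         exit 1 ;;
esac

# Verify that the address is reachable (public links require an EIP on
# each GPU node); -I fetches only the response headers.
if command -v curl >/dev/null 2>&1; then
  if curl -sfIL --max-time 5 "$DRIVER_URL" >/dev/null; then
    echo "OK: driver link is reachable"
  else
    echo "WARNING: driver link is not reachable from this host" >&2
  fi
fi
```

Run the script on a GPU node (or any host with the same network access) with the real driver link as the first argument.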

Installing the Add-on

  1. Log in to the CCE console and click the cluster name to access the cluster console. Choose Add-ons in the navigation pane, locate CCE AI Suite (NVIDIA GPU) on the right, and click Install.

  2. Configure the add-on parameters.

    • NVIDIA Driver: Enter the link for downloading the NVIDIA driver. All GPU nodes in the cluster will use this driver.

      Important

      • If the download link is a public network address, for example, https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run, bind an EIP to each GPU node. For details about how to obtain the driver link, see Obtaining the Driver Link from Public Network.

      • If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. For details about how to obtain the driver link, see Obtaining the Driver Link from OBS.

      • Ensure that the NVIDIA driver version matches the GPU node.

      • After the driver version is changed, restart the node for the change to take effect.

    • Driver Selection: If you do not want all GPU nodes in a cluster to use the same driver, CCE allows you to install a different GPU driver for each node pool.

      Note

      • The add-on installs the driver of the version specified for the node pool. The driver takes effect only on nodes newly added to the node pool.

      • After the driver version is updated, it takes effect on nodes newly added to the node pool. Existing nodes must be restarted for the change to take effect.

  3. Click Install.

    Note

    If the add-on is uninstalled, GPU pods newly scheduled to the nodes cannot run properly, but GPU pods already running on the nodes will not be affected.

Verifying the Add-on

After the add-on is installed, run the nvidia-smi command on the GPU node and in a container that uses GPU resources to verify that the GPU device and driver are available.

  • GPU node:

    # If the add-on version is earlier than 2.0.0, run the following command:
    cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi
    
    # If the add-on version is 2.0.0 or later, the driver installation path has changed. Run the following command:
    cd /usr/local/nvidia/bin && ./nvidia-smi
    
  • Container:

    cd /usr/local/nvidia/bin && ./nvidia-smi
    

If GPU information is returned, the GPU device is available and the driver was installed successfully.
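To verify GPU scheduling end to end, you can also create a test pod that requests a GPU and runs nvidia-smi. This is a minimal sketch; the pod name and CUDA base image are illustrative assumptions, and the image's CUDA version must be compatible with the installed driver:

```shell
#!/bin/sh
# Write a manifest for a one-off GPU test pod. The pod name and image
# tag are illustrative, not values mandated by CCE.
cat > gpu-test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda-test
    image: nvidia/cuda:11.4.3-base-ubuntu20.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # schedules the pod onto a GPU node
EOF
echo "Manifest written to gpu-test-pod.yaml"
# Apply and inspect with:
#   kubectl apply -f gpu-test-pod.yaml
#   kubectl logs gpu-test
```

If the pod completes and its log shows the nvidia-smi table, GPU resources are schedulable from within containers.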


Components

Table 1 Add-on components

Component                 Description                                          Resource Type
------------------------  ---------------------------------------------------  -------------
nvidia-driver-installer   Used for installing an NVIDIA driver on GPU nodes.   DaemonSet
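You can confirm that the nvidia-driver-installer component is deployed by querying its DaemonSet with kubectl. This is a sketch; the kube-system namespace is an assumption and may differ in your cluster:

```shell
#!/bin/sh
# Check whether the nvidia-driver-installer DaemonSet is deployed.
# The kube-system namespace is an assumption; adjust if needed.
if command -v kubectl >/dev/null 2>&1; then
  MSG=$(kubectl get daemonset nvidia-driver-installer -n kube-system 2>&1 \
        || echo "nvidia-driver-installer DaemonSet not found")
else
  MSG="kubectl is not available on this host"
fi
echo "$MSG"
```

The DESIRED and READY counts of the DaemonSet should match the number of GPU nodes in the cluster.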

Change History

Table 2 Release history

Add-on Version   Supported Cluster Versions           New Feature
--------------   ----------------------------------   ------------------------------------------------------------
2.6.4            v1.28, v1.29                         Updated the isolation logic of GPU cards.
2.6.1            v1.28, v1.29                         Upgraded the base images of the add-on.
2.5.6            v1.28                                Fixed an issue that occurred during driver installation.
2.5.4            v1.28                                Clusters 1.28 are supported.
2.0.69           v1.21, v1.23, v1.25, v1.27           Upgraded the base images of the add-on.
2.0.48           v1.21, v1.23, v1.25, v1.27           Fixed an issue that occurred during driver installation.
2.0.46           v1.21, v1.23, v1.25, v1.27           Supported NVIDIA driver 535; non-root users can use xGPUs;
                                                      optimized startup logic.
1.2.28           v1.19, v1.21, v1.23, v1.25           Optimized the automatic mounting of the GPU driver directory.
1.2.20           v1.19, v1.21, v1.23, v1.25           Set the add-on alias to gpu.
1.2.15           v1.15, v1.17, v1.19, v1.21, v1.23    CCE clusters 1.23 are supported.
1.2.9            v1.15, v1.17, v1.19, v1.21           CCE clusters 1.21 are supported.