Using DSS Through a Dynamic PV

CCE allows you to specify a StorageClass to automatically create a DSS disk and the corresponding PV. This function is applicable when no underlying storage volume is available.

Prerequisites

  • The Everest add-on has been installed in the cluster.

  • A DSS storage pool is available in the AZ where the workload is to run.

  • To create resources using kubectl, kubectl has been configured to access the cluster.

Notes and Constraints

  • DSS disks cannot be attached across AZs and cannot be used by multiple workloads, multiple pods of the same workload, or multiple tasks. Data sharing between nodes in a CCE cluster is not supported even for a shared disk: if a DSS disk is attached to multiple nodes, I/O conflicts and data cache conflicts may occur. Therefore, set the number of pods to 1 when creating a Deployment that uses a DSS disk.

  • If an HPA policy is used to scale out a workload that has a DSS disk attached, the new pods cannot start because the DSS disk cannot be attached to them.
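
  For example, before attaching a DSS volume, you can confirm that a workload is configured with a single pod. A minimal check, assuming a Deployment named web-dss in the default namespace (both names are placeholders):

    # Print the configured replica count; for a workload using a DSS disk it should be 1.
    kubectl get deployment web-dss -n default -o jsonpath='{.spec.replicas}'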

Automatically Creating a DSS Volume on the Console

  1. Log in to the CCE console and click the cluster name to access the cluster console.

  2. Dynamically create a PVC and PV.

    1. Choose Storage in the navigation pane. In the right pane, click the PVCs tab. Click Create PVC in the upper right corner. In the dialog box displayed, configure PVC parameters.

      • PVC Type: In this example, select DSS.

      • PVC Name: Enter the PVC name, which must be unique in a namespace.

      • Creation Method:
        • If no underlying storage is available, select Dynamically provision to create a PVC, a PV, and the underlying storage on the console in a cascading manner.
        • If underlying storage is available, create a PV or use an existing PV to statically create a PVC. For details, see Using DSS Through a Static PV.
        In this example, select Dynamically provision.

      • Storage Classes: The storage class for DSS disks is csi-disk-dss.

      • (Optional) Storage Volume Name Prefix: Available only when the cluster version is v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later, and Everest v2.4.15 or later is installed in the cluster. This parameter specifies the name prefix of the automatically created underlying storage. The actual underlying storage name is in the format "PV name prefix + PVC UID"; if this parameter is left blank, the default prefix pvc is used. For example, if the prefix is set to test, the actual underlying storage name is test-{UID}.

      • DSS Pool: Select an existing DSS pool.

      • Capacity (GiB): Capacity of the requested storage volume.

      • Access Mode: DSS volumes support only ReadWriteOnce, indicating that a storage volume can be mounted to one node in read/write mode.

      • Encryption: Configure whether to encrypt the underlying storage. If you select Enabled (key), an encryption key must be configured.

      • Resource Tag: You can add resource tags to classify resources. This is supported only when the Everest version in the cluster is 2.1.39 or later. You can create predefined tags on the TMS console; predefined tags are available to all resources that support tags and improve the efficiency of tag creation and resource migration. CCE automatically creates the system tags CCE-Cluster-ID={Cluster ID}, CCE-Cluster-Name={Cluster name}, and CCE-Namespace={Namespace name}, which cannot be modified.
        Note: After a dynamic PV of the DSS type is created, the resource tags cannot be updated on the CCE console. To update the tags of a DSS disk, go to the DSS console.

    2. Click Create.

      You can choose Storage in the navigation pane and view the created PVC and PV on the PVCs and PVs tab pages, respectively.
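
      If you prefer kubectl, you can also list the dynamically created objects (assuming the PVC was created in the default namespace):

      # The PVC should show STATUS Bound, with the automatically created PV in the VOLUME column.
      kubectl get pvc -n default
      kubectl get pv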

  3. Create an application.

    1. Choose Workloads in the navigation pane. In the right pane, click the StatefulSets tab.

    2. Click Create Workload in the upper right corner. On the displayed page, click Data Storage in the Container Settings area and click Add Volume to select PVC.

      Mount and use storage volumes, as shown in Table 1. For details about other parameters, see Workloads.

      Table 1 Mounting a storage volume

      • PVC: Select an existing DSS volume. A DSS volume can be mounted to only one workload.

      • Mount Path: Enter a mount path, for example, /tmp. This parameter specifies the container path to which the data volume is mounted. Do not mount the volume to a system directory such as / or /var/run; otherwise, the container will malfunction. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, the existing files will be replaced, causing container startup failures or workload creation failures.
        NOTICE: If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container; otherwise, high-risk files on the host may be damaged.

      • Subpath: Enter a subpath of the storage volume to mount only that path into the container, so that different folders of the same storage volume can be used in a single pod. For example, tmp indicates that data in the container's mount path is stored in the tmp folder of the storage volume. If this parameter is left blank, the root path is used by default. (See the YAML sketch below.)

      • Permission:
        • Read-only: You can only read the data in the mounted volume.
        • Read-write: You can modify the data volume mounted to the path. Newly written data will not be migrated if the container is migrated, which may cause data loss.

      In this example, the disk is mounted to the /data path of the container. The container data generated in this path is stored in the DSS disk.

      Note

      A non-shared DSS disk can be attached to only one workload pod. If there are multiple pods, extra pods cannot start properly. Ensure that the number of workload pods is 1 if a DSS disk is attached.

      If multiple workload pods are needed, create a StatefulSet and dynamically mount a PV to each pod. For details, see Dynamically Mounting a DSS Disk to a StatefulSet.
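
      As a reference for the Subpath parameter, subpath mounting corresponds to the subPath field of a container's volumeMounts entry in the workload manifest. A minimal sketch, assuming a volume named pvc-disk-dss mounted at /data with subpath tmp (all names are illustrative):

      volumeMounts:
      - name: pvc-disk-dss   # Volume name defined in the pod's volumes field
        mountPath: /data     # Container path where the volume is mounted
        subPath: tmp         # Only the tmp folder of the storage volume is visible at /data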

    3. After the configuration, click Create Workload.

      After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence.

Automatically Creating a DSS Volume Through kubectl

  1. Use kubectl to access the cluster.

  2. Use StorageClass to dynamically create a PVC and PV.

    1. Create the pvc-dss-auto.yaml file.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-dss-auto
        namespace: default
        annotations:
          everest.io/disk-volume-type: SAS    # Disk type
          everest.io/csi.dedicated-storage-id: <dss_id>     # ID of the DSS storage pool
          everest.io/crypt-key-id: <your_key_id>    # (Optional) Encryption key ID. Mandatory for an encrypted disk.
          everest.io/disk-volume-tags: '{"key1":"value1","key2":"value2"}' # (Optional) Custom resource tags
          everest.io/csi.volume-name-prefix: test  # (Optional) PV name prefix of the automatically created underlying storage
        labels:
          failure-domain.beta.kubernetes.io/region: <your_region>   # Region of the node where the application is to be deployed
          failure-domain.beta.kubernetes.io/zone: <your_zone>       # AZ of the node where the application is to be deployed
      spec:
        accessModes:
        - ReadWriteOnce               # The value must be ReadWriteOnce for DSS.
        resources:
          requests:
      storage: 10Gi             # Disk capacity in GiB, ranging from 1 to 32768
        storageClassName: csi-disk-dss    # StorageClass is DSS.
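
      Before applying the manifest, you can check that the csi-disk-dss storage class exists in the cluster:

      kubectl get storageclass csi-disk-dss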
      
      Table 2 Key parameters

      • failure-domain.beta.kubernetes.io/region (mandatory): Region where the cluster is located.

      • failure-domain.beta.kubernetes.io/zone (mandatory): AZ where the disk is created. It must be the same as the AZ planned for the workload.

      • everest.io/disk-volume-type (mandatory): Disk type, in uppercase letters.
        • SAS: high I/O
        • SSD: ultra-high I/O

      • everest.io/csi.dedicated-storage-id (mandatory): ID of the DSS storage pool where the DSS disk resides. To obtain the ID, log in to the Cloud Server Console, choose Dedicated Distributed Storage Service > Storage Pools in the navigation pane, click the name of the target storage pool, and copy the pool ID from the details page.

      • everest.io/crypt-key-id (optional): Mandatory when the DSS disk is encrypted. Enter the ID of the encryption key selected during disk creation. To obtain the key ID, log in to the DEW console, locate the target key, and copy its ID.

      • everest.io/disk-volume-tags (optional): Resource tags used to classify resources. Supported only when the Everest version in the cluster is 2.1.39 or later. You can create predefined tags on the TMS console; predefined tags are available to all resources that support tags and improve the efficiency of tag creation and resource migration. CCE automatically creates the system tags CCE-Cluster-ID={Cluster ID}, CCE-Cluster-Name={Cluster name}, and CCE-Namespace={Namespace name}, which cannot be modified.

      • everest.io/csi.volume-name-prefix (optional): Available only when the cluster version is v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later, and Everest v2.4.15 or later is installed in the cluster. This parameter specifies the name prefix of the automatically created underlying storage; the actual underlying storage name is in the format "PV name prefix + PVC UID". If it is left blank, the default prefix pvc is used. Enter 1 to 26 characters; only lowercase letters, digits, and hyphens (-) are allowed, and the prefix cannot start or end with a hyphen. For example, if the prefix is set to test, the actual underlying storage name is test-{UID}.

      • storage (mandatory): Requested PVC capacity, in Gi. The value ranges from 1 to 32768.

      • storageClassName (mandatory): The storage class for DSS disks is csi-disk-dss.

    2. Run the following command to create a PVC:

      kubectl apply -f pvc-dss-auto.yaml
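
      After the PVC is created, you can confirm that dynamic provisioning succeeded by checking that the PVC is bound. The output below is illustrative; the PV name and capacity depend on your configuration:

      kubectl get pvc pvc-dss-auto

      Expected output (abbreviated):

      NAME           STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS
      pvc-dss-auto   Bound    pvc-<uid>   10Gi       RWO            csi-disk-dss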
      
  3. Create an application.

    1. Create a file named web-dss-auto.yaml. In this example, the disk is mounted to the /data path.

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: web-dss-auto
        namespace: default
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: web-dss-auto
        serviceName: web-dss-auto   # Headless Service name
        template:
          metadata:
            labels:
              app: web-dss-auto
          spec:
            containers:
            - name: container-1
              image: nginx:latest
              volumeMounts:
              - name: pvc-disk-dss    # Volume name, which must be the same as the volume name in the volumes field
                mountPath: /data  # Location where the storage volume is mounted
            imagePullSecrets:
              - name: default-secret
            volumes:
              - name: pvc-disk-dss    # Volume name, which can be customized
                persistentVolumeClaim:
                  claimName: pvc-dss-auto    # Name of the created PVC
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: web-dss-auto   # Headless Service name
        namespace: default
        labels:
          app: web-dss-auto
      spec:
        selector:
          app: web-dss-auto
        clusterIP: None
        ports:
          - name: web-dss-auto
            targetPort: 80
            nodePort: 0
            port: 80
            protocol: TCP
        type: ClusterIP
      
    2. Run the following command to create a workload to which the DSS volume is mounted:

      kubectl apply -f web-dss-auto.yaml
      

      After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence.

Verifying Data Persistence

  1. View the deployed application and DSS volume files.

    1. Run the following command to view the created pod:

      kubectl get pod | grep web-dss-auto
      

      Expected output:

      web-dss-auto-0                  1/1     Running   0               38s
      
    2. Run the following command to check whether the DSS volume has been mounted to the /data path:

      kubectl exec web-dss-auto-0 -- df | grep data
      

      Expected output:

      /dev/sdc              10255636     36888  10202364   0% /data
      
    3. Run the following command to check the files in the /data path:

      kubectl exec web-dss-auto-0 -- ls /data
      

      Expected output:

      lost+found
      
  2. Run the following command to create a file named static in the /data path:

    kubectl exec web-dss-auto-0 -- touch /data/static
    
  3. Run the following command to check the files in the /data path:

    kubectl exec web-dss-auto-0 -- ls /data
    

    Expected output:

    lost+found
    static
    
  4. Run the following command to delete the pod named web-dss-auto-0:

    kubectl delete pod web-dss-auto-0
    

    Expected output:

    pod "web-dss-auto-0" deleted
    
  5. After the deletion, the StatefulSet controller automatically creates a new pod with the same name. Run the following command to check whether the files in the /data path are retained:

    kubectl exec web-dss-auto-0 -- ls /data
    

    Expected output:

    lost+found
    static
    

    The static file is retained, indicating that the data in the DSS volume can be stored persistently.
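
    When the example resources are no longer needed, you can delete them using the manifests created in this section. Note that if the reclaim policy of the storage class is Delete, deleting the PVC also deletes the PV and the underlying DSS disk, so back up any required data first:

    # Delete the example workload and its headless Service.
    kubectl delete -f web-dss-auto.yaml

    # Delete the PVC; the dynamically created PV and DSS disk may be deleted along with it.
    kubectl delete -f pvc-dss-auto.yaml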