The HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider ("CSP") for Kubernetes is the reference implementation for the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass
and VolumeSnapshotClass
parameters but also contains important array setup requirements.
Important
+For a successful deployment, it's important to understand the array platform requirements found within the CSI driver (compute node OS and Kubernetes versions) and the CSP.
+Seealso
+There's a brief introduction on how to use HPE Nimble Storage with the HPE CSI Driver in the Video Gallery. It also applies broadly to HPE Alletra 5000/6000.
+Always check the corresponding CSI driver version in compatibility and support for the required array Operating System ("OS") version for a particular release of the driver. If a certain feature is gated against a certain version of the array OS it will be called out where applicable.
+Tip
+The documentation reflected here always corresponds to the latest supported version and may contain references to future features and capabilities.
+How to deploy an HPE storage array is beyond the scope of this document. Please refer to HPE InfoSight for further reading.
+Important
+The HPE Nimble Storage Linux Toolkit (NLT) is not compatible with the HPE CSI Driver for Kubernetes. Do not install NLT on Kubernetes compute nodes. It may be installed on Kubernetes control plane nodes if they use iSCSI or FC storage from the array.
+The CSP requires access to a user with either poweruser
or the administrator
role. It's recommended to use the poweruser
role for least privilege practices.
Tip
+It's highly recommended to deploy a multitenant setup.
+In array OS 6.0.0 and newer it's possible to create separate tenants using the tenantadmin
CLI to assign folders to a tenant. This creates a secure and logical separation of storage resources between Kubernetes clusters.
No special configuration is needed on the Kubernetes cluster when using a tenant account or a regular user account. It's important to understand from a provisioning perspective that if the tenant account being used has been assigned multiple folders, the CSP will pick the folder with the most space available. If this is not desirable and a 1:1 StorageClass
to Folder mapping is needed, the "folder" parameter needs to be called out in the StorageClass
.
For reference, as of array OS 6.0.0, this is the tenantadmin
command synopsis.
$ tenantadmin --help
+Usage: tenantadmin [options]
+Manage Tenants.
+
+Available options are:
+ --help Program help.
+
+ --list List Tenants.
+
+ --info name Tenant info.
+
+ --add tenant_name Add a tenant.
+ --folders folders List of folder paths (comma separated
+ pool_name:fqn) the tenant will be able to
+ access (mandatory).
+
+ --remove name Remove a tenant.
+
+ --add_folder tenant_name Add a folder path for tenant access.
+ --name folder_name Name of the folder path (pool_name:fqn) to
+ be added (mandatory).
+
+ --remove_folder tenant_name Remove a folder path from tenant access.
+ --name folder_name Name of the folder path (pool_name:fqn) to
+ be removed (mandatory).
+
+ --passwd Change tenant's login password.
+ --tenant name Change a specific tenant's login password
+ (mandatory).
+
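For illustration, a hedged sketch of creating a tenant and inspecting it, following the synopsis above; the tenant name, pool and folder path are assumptions and must match objects that already exist on the array:
tenantadmin --add k8s-tenant-1 --folders default:/k8s-cluster-1
tenantadmin --info k8s-tenant-1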
+Caution
+The tenantadmin
command may only be run by local array OS administrators. LDAP or Active Directory accounts, regardless of role, are not supported.
tenantadmin
CLI. Some features may be limited and restricted in a multitenant deployment, such as arbitrarily importing volumes from folders on the array the tenant isn't a user of. Here are a few less obvious limitations.
+Seealso
+An in-depth tutorial on how to use multitenancy and the tenantadmin
CLI is available on HPE Developer: Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble Storage. There's also a high level overview of multitenancy available as a lightboard presentation on YouTube.
Consult the compatibility and support table for supported array OS versions. CSI and CSP specific limitations are listed below.
+group --edit --iscsi_enabled yes
on the Array OS CLI.A StorageClass
is used to provision or clone a persistent volume. It can also be used to import an existing volume or clone a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.
Backward compatibility with the HPE Nimble Storage FlexVolume driver is being honored to a certain degree. StorageClass
API objects need to be rewritten and parameters need to be updated regardless.
Please see using the HPE CSI Driver for base StorageClass
examples. All parameters enumerated reflect the current version and may contain unannounced features and capabilities.
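For orientation, a minimal StorageClass sketch that combines a few of the common parameters described below. The Secret "hpe-backend" in the "hpe-storage" namespace follows the examples later in this document, and the folder name is an assumption:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  accessProtocol: iscsi
  destroyOnDelete: "true"
  folder: my-k8s-folder
reclaimPolicy: Delete
allowVolumeExpansion: true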
Note
+These are optional parameters unless specified.
+These parameters are mutable between a parent volume and creating a clone from a snapshot.
+Parameter | +String | +Description | +
---|---|---|
accessProtocol1 | +Text | +The access protocol to use when accessing the persistent volume ("fc" or "iscsi"). Defaults to "iscsi" when unspecified. | +
destroyOnDelete | +Boolean | +Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to "false", which means volumes need to be pruned manually. | +
limitIops | +Integer | +The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). | +
limitMbps | +Integer | +The MB/s throughput limit for the volume between 1 and 4294967294, or -1 for unlimited (default). | +
description | +Text | +Text to be added to the volume's description on the array. Empty string by default. | +
performancePolicy2 | +Text | +The name of the performance policy to assign to the volume. Default example performance policies include "Backup Repository", "Exchange 2003 data store", "Exchange 2007 data store", "Exchange 2010 data store", "Exchange log", "Oracle OLTP", "Other Workloads", "SharePoint", "SQL Server", "SQL Server 2012", "SQL Server Logs". Defaults to the "default" performance policy. | +
protectionTemplate4 | +Text | +The name of the protection template to assign to the volume. Default examples of protection templates include "Retain-30Daily", "Retain-48Hourly-30Daily-52Weekly", and "Retain-90Daily". | +
folder | +Text | +The name of the folder in which to place the volume. Defaults to the root of the "default" pool. | +
thick | +Boolean | +Indicates that the volume should be thick provisioned. Defaults to "false" | +
dedupeEnabled3 | +Boolean | +Indicates that the volume should enable deduplication. Defaults to "true" when available. | +
syncOnDetach | +Boolean | +Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Defaults to "false". | +
+ Restrictions applicable when using the CSI volume mutator:
+
1 = Parameter is immutable and can't be altered after provisioning/cloning.
+
 2 = Performance policies may only be mutated between performance policies with the same block size.
+
3 = Deduplication may only be mutated within the same performance policy application category and block size.
+
4 = This parameter was removed in HPE CSI Driver 1.4.0 and replaced with VolumeGroupClasses
.
+
Note
+Performance Policies, Folders and Protection Templates are array OS specific constructs that can be created on the array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight.
+These parameters are immutable for both volumes and clones once created; clones will inherit parent attributes.
+Parameter | +String | +Description | +
---|---|---|
encrypted | +Boolean | +Indicates that the volume should be encrypted. Defaults to "false". | +
pool | +Text | +The name of the pool in which to place the volume. Defaults to the "default" pool. | +
Cloning supports two modes of cloning. Either use cloneOf
and reference a PVC in the current namespace or use importVolAsClone
and reference an array volume name to clone and import to Kubernetes.
Parameter | +String | +Description | +
---|---|---|
cloneOf | +Text | +The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. |
+
importVolAsClone | +Text | +The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. |
+
snapshot | +Text | +The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. | +
createSnapshot | +Boolean | +Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. |
+
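As a hedged sketch, a StorageClass that clones an existing PVC with cloneOf; the PVC name is hypothetical and the csi.storage.k8s.io/* Secret parameters shown in the base example are omitted for brevity:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-clone-my-pvc
provisioner: csi.hpe.com
parameters:
  accessProtocol: iscsi
  cloneOf: my-pvc
reclaimPolicy: Delete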
Importing volumes to Kubernetes requires the source array volume to be offline. In case of reverse replication, the upstream volume should be in offline state. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE CSI Driver.
+Parameter | +String | +Description | +
---|---|---|
importVolumeName | +Text | +The name of the array volume to import. | +
snapshot | +Text | +The name of the array snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. | +
takeover | +Boolean | +Indicates the current group will takeover ownership of the array volume and volume collection. This should be performed against a downstream replica. | +
reverseReplication | +Boolean | +Reverses the replication direction so that writes to the array volume are replicated back to the group where it was replicated from. | +
forceImport | +Boolean | +Forces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead. | +
Seealso
+In this HPE Developer blog post you'll learn how to use the import parameters to lift and transform applications from traditional infrastructure to Kubernetes using the HPE CSI Driver.
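A hedged sketch of a StorageClass that imports an offline array volume; the volume name is hypothetical and the csi.storage.k8s.io/* Secret parameters are omitted for brevity:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-import-my-volume
provisioner: csi.hpe.com
parameters:
  accessProtocol: iscsi
  importVolumeName: my-array-volume
  takeover: "false"
  reverseReplication: "false"
  forceImport: "false"
reclaimPolicy: Retain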
+These parameters are applicable only for Pod inline volumes and to be specified within Pod spec.
+Parameter | +String | +Description | +
---|---|---|
csi.storage.k8s.io/ephemeral | +Boolean | +Indicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to "true". | +
inline-volume-secret-name | +Text | +A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. | +
inline-volume-secret-namespace | +Text | +The namespace of inline-volume-secret-name for ephemeral inline volume. |
+
size | +Text | +The size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used. | +
accessProtocol | +Text | +Storage access protocol to use, "iscsi" or "fc". | +
Important
+All parameters are required for inline ephemeral volumes.
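To show how the parameters above fit together, a minimal Pod sketch declaring an inline ephemeral volume; the Secret name, namespace and size are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-inline
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: my-inline-volume
      mountPath: /data
  volumes:
  - name: my-inline-volume
    csi:
      driver: csi.hpe.com
      volumeAttributes:
        csi.storage.k8s.io/ephemeral: "true"
        inline-volume-secret-name: hpe-backend
        inline-volume-secret-namespace: hpe-storage
        size: "16Gi"
        accessProtocol: "iscsi"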
+If basic data protection is required and performed on the array, VolumeGroups
need to be created, even if it's just a single volume that needs data protection using snapshots and replication. Learn more about VolumeGroups
in the provisioning concepts documentation.
Parameter | +String | +Description | +
---|---|---|
description | +Text | +Text to be added to the volume collection description on the array. Empty by default. | +
protectionTemplate | +Text | +The name of the protection template to assign to the volume collection. Default examples of protection templates include "Retain-30Daily", "Retain-48Hourly-30Daily-52Weekly", and "Retain-90Daily". Empty by default, meaning no array snapshots are performed on the VolumeGroups . |
+
New feature
+VolumeGroupClasses
were introduced with version 1.4.0 of the CSI driver. Learn more in the Using section.
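A hedged VolumeGroupClass sketch using the parameters above, modeled on the VolumeGroupClass example shown later in this document; the Secret name, namespace and protection template are assumptions:
apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
  name: my-volume-group-class
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: "Volume group for my stateful workload"
  protectionTemplate: "Retain-48Hourly-30Daily-52Weekly"
  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage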
These parameters are for VolumeSnapshotClass
objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information.
How to use VolumeSnapshotClass
and VolumeSnapshot
objects is elaborated on in using CSI snapshots.
Parameter | +String | +Description | +
---|---|---|
description | +Text | +Text to be added to the snapshot's description on the array. | +
writable | +Boolean | +Indicates if the snapshot is writable on the array. Defaults to "false". | +
online | +Boolean | +Indicates if the snapshot is set to online on the array. Defaults to "false". | +
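For reference, a minimal VolumeSnapshotClass sketch using the parameters above; the Secret name and namespace are assumptions:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: "Snapshot created by the HPE CSI Driver"
  writable: "false"
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage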
Static provisioning of PVs
and PVCs
may be used when absolute control over physical volumes is required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass
.
Create a PV
referencing an existing 10GiB volume on the array, replace .spec.csi.volumeHandle
with the array volume ID.
Warning
+If a filesystem can't be detected on the device, a new filesystem will be created. If the volume contains data, make sure the data resides in a whole device filesystem.
+
apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: my-static-pv-1
+spec:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 10Gi
+ csi:
+ volumeHandle: <insert volume ID here>
+ driver: csi.hpe.com
+ fsType: xfs
+ volumeAttributes:
+ volumeAccessMode: mount
+ fsType: xfs
+ controllerPublishSecretRef:
+ name: hpe-backend
+ namespace: hpe-storage
+ nodePublishSecretRef:
+ name: hpe-backend
+ namespace: hpe-storage
+ controllerExpandSecretRef:
+ name: hpe-backend
+ namespace: hpe-storage
+ persistentVolumeReclaimPolicy: Retain
+ volumeMode: Filesystem
+
+Tip
+Remove .spec.csi.controllerExpandSecretRef
to disallow volume expansion.
Now, a user may claim the static PV
by creating a PVC
referencing the PV
name in .spec.volumeName
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ volumeName: my-static-pv-1
+ storageClassName: ""
+
+
+ The HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage Container Storage Provider (CSP) for Kubernetes is part of the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes.
+Note
+The HPE CSI Driver for Kubernetes is only compatible with HPE Alletra Storage MP running with block services, such as HPE GreenLake for Block Storage.
+Note
+For help getting started with deploying the HPE CSI Driver using HPE Alletra Storage MP, Alletra 9000, Primera or 3PAR storage, check out the tutorial over at HPE Developer.
+Check the corresponding CSI driver version in the compatibility and support table for the latest updates on supported Kubernetes version, orchestrators and host OS.
+ + +The HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Container Storage Provider requires the following TCP ports to be open inbound to the array from the Kubernetes cluster worker nodes running the HPE CSI Driver for Kubernetes.
+Port | +Protocol | +Description | +
---|---|---|
443 | +HTTPS | +WSAPI (HPE Alletra Storage MP, Alletra 9000/Primera) | +
8080 | +HTTPS | +WSAPI (HPE 3PAR) | +
22 | +SSH | +Array communication | +
The CSP requires access to a local user with either edit
or the super
role. It's recommended to use the edit
role for security best practices.
Note
+LDAP users are not supported by the CSP.
+Virtual Domains are not yet fully supported by the CSP. From HPE CSI Driver v2.5.0, it's possible to manually create the Kubernetes hosts connecting to storage within the Virtual Domain. Once the hosts have been created, deploy the CSI driver with the Helm chart using the "disableHostDeletion" parameter set to "true". The Virtual Domain user may create the hosts through the Virtual Domain if the "AllowDomainUsersAffectNoDomain" parameter is set to either "hostonly" or "yes" on the array.
+Note
+Remote Copy Groups managed by the CSP have not been tested with Virtual Domains at this time.
+A VLUN template enables the export of a virtual volume as a VLUN to hosts. For more information, see the HPE Primera OS Command Line Interface - Installation and Reference Guide.
+The CSP supports the following types of VLUN templates:
+Template | +Description | +
---|---|
Matched set | +The default VLUN template. The VLUN is visible to initiators with the host's WWNs only on the specified port(s). | +
Host sees | +The VLUN is visible to the initiators with any of the host's WWNs. | +
The boolean string "hostSeesVLUN" StorageClass
parameter controls which VLUN template to use.
Recommendation
+In most scenarios, "hostSeesVLUN" should be set to "true".
+To modify an existing PVC
, "hostSeesVLUN" needs to be specified with the "allowMutations" parameter along with adding the PVC
annotation "csi.hpe.com/hostSeesVLUN" with the string values of either "true" or "false". The HPE CSI Driver creates the VLUN template based upon the hostSeesVLUN
parameter during the volume publish operation. For the change to take effect, the Pod
will need to be scheduled on another node by either deleting the Pod
or draining the node.
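A hedged sketch of that workflow, assuming a StorageClass that already lists hostSeesVLUN under the "allowMutations" parameter and a PVC named "my-pvc":
# Annotate the PVC, then delete the Pod or drain the node so the volume is republished with the new VLUN template
kubectl annotate pvc my-pvc csi.hpe.com/hostSeesVLUN="true" --overwrite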
All parameters enumerated reflect the current version and may contain unannounced features and capabilities.
+Parameter | +Option | +Description | +
---|---|---|
accessProtocol (Required) | +fc or iscsi | +The access protocol to use when attaching the persistent volume. | +
cpg 1 | +Text | +The name of existing CPG to be used for volume provisioning. If the cpg parameter is not specified, the CSP will select a CPG available to the array. |
+
snapCpg 1 | +Text | +The name of the snapshot CPG to be used for volume provisioning. Needs to be set if any kind of VolumeSnapshots or PVC cloning parameters are used. |
+
compression 1 | +Boolean | +Indicates that the volume should be compressed. (3PAR only) | +
provisioningType 1 | +tpvv | +Default. Indicates Thin provisioned volume type. | +
+ | full 3 | +Indicates Full provisioned volume type. | +
+ | dedup 3 | +Indicates Thin Deduplication volume type. | +
+ | reduce 4 | +Indicates Data Reduction volume type. | +
hostSeesVLUN | +Boolean | +Enable "host sees" VLUN template. | +
importVolumeName | +Text | +Name of the volume to import. | +
importVolAsClone | +Text | +Name of the volume to clone and import. | +
cloneOf 2 | +Text | +Name of the PersistentVolumeClaim to clone. |
+
virtualCopyOf 2 | +Text | +Name of the PersistentVolumeClaim to snapshot. |
+
qosName | +Text | +Name of the volume set which has QoS rules applied. | +
remoteCopyGroup 1 | +Text | +Name of a new or existing Remote Copy group on the array. | +
replicationDevices | +Text | +Indicates name of custom resource of type hpereplicationdeviceinfos . |
+
allowBatchReplicatedVolumeCreation | +Boolean | +Enable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. During this process, the Remote Copy group is stopped and started once. |
+
oneRcgPerPvc | +Boolean | +Creates a dedicated Remote Copy group per persistent volume. | +
iscsiPortalIps | +Text | +Comma separated list of the array iSCSI port IPs. | +
fcPortsList | +Text | +Comma separated list of available FC ports. Example: "0:5:1,1:4:2,2:4:1,3:4:2" Default: Use all available ports. | +
+ Restrictions applicable when using the CSI volume mutator:
+
1 = Parameters that are editable after provisioning.
+
2 = Volumes with snapshots/clones can't be modified.
+
3 = HPE 3PAR only parameter
+
4 = HPE Primera/Alletra 9000 only parameter
+
Please see using the HPE CSI Driver for additional StorageClass
examples like CSI snapshots and clones.
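For orientation, a minimal StorageClass sketch combining a few of the parameters above; the Secret "hpe-backend" in the "hpe-storage" namespace follows the examples later in this document, and the CPG name is an assumption:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-primera-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  accessProtocol: fc
  cpg: SSD_r6
  provisioningType: tpvv
  hostSeesVLUN: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true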
Important
+The HPE CSI Driver allows the PersistentVolumeClaim
to override the StorageClass
parameters by annotating the PersistentVolumeClaim
. Please see Using PVC Overrides for more details.
Cloning supports two modes of cloning. Either use cloneOf
and reference a PersistentVolumeClaim
in the current namespace to clone or use importVolAsClone
and reference an array volume name to clone and import into the Kubernetes cluster. Volumes with clones are immutable once created.
Parameter | +Option | +Description | +
---|---|---|
cloneOf | +Text | +The name of the PersistentVolumeClaim to be cloned. cloneOf and importVolAsClone are mutually exclusive. |
+
importVolAsClone | +Text | +The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. |
+
accessProtocol | +fc or iscsi | +The access protocol to use when attaching the cloned volume. | +
Important
+• No other parameters are required in the StorageClass
while cloning outside of those parameters listed in the table above.
+• Cloning using above parameters is independent of snapshot CRD
availability on Kubernetes and it can be performed on any supported Kubernetes version.
+• Support for importVolAsClone
and cloneOf
is available from HPE CSI Driver 1.3.0+.
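A hedged sketch of a cloning StorageClass; the PVC name is hypothetical and the csi.storage.k8s.io/* Secret parameters are omitted for brevity:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-primera-clone
provisioner: csi.hpe.com
parameters:
  accessProtocol: fc
  cloneOf: my-existing-pvc
reclaimPolicy: Delete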
During the snapshotting process, any existing PersistentVolumeClaim
defined in the virtualCopyOf
parameter within a StorageClass
, will be snapped as PersistentVolumeClaim
and exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Volumes with snapshots are immutable once created.
Parameter | +Option | +Description | +
---|---|---|
accessProtocol | +fc or iscsi | +The access protocol to use when attaching the snapshot volume. | +
virtualCopyOf | +Text | +The name of existing PersistentVolumeClaim to be snapped |
+
Important
+• No other parameters are required in the StorageClass
when snapshotting a volume outside of those parameters listed in the table above.
+• Snapshotting using virtualCopyOf
is independent of snapshot CRD
availability on Kubernetes and it can be performed on any supported Kubernetes version.
+• Support for virtualCopyOf
is available from HPE CSI Driver 1.3.0+.
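Similarly, a hedged sketch of a snapshotting StorageClass; the PVC name is hypothetical and the Secret parameters are omitted for brevity:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-primera-snapshot
provisioner: csi.hpe.com
parameters:
  accessProtocol: fc
  virtualCopyOf: my-existing-pvc
reclaimPolicy: Delete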
During the import volume process, any legacy (non-container) volumes defined in the ImportVol parameter within a StorageClass
, will be renamed to match the PersistentVolumeClaim
that leverages the StorageClass
. The new volumes will be exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Note: All previous Access Control Records and Initiator Groups will be removed from the volume when it is imported.
Parameter | +Option | +Description | +
---|---|---|
accessProtocol | +fc or iscsi | +The access protocol to use when importing the volume. | +
importVolumeName | +Text | +The name of the array volume to import. | +
Important
+• No other parameters are required in the StorageClass
when importing a volume outside of those parameters listed in the table above.
+• Support for importVolumeName
is available from HPE CSI Driver 1.2.0+.
To enable replication within the HPE CSI Driver, the following steps must be completed:
+Secrets
for both primary and target arrays. Refer to Configuring Additional Storage Backends.StorageClass
.For a tutorial on how to enable replication, check out the blog Enabling Remote Copy using the HPE CSI Driver for Kubernetes on HPE Primera
+A Custom Resource Definition (CRD) of type hpereplicationdeviceinfos.storage.hpe.com
must be created to define the target array information. The CRD object name will be used to define the StorageClass
parameter replicationDevices. CRD mandatory parameters: targetCpg
, targetName
, targetSecret
and targetSecretNamespace
.
apiVersion: storage.hpe.com/v2
+kind: HPEReplicationDeviceInfo
+metadata:
+ name: r1
+spec:
+ target_array_details:
+ - targetCpg: <cpg_name>
+ targetSnapCpg: <snapcpg_name> #optional.
+ targetName: <target_array_name>
+ targetSecret: <target_secret_name>
+ targetSecretNamespace: hpe-storage
+
apiVersion: storage.hpe.com/v1
+kind: HPEReplicationDeviceInfo
+metadata:
+ name: r1
+spec:
+ target_array_details:
+ - targetCpg: <cpg_name>
+ targetSnapCpg: <snapcpg_name> #optional.
+ targetName: <target_array_name>
+ targetSecret: <target_secret_name>
+ targetSecretNamespace: hpe-storage
+
Important
+The HPE CSI Driver only supports Remote Copy Peer Persistence mode.
+These parameters are applicable only for replication. Both parameters are mandatory. If the Remote Copy volume group (RCG) name, as defined within the StorageClass
, does not exist on the array, then a new RCG will be created.
Parameter | +Option | +Description | +
---|---|---|
remoteCopyGroup | +Text | +Name of new or existing Remote Copy group 1 on the array. | +
replicationDevices | +Text | +Indicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD). |
+
allowBatchReplicatedVolumeCreation | +Boolean | +Enable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. (Optional) During this process, the Remote Copy group is stopped and started once. |
+
oneRcgPerPvc | +Boolean | +Creates a dedicated Remote Copy group per persistent volume. (Optional) | +
+ Remote Copy additional details:
+
1 = Existing RCG must have CPG and Copy CPG configured.
+
Link to HPE Primera OS: Configuring data replication using Remote Copy
+
Important
+Remote Copy groups (RCG) created by the HPE CSI driver 2.1 and later have the Auto synchronize and Auto recover policies applied.
To add or remove these policies from RCGs, modify the existing RCG using the SSMC or CLI with the following command:
Add
setrcopygroup pol auto_recover,auto_synchronize <group_name>
Remove
setrcopygroup pol no_auto_recover,no_auto_synchronize <group_name>
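Putting the pieces together, a hedged StorageClass sketch referencing the "r1" HPEReplicationDeviceInfo resource from the example above; the Remote Copy group name is an assumption and the csi.storage.k8s.io/* Secret parameters are omitted for brevity:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-replicated
provisioner: csi.hpe.com
parameters:
  accessProtocol: fc
  remoteCopyGroup: my-rcg
  replicationDevices: r1
reclaimPolicy: Delete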
To add a non-replicated volume to an existing Remote Copy group, allowMutations: description
at minimum must be defined within the StorageClass
. Refer to Remote Copy with Peer Persistence Replication for more details.
Edit the non-replicated PVC and annotate the following parameters:
+Parameter | +Option | +Description | +
---|---|---|
remoteCopyGroup | +Text | +Name of existing Remote Copy group. | +
oneRcgPerPvc | +Boolean | +Creates a dedicated Remote Copy group per persistent volume. (Optional) | +
replicationDevices | +Text | +Indicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD). |
+
Note
+remoteCopyGroup
and oneRcgPerPvc
parameters are mutually exclusive and cannot be added together when editing a PVC
.
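A hedged sketch of annotating such a PVC, assuming the csi.hpe.com/<parameter> annotation prefix used for PVC overrides, the "r1" HPEReplicationDeviceInfo resource from the example above and hypothetical PVC and Remote Copy group names:
kubectl annotate pvc my-pvc \
  csi.hpe.com/remoteCopyGroup=my-rcg \
  csi.hpe.com/replicationDevices=r1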
These parameters are for VolumeSnapshotClass
objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable.
How to use VolumeSnapshotClass
and VolumeSnapshot
objects is elaborated on in using CSI snapshots.
Parameter | +String | +Description | +
---|---|---|
read_only | +Boolean | +Indicates if the snapshot is writable on the array. | +
In the HPE CSI Driver version 1.4.0+, a volume set with QoS settings can be created dynamically using the QoS parameters for the VolumeGroupClass
. The following parameters are available for a VolumeGroup
on the array. Learn more about VolumeGroups
in the provisioning concepts documentation.
Parameter | +String | +Description | +
---|---|---|
description | +Text | +An identifier to describe the VolumeGroupClass . Example: "My VolumeGroupClass" |
+
priority | +Text | +The priority level for the target volume set. Example: "low", "normal", "high" | +
ioMinGoal | +Text | +IOPS minimum goal for the target volume set. Example: "300" | +
ioMaxLimit | +Text | +IOPS maximum limit for the target volume set. Example: "10000" | +
bwMinGoalKb | +Text | +Bandwidth minimum goal in kilobytes per second for the target volume set. Example: "300" | +
bwMaxLimitKb | +Text | +Bandwidth maximum limit in kilobytes per second for the target volume set. Example: "30000" | +
latencyGoal | +Text | +Latency goal in milliseconds (ms) or microseconds(us) for the target volume set. Example: "300ms" or "500us" | +
domain | +Text | +The array Virtual Domain, with which the volume group and related objects are associated with. Example: "sample_domain" | +
Important
+All QoS parameters are mandatory when creating a VolumeGroupClass
on the array.
Example:
+
apiVersion: storage.hpe.com/v1
+kind: VolumeGroupClass
+metadata:
+ name: my-volume-group-class
+provisioner: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+ description: "HPE CSI Driver for Kubernetes Volume Group"
+ csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
+ csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
+ priority: normal
+ ioMinGoal: "300"
+ ioMaxLimit: "10000"
+ bwMinGoalKb: "3000"
+ bwMaxLimitKb: "30000"
+ latencyGoal: "300ms"
+
+These parameters are for SnapshotGroupClass
objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable.
How to use VolumeSnapshotClass
and VolumeSnapshot
objects is elaborated on in using CSI snapshots.
Parameter | +String | +Description | +
---|---|---|
read_only | +Boolean | +Indicates if the snapshot is writable on the array. | +
Static provisioning of PVs
and PVCs
may be used when absolute control over physical volumes is required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass
.
The CSP expects a certain naming convention for PersistentVolumes
and Virtual Volumes on the array.
Persistent Volume: pvc-00000000-0000-0000-0000-000000000000
Virtual Volume: pvc-00000000-0000-0000-0000-000
Note
+The zeroes are used as examples. They can be replaced with any hexadecimal from 0
to f
. Establishing a scheme may be important if static provisioning is going to be the main method of providing persistent storage to workloads.
The following example uses the above scheme as a naming convention. Have a storage administrator rename the existing Virtual Volume on the array:
+
setvv -name pvc-00000000-0000-0000-0000-000 my-existing-virtual-volume
+
+Create a new HPEVolumeInfo
resource.
apiVersion: storage.hpe.com/v2
+kind: HPEVolumeInfo
+metadata:
+ name: pvc-00000000-0000-0000-0000-000000000000
+spec:
+ record:
+ Id: pvc-00000000-0000-0000-0000-000000000000
+ Name: pvc-00000000-0000-0000-0000-000
+ uuid: pvc-00000000-0000-0000-0000-000000000000
+
+Create a PV
referencing the HPEVolumeInfo
resource.
Warning
+If a filesystem can't be detected on the device, a new filesystem will be created. If the volume contains data, make sure the data resides in a whole device filesystem.
+
apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pvc-00000000-0000-0000-0000-000000000000
+spec:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 16Gi
+ csi:
+ volumeHandle: pvc-00000000-0000-0000-0000-000000000000
+ driver: csi.hpe.com
+ fsType: xfs
+ volumeAttributes:
+ volumeAccessMode: mount
+ fsType: xfs
+ controllerPublishSecretRef:
+ name: hpe-backend
+ namespace: hpe-storage
+ nodePublishSecretRef:
+ name: hpe-backend
+ namespace: hpe-storage
+ controllerExpandSecretRef:
+ name: hpe-backend
+ namespace: hpe-storage
+ persistentVolumeReclaimPolicy: Retain
+ volumeMode: Filesystem
+
+Tip
+Remove .spec.csi.controllerExpandSecretRef
to disallow volume expansion.
Now, a user may claim the static PV
by creating a PVC
referencing the PV
name in .spec.volumeName
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 16Gi
+ volumeName: my-static-pv-1
+ storageClassName: ""
+
+Please refer to the HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage CSP support statement.
+ +Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+The HPE Cloud Volumes CSP integrates seamlessly with the HPE Cloud Volumes Block service in the public cloud. The CSP abstracts the data management capabilities of the storage service for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass
and VolumeSnapshotClass
parameters but also contains important HPE Cloud Volumes Block configuration details.
Important
+The HPE Cloud Volumes CSP is currently in beta and available as a Tech Preview on Amazon EKS only. Please see the 1.5.0-beta Helm chart.
+Seealso
+There's a Tech Preview available in the Video Gallery on how to get started with the HPE Cloud Volumes CSP with the HPE CSI Driver.
+Always check the corresponding CSI driver version in compatibility and support for basic requirements (such as supported Kubernetes version and cloud instance OS). If a certain feature is gated against any particular cloud provider it will be called out where applicable.
+Hyperscaler | +Managed Kubernetes | +BYO Kubernetes | +Status | +
---|---|---|---|
Amazon Web Services | +Elastic Kubernetes Service (EKS) | +N/A | +Tech Preview | +
Microsoft Azure | +Azure Kubernetes Service (AKS) | +TBA | +TBA | +
Google Cloud | +Google Kubernetes Engine (GKE) | +TBA | +TBA | +
Additional hyperscaler support and BYO capabilities may become available in a future release of the CSP.
+Kubernetes compute nodes will need to have access to the cloud provider's metadata services. This varies by cloud provider and is taken care of automatically by the HPE Cloud Volume CSP. The provided values may be overridden in the StorageClass
, see common parameters for more information.
The HPE Cloud Volumes CSP may be deployed in the regions where the managed Kubernetes service control planes intersect with the HPE Cloud Volumes Block service.
+Region | +EKS | +Azure | +|
---|---|---|---|
Americas | +us-east-1, us-west-2 | +TBA | +TBA | +
Europe | +eu-west-1, eu-west-2 | +TBA | +TBA | +
Asia Pacific | +ap-northeast-1 | +TBA | +TBA | +
Consider this table a snapshot of a particular moment in time and consult with the respective hyperscalers and the HPE Cloud Volumes Block service for definitive availability.
+Note
+In regions where HPE Cloud Volumes provides services, such as us-west-1, but the cloud providers have no managed Kubernetes service, BYO Kubernetes will be the only available option once it becomes available as a supported feature of the CSP.
+Consult the compatibility and support table for generic limitations and requirements. CSI and CSP specific limitations with HPE Cloud Volumes Block is listed below.
+description
is ignored by the CSP.StorageClass
and in conjunction with Ephemeral Inline Volumes. Your "regionID" may only be found in the APIs. Join us on Slack if you're hitting this issue (it can be seen in the CSP logs).Tip
+While not a limitation, iSCSI CHAP is mandatory with HPE Cloud Volumes but does not need to be configured within the CSI driver. The CHAP credentials are queried through the REST APIs from the HPE Cloud Volumes account session and applied automatically during runtime.
+A StorageClass
is used to provision or clone an HPE Cloud Volumes Block-backed persistent volume. It can also be used to import an existing Cloud Volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.
Please see using the HPE CSI Driver for base StorageClass
examples. All parameters enumerated reflect the current version and may contain unannounced features and capabilities.
Note
+All parameters are optional unless documented as mandatory for a particular use case.
+These parameters are mutable between a parent volume and creating a clone from a snapshot.
+Parameter | +String | +Description | +
---|---|---|
destroyOnDelete | +Boolean | +Indicates the backing Cloud Volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to "false", which means volumes need to be pruned manually in the Cloud Volume service. | +
limitIops | +Integer | +The IOPS limit of the volume. The IOPS limit should be in the range 300 (default) to 20000. | +
performancePolicy1 | +Text | +The name of the performance policy to assign to the volume. Available performance policies: "Exchange", "Oracle", "SharePoint", "SQL", "Windows File Server". Defaults to "Other Workloads". | +
schedule | +Text | +Snapshot schedule to assign to the volumes. Available schedules: "hourly", "daily", "twicedaily", "weekly", "monthly", "none". Defaults to "daily". | +
retentionPolicy | +Integer | +Retention policy to assign to the schedule. The parameter must be paired properly with the schedule and its retentionPolicy. |
+
privateCloud1 | +Text | +Override the compute instance provided VPC/VNET. | +
existingCloudSubnet1 | +Text | +Override the compute instance provided subnet. | +
automatedConnection1 | +Boolean | +Override the HPE Cloud Volumes configured setting for connection automation. Connections between HPE Cloud Volumes and the desired VPC/VNET needs to be provisioned manually if set to "false". | +
+ Restrictions applicable when using the CSI volume mutator:
+
1 = Parameter is immutable and can't be altered after provisioning/cloning.
+
+These parameters are immutable for both volumes and clones once created; clones will inherit parent attributes.
+Parameter | +String | +Description | +
---|---|---|
volumeType | +Text | +Volume type, General Purpose Flash ("GPF") or Premium Flash ("PF"). Defaults to "PF" | +
These parameters are applicable only for Pod inline volumes and to be specified within Pod spec.
+Parameter | +String | +Description | +
---|---|---|
csi.storage.k8s.io/ephemeral | +Boolean | +Indicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to "true". | +
inline-volume-secret-name | +Text | +A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. | +
inline-volume-secret-namespace | +Text | +The namespace of inline-volume-secret-name for ephemeral inline volume. |
+
size | +Text | +The size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used. | +
Important
+All parameters are required for inline ephemeral volumes.
+Cloning supports two modes of cloning. Either use cloneOf
and reference a PVC in the current namespace or use importVolAsClone
and reference a Cloud Volume name to clone and import to Kubernetes.
Parameter | +String | +Description | +
---|---|---|
cloneOf | +Text | +The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. |
+
importVolAsClone | +Text | +The name of the Cloud Volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. |
+
snapshot | +Text | +The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. | +
createSnapshot | +Boolean | +Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. |
+
replStore | +Text | +Name of the Cloud Volume Replication Store to look for volumes, defaults to look outside of Replication Stores | +
Importing volumes to Kubernetes requires the source Cloud Volume to be disconnected.
+Parameter | +String | +Description | +
---|---|---|
importVolumeName | +Text | +The name of the Cloud Volume to import. | +
forceImport | +Boolean | +Allows import of volumes created on a different Kubernetes cluster other than the one importing the volume to. | +
These parameters are for VolumeSnapshotClass
objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information.
How to use VolumeSnapshotClass
and VolumeSnapshot
objects is elaborated on in using CSI snapshots.
Parameter | +String | +Description | +
---|---|---|
description | +Text | +Text to be added to the snapshot's description in the Cloud Volume service (optional) | +
HPE supports up to three minor releases. These releases are kept here for historic purposes.
+Release highlights:
+Upgrade considerations:
+Kubernetes | +1.23-1.261 | +
---|---|
Helm Chart | +v2.3.0 on ArtifactHub | +
Operators | +
+ v2.3.0 on OperatorHub + v2.3.0 via OpenShift console + |
+
Worker OS | +
+ RHEL2 7.x, 8.x, 9.x, RHCOS 4.10-4.12 + Ubuntu 16.04, 18.04, 20.04, 22.04 + SLES 15 SP2, SP3, SP4 + |
Platforms3 | +
+ Alletra OS 5000/6000 6.0.0.x - 6.1.1.x + Alletra OS 9000 9.3.x - 9.5.x + Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.1.x + Primera OS 4.3.x - 4.5.x + 3PAR OS 3.3.x + |
+
Data protocol | +Fibre Channel, iSCSI | +
Release notes | +v2.3.0 on GitHub | +
Blogs | ++ Support and security updates for HPE CSI Driver for Kubernetes (release blog) + | +
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+
Release highlights:
+Upgrade considerations:
+Kubernetes | +1.21-1.241 | +
---|---|
Helm Chart | +v2.2.0 on ArtifactHub | +
Operators | +
+ v2.2.1 on OperatorHub + v2.2.1 via OpenShift console + |
+
Worker OS | +
+ RHEL2 7.x & 8.x, RHCOS 4.8 & 4.10 + Ubuntu 16.04, 18.04 & 20.04 + SLES 15 SP2 + |
Platforms | +
+ Alletra OS 6000 6.0.0.x - 6.1.0.x + Alletra OS 9000 9.3.x - 9.5.x + Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.0.x + Primera OS 4.3.x - 4.5.x + 3PAR OS 3.3.x + |
+
Data protocol | +Fibre Channel, iSCSI | +
Release notes | +v2.2.0 on GitHub | +
Blogs | ++ Updates and Improvements to HPE CSI Driver for Kubernetes (release blog) + | +
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations.
 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+
Release highlights:
+Kubernetes | +1.20-1.231 | +
---|---|
Worker OS | +CentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ Alletra OS 6000 6.0.0.x + Alletra OS 9000 9.4.x + Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x + Primera OS 4.3.x, 4.4.x + 3PAR OS 3.3.2 + |
+
Release notes | +v2.1.1 on GitHub | +
+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +
+Release highlights:
+Kubernetes | +1.20-1.221 | +
---|---|
Worker OS | +CentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ Alletra OS 6000 6.0.0.x + Alletra OS 9000 9.3.x, 9.4.x + Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x + Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x, 4.4.x + 3PAR OS 3.3.1, 3.3.2 + |
+
Release notes | +v2.1.0 on GitHub | +
Blogs | +
+ HPE CSI Driver enhancements with monitoring and alerting (release blog) + Get started with Prometheus and Grafana and HPE Storage Array Exporter (tutorial) + |
+
+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +
+Release highlights:
+Kubernetes | +1.18-1.211 | +
---|---|
Worker OS | +CentOS and RHEL 7.x & 8.x, RHCOS 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP2 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ Alletra OS 6000 6.0.0.x + Alletra OS 9000 9.3.0 + Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x + Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x + 3PAR OS 3.3.1, 3.3.2 + |
+
Release notes | +v2.0.0 on GitHub | +
Blogs | +
+ HPE CSI Driver for Kubernetes now available for HPE Alletra (release blog) + Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble (tutorial) + Host-based Volume Encryption with HPE CSI Driver for Kubernetes (tutorial) + |
+
+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +
+Release highlights:
+Kubernetes | +1.17-1.201 | +
---|---|
Worker OS | +CentOS and RHEL 7.7 & 8.1, RHCOS 4.4 & 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP1 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ NimbleOS 5.0.10.0-x, 5.1.4.200-x, 5.2.1.0-x, 5.3.0.0-x, 5.3.1.0-x + 3PAR OS 3.3.1+ + Primera OS 4.0+ + |
+
Release notes | +v1.4.0 on GitHub | +
Blogs | +
+ HPE CSI Driver for Kubernetes v1.4.0 now available! (release blog) + Synchronized Volume Snapshots for Distributed Workloads on Kubernetes (tutorial) + |
+
+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +
+Release highlights:
+Kubernetes | +1.15-1.181 | +
---|---|
Worker OS | +CentOS 7.6, RHEL 7.6, RHCOS 4.3-4.4, Ubuntu 18.04, Ubuntu 20.04 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ NimbleOS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x + 3PAR OS 3.3.1 + Primera OS 4.0.0, 4.1.0, 4.2.02 + |
+
Release notes | +v1.3.0 on GitHub | +
Blogs | +
+ Around The Storage Block (release) + HPE DEV (Remote copy peer persistence tutorial) + HPE DEV (Introducing the volume mutator) + |
+
+ 1 = For HPE Ezmeral Container Platform and Rancher; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations.
+ 2 = Only FC is supported on Primera OS prior to 4.2.0.
+
Release highlights: Support for raw block volumes and inline ephemeral volumes. NFS Server Provisioner in Tech Preview (beta).
+Kubernetes | +1.14-1.18 | +
---|---|
Worker OS | +CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ NimbleOS 5.0.10.x, 5.1.3.1000-x, 5.1.4.200-x, 5.2.1.x + 3PAR OS 3.3.1 + Primera OS 4.0.0, 4.1.0 (FC only) + |
+
Release notes | +v1.2.0 on GitHub | +
Blogs | +Around The Storage Block (release) + HPE DEV (tutorial for raw block and inline volumes) + Around The Storage Block (NFS Server Provisioner) + HPE DEV (tutorial for NFS) + |
+
Release highlights: Support for HPE 3PAR and Primera Container Storage Provider.
+Kubernetes | +1.13-1.17 | +
---|---|
Worker OS | +CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +
+ NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x + 3PAR OS 3.3.1 + Primera OS 4.0.0, 4.1.0 (FC only) + |
+
Release notes | +N/A | +
Blogs | +HPE Storage Tech Insiders (release), HPE DEV (tutorial for "primera3par" CSP) | +
Release highlights: Broader ecosystem support, official support for CSI snapshots and volume resize.
+Kubernetes | +1.13-1.17 | +
---|---|
Worker OS | +CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | ++ NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x + | +
Release notes | +v1.1.0 on GitHub | +
Blogs | +HPE Storage Tech Insiders (release), HPE DEV (snapshots, clones, resize) | +
Release highlights: Initial GA release with support for Dynamic Provisioning.
+Kubernetes | +1.13-1.17 | +
---|---|
Worker OS | +CentOS 7.6, RHEL 7.6, Ubuntu 16.04, Ubuntu 18.04 + |
Data protocol | +Fibre Channel, iSCSI | +
Platforms | +NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x | +
Release notes | +v1.0.0 on GitHub | +
Blogs | +HPE Storage Tech Insiders (release), HPE DEV (architecture and introduction) | +
The HPE CSI Driver is deployed by using industry standard means, either a Helm chart or an Operator. An "advanced install" from object configuration files is provided as reference for partners, OEMs and users wanting to perform customizations and their own packaging or deployment methodologies.
+As different methods of installation are provided, it might not be too obvious which delivery vehicle is the right one.
+ +I have a... | +Then you need... | +
---|---|
Vanilla upstream Kubernetes cluster on a supported host OS. | +The Helm chart | +
Red Hat OpenShift 4.x cluster. | +The certified CSI operator for OpenShift | +
Supported environment with multiple backends. | +Helm chart with additional Secrets and StorageClasses | +
HPE Ezmeral Runtime Enterprise environment. | +The Helm chart | +
Operator Life-cycle Manager (OLM) environment. | +The CSI operator | +
Unsupported host OS/Kubernetes cluster and like to tinker. | +The advanced install | +
Supported platform in an air-gapped environment | +The Helm chart using the air-gapped procedure | +
Undecided?
+If it's not clear what you should use for your environment, the Helm chart is most likely the correct answer.
+Helm is the package manager for Kubernetes. Software is delivered in a format called a "chart". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG
file.
The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub. The chart only supports Helm 3 from version 1.3.0 of the HPE CSI Driver. In an effort to avoid duplicate documentation, please see the chart for instructions on how to deploy the CSI driver using Helm.
+In the event of deploying the HPE CSI Driver in a secure air-gapped environment, Helm is the recommended method. For the sake of completeness, it's also possible to follow the advanced install procedures and replace "quay.io" in the deployment manifests with the internal private registry location.
+Establish a working directory on a bastion Linux host that has HTTP access to the Internet, the private registry and the Kubernetes cluster where the CSI driver needs to be installed. The bastion host is assumed to have the docker
, helm
and curl
commands installed. It's also assumed throughout that the user executing docker
has logged in to the private registry and that pulling images from the private registry is allowed anonymously by the Kubernetes compute nodes.
Note
+Only the HPE CSI Driver 1.4.0 and later is supported using this methodology.
+Create a working directory and set environment variables referenced throughout the procedure. In this example, we'll use HPE CSI Driver v2.5.0 on Kubernetes 1.30. Available versions are found in the co-deployments GitHub repo.
+
mkdir hpe-csi-driver
+cd hpe-csi-driver
+export MY_REGISTRY=registry.enterprise.example.com
+export MY_CSI_DRIVER=2.5.0
+export MY_K8S=1.30
+
+Next, create a list with the CSI driver images. Copy and paste the entire text blob in one chunk.
+
curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/hpe-csi-k8s-${MY_K8S}.yaml \
+ https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/nimble-csp.yaml \
+ https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/3par-primera-csp.yaml \
+| grep image: | awk '{print $2}' | sort | uniq > images
+echo quay.io/hpestorage/nfs-provisioner:v3.0.5 >> images
+
+Important
+In HPE CSI Driver 2.4.2 and earlier the NFS Server Provisioner image is not automatically pulled from the private registry once installed. Use the "nfsProvisionerImage" parameter in the StorageClass
.
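For example, a hedged StorageClass parameters fragment pointing the NFS Server Provisioner at the private registry used in this procedure; the "nfsResources" parameter enabling the NFS resources and the exact image tag are assumptions:
parameters:
  nfsResources: "true"
  nfsProvisionerImage: registry.enterprise.example.com/hpestorage/nfs-provisioner:v3.0.5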
+The curl command above should not output anything. A list of images should now be in the file "images".
+Pull, tag and push the images to the private registry.
+
cat images | xargs -n 1 docker pull
+awk '{ print $1" "$1 }' images | sed -E -e "s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/" | xargs -n 2 docker tag
+sed -E -e "s/quay.io|registry.k8s.io/${MY_REGISTRY}/" images | xargs -n 1 docker push
+
+Tip
+Depending on what kind of private registry being used, the base repositories hpestorage
and sig-storage
might need to be created and given write access to the user pushing the images.
Next, install the chart as normal with the additional registry
parameter. This is an example, please refer to the Helm chart documentation on ArtifactHub.
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
+kubectl create ns hpe-storage
+
+Version 2.4.2 or earlier.
+
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage --version ${MY_CSI_DRIVER} --set registry=${MY_REGISTRY}
+
+For version 2.5.0 or newer, skip to the "Version 2.5.0 and newer" procedure below.
+Note
+If the client running helm
is in the air-gapped environment as well, the docs directory needs to be hosted on a web server in the air-gapped environment, and then use helm repo add hpe-storage https://my-web-server.internal/docs
above instead.
In version 2.5.0 and onwards, all images used by the HPE CSI Driver for Kubernetes Helm Chart are parameterized individually with the fully qualified URL.
+Use the procedure above to mirror the images to an internal registry. Once mirrored, replace the registry names in the reference values.yaml
file.
curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/values/csi-driver/v${MY_CSI_DRIVER}/values.yaml | sed -E -e "s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/g" > my-values.yaml
+
+Use the my-values.yaml
file to install the Helm Chart.
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \
+-n hpe-storage --version ${MY_CSI_DRIVER} \
+-f my-values.yaml
+
+The Operator pattern is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes.
+The official HPE CSI Operator for Kubernetes is hosted on OperatorHub.io. The CSI Operator images are hosted both on quay.io and officially certified containers in the Red Hat Ecosystem Catalog.
+The HPE CSI Operator for Kubernetes is a fully certified Operator for OpenShift. There are a few tweaks needed and there's a separate section for OpenShift.
+Follow the documentation from the respective upstream distributions on how to deploy an Operator. In most cases, the Operator Lifecycle Manager (OLM) needs to be installed separately (this does NOT apply to OpenShift 4 and later).
+Visit the documentation in the OLM GitHub repo to learn how to install OLM.
+Once OLM is operational, install the HPE CSI Operator.
+
kubectl create -f https://operatorhub.io/install/hpe-csi-operator.yaml
+
+The Operator will be installed in my-hpe-csi-operator
namespace. Watch it come up by inspecting the ClusterServiceVersion
(CSV).
kubectl get csv -n my-hpe-csi-operator
+
+Next, an HPECSIDriver
object needs to be instantiated. Create a file named hpe-csi-operator.yaml
, edit and apply (or copy the command from the top of the content).
# kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+ name: hpecsidriver-sample
+spec:
+ # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+ csp:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+ disable:
+ alletra6000: false
+ alletra9000: false
+ alletraStorageMP: false
+ nimble: false
+ primera: false
+ disableHostDeletion: false
+ disableNodeConfiguration: false
+ disableNodeConformance: false
+ disableNodeGetVolumeStats: false
+ disableNodeMonitor: false
+ imagePullPolicy: IfNotPresent
+ images:
+ csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
+ csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0
+ csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7
+ csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0
+ csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
+ csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
+ csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
+ csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
+ csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6
+ csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6
+ csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6
+ nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5
+ nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0
+ primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0
+ iscsi:
+ chapSecretName: ""
+ kubeletRootDir: /var/lib/kubelet
+ logLevel: info
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+
# kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+ name: hpecsidriver-sample
+spec:
+ # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ csp:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ disable:
+ alletra6000: false
+ alletra9000: false
+ alletraStorageMP: false
+ nimble: false
+ primera: false
+ disableNodeConfiguration: false
+ disableNodeConformance: false
+ disableNodeGetVolumeStats: false
+ imagePullPolicy: IfNotPresent
+ iscsi:
+ chapPassword: ""
+ chapUser: ""
+ kubeletRootDir: /var/lib/kubelet/
+ logLevel: info
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ registry: quay.io
+
+
+
# kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+ name: hpecsidriver-sample
+spec:
+ # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ csp:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ disable:
+ alletra6000: false
+ alletra9000: false
+ alletraStorageMP: false
+ nimble: false
+ primera: false
+ disableNodeConfiguration: false
+ disableNodeConformance: false
+ disableNodeGetVolumeStats: false
+ imagePullPolicy: IfNotPresent
+ iscsi:
+ chapPassword: ""
+ chapUser: ""
+ kubeletRootDir: /var/lib/kubelet/
+ logLevel: info
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ registry: quay.io
+
+
+
Tip
+The contents depend on which version of the CSI driver is installed. Please visit OperatorHub or ArtifactHub for more details.
+The CSI driver is now ready for use. Proceed to the next section to learn about adding an HPE storage backend.
+Once the CSI driver is deployed, two additional objects need to be created to get started with dynamic provisioning of persistent storage: a Secret
and a StorageClass
.
Tip
+Naming the Secret
and StorageClass
is entirely up to the user. However, to keep up with the examples on SCOD, it's highly recommended to use the names illustrated here.
All parameters are mandatory and described below.
Parameter | Description
---|---
serviceName | The hostname or IP address where the Container Storage Provider (CSP) is running, usually a Kubernetes Service, such as "alletra6000-csp-svc" or "alletra9000-csp-svc".
servicePort | The port the serviceName is listening on.
backend | The management hostname or IP address of the actual backend storage system, such as an Alletra 5000/6000 or 9000 array.
username | Backend storage system username with the correct privileges to perform storage management.
password | Backend storage system password.
Example:
+apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-backend
+ namespace: hpe-storage
+stringData:
+ serviceName: alletrastoragemp-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.20
+ username: 3paradm
+ password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-backend
+ namespace: hpe-storage
+stringData:
+ serviceName: alletra6000-csp-svc
+ servicePort: "8080"
+ backend: 192.168.1.110
+ username: admin
+ password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-backend
+ namespace: hpe-storage
+stringData:
+ serviceName: alletra9000-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.20
+ username: 3paradm
+ password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-backend
+ namespace: hpe-storage
+stringData:
+ serviceName: nimble-csp-svc
+ servicePort: "8080"
+ backend: 192.168.1.2
+ username: admin
+ password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-backend
+ namespace: hpe-storage
+stringData:
+ serviceName: primera3par-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.2
+ username: 3paradm
+ password: 3pardata
+
Create the Secret
using kubectl
:
kubectl create -f secret.yaml
+
+Tip
+In a real world scenario it's more practical to name the Secret
something that makes sense for the organization. It could be the hostname of the backend or the role it carries, e.g. "hpe-alletra-sanjose-prod".
The next step involves creating a default StorageClass.
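As a minimal sketch (not prescriptive), a default StorageClass referencing the "hpe-backend" Secret above could look like the following. The name "hpe-standard" and the filesystem are arbitrary choices, the Secret parameters mirror the custom Secret example later in this section, and the annotation marks it as the cluster default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
reclaimPolicy: Delete
allowVolumeExpansion: true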
+It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass
and Secret
API objects to represent an environment with multiple systems.
There's a brief tutorial available in the Video Gallery that walks through these steps.
+Note
+Make note of the Kubernetes Namespace
or OpenShift project name used during the deployment. In the following examples, we will be using the "hpe-storage" Namespace
.
To view the current Secrets
in the "hpe-storage" Namespace
(assuming default names):
kubectl -n hpe-storage get secret/hpe-backend
+NAME TYPE DATA AGE
+hpe-backend Opaque 5 2m
+
+This Secret
is used by the CSI sidecars in the StorageClass
to authenticate to a specific backend for CSI operations. In order to add a new Secret
or manage access to multiple backends, additional Secrets
will need to be created per backend.
Secret Requirements
+Secret
name must be unique.To create a new Secret
, specify the name, Namespace
, backend username, backend password and the backend IP address to be used by the CSP and save it as custom-secret.yaml
(a detailed description of the parameters is available above).
apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: alletrastoragemp-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.20
+ username: 3paradm
+ password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: alletra6000-csp-svc
+ servicePort: "8080"
+ backend: 192.168.1.110
+ username: admin
+ password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: alletra9000-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.20
+ username: 3paradm
+ password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: nimble-csp-svc
+ servicePort: "8080"
+ backend: 192.168.1.2
+ username: admin
+ password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: primera3par-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.2
+ username: 3paradm
+ password: 3pardata
+
Create the Secret
using kubectl
:
kubectl create -f custom-secret.yaml
+
+You should now see the Secret
in the "hpe-storage" Namespace
:
kubectl -n hpe-storage get secret/custom-secret
+NAME TYPE DATA AGE
+custom-secret Opaque 5 1m
+
+To use the new Secret
"custom-secret", create a new StorageClass
using the Secret
and the necessary StorageClass
parameters. Please see the requirements section of the respective CSP.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-custom
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/controller-expand-secret-name: custom-secret
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: custom-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: custom-secret
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: custom-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-custom
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/resizer-secret-name: custom-secret
+ csi.storage.k8s.io/resizer-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: custom-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: custom-secret
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: custom-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
Note
+Don't forget to call out the StorageClass
explicitly when creating PVCs
from non-default StorageClasses
.
Next, Create a PersistentVolumeClaim from a StorageClass.
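As an illustration only, a PersistentVolumeClaim requesting storage from the "hpe-custom" StorageClass above could look like this; the claim name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-custom-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
  storageClassName: hpe-custom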
+This guide is primarily written to accommodate a highly manual installation on upstream Kubernetes or partner OEMs engaged with HPE to bundle the HPE CSI Driver in a custom distribution. Installation steps may vary for different vendors and flavors of Kubernetes.
+The following example walks through deployment of the latest CSI driver.
+Critical
+It's highly recommended to use either the Helm chart or Operator to install the HPE CSI Driver for Kubernetes and the associated Container Storage Providers. Only venture down manual installation if your requirements can't be met by the Helm chart or Operator.
+Deploy the CSI driver and sidecars for the relevant Kubernetes version.
+Uninstalling the CSI driver when installed manually
+The manifests below create a number of objects, including CustomResourceDefinitions
(CRDs) which may hold critical information about storage resources. Simply deleting the below manifests in order to uninstall the CSI driver may render PersistentVolumes
unusable.
These object configuration files are common for all versions of Kubernetes.
+All components below are deployed in the "hpe-storage" Namespace
.
kubectl create ns hpe-storage
+
+Worker node IO settings and common CRDs
:
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml
+kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-volumegroup-snapshotgroup-crds.yaml
+
+Container Storage Provider:
+kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml
+kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-crd.yaml
+
Important
+The above instructions assume you have an array with a supported platform OS installed. Please see the requirements section of the respective CSP.
+Install the CSI driver:
+kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml
+
Seealso
+Older and unsupported versions of Kubernetes and the CSI driver are archived on this page.
+Depending on which version is being deployed, different API objects get created. Next step: Add an HPE Storage Backend.
+The following steps outline how to uninstall the CSI driver that has been deployed using the Advanced Install above.
+Uninstall Worker node settings:
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml
+
+Uninstall relevant Container Storage Provider:
+kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml
+
HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR users
+If you are reinstalling the HPE CSI Driver, DO NOT remove the crd/hpevolumeinfos.storage.hpe.com
resource. This CustomResourceDefinition
contains important volume metadata used by the HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR CSP. HPE CSI Driver v2.0.0 and below share the same YAML file for crds
and CSP and would require a manual removal of the individual Service
and Deployment
in the "hpe-storage" Namespace
.
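As a quick, read-only sanity check before reinstalling, verify the CRD still exists:
kubectl get crd/hpevolumeinfos.storage.hpe.com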
Uninstall the CSI driver:
+kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml
+
If no longer needed, delete the "hpe-storage" Namespace
.
kubectl delete ns hpe-storage
+
+Downgrading the CSI driver is currently not supported. It will work between certain minor versions. HPE does not test or document procedures to downgrade between incompatible versions.
+ +It's recommended to familiarize yourself with inspecting workloads on Kubernetes. This particular cheat sheet is very useful to have readily available.
+Once the CSI driver has been deployed either through object configuration files, Helm or an Operator. This view should be representative of what a healthy system should look like after install. If any of the workload deployments lists anything but Running
, proceed to inspect the logs of the problematic workload.
kubectl get pods --all-namespaces -l 'app in (nimble-csp, hpe-csi-node, hpe-csi-controller)'
+NAMESPACE NAME READY STATUS RESTARTS AGE
+hpe-storage hpe-csi-controller-7d9cd6b855-zzmd9 9/9 Running 0 15s
+hpe-storage hpe-csi-node-dk5t4 2/2 Running 0 15s
+hpe-storage hpe-csi-node-pwq2d 2/2 Running 0 15s
+hpe-storage nimble-csp-546c9c4dd4-5lsdt 1/1 Running 0 15s
+
kubectl get pods --all-namespaces -l 'app in (primera3par-csp, hpe-csi-node, hpe-csi-controller)'
+NAMESPACE NAME READY STATUS RESTARTS AGE
+hpe-storage hpe-csi-controller-7d9cd6b855-fqppd 9/9 Running 0 14s
+hpe-storage hpe-csi-node-86kh6 2/2 Running 0 14s
+hpe-storage hpe-csi-node-k8p4p 2/2 Running 0 14s
+hpe-storage hpe-csi-node-r2mg8 2/2 Running 0 14s
+hpe-storage hpe-csi-node-vwb5r 2/2 Running 0 14s
+hpe-storage primera3par-csp-546c9c4dd4-bcwc6 1/1 Running 0 14s
+
A Custom Resource Definition (CRD) named hpenodeinfos.storage.hpe.com
holds important network and host initiator information.
Retrieve list of nodes.
+
kubectl get hpenodeinfos
+$ kubectl get hpenodeinfos
+NAME AGE
+tme-lnx-worker1 57m
+tme-lnx-worker3 57m
+tme-lnx-worker2 57m
+tme-lnx-worker4 57m
+
+Inspect a node.
+
kubectl get hpenodeinfos/tme-lnx-worker1 -o yaml
+apiVersion: storage.hpe.com/v1
+kind: HPENodeInfo
+metadata:
+ creationTimestamp: "2020-08-24T23:50:09Z"
+ generation: 1
+ managedFields:
+ - apiVersion: storage.hpe.com/v1
+ fieldsType: FieldsV1
+ fieldsV1:
+ f:spec:
+ .: {}
+ f:chap_password: {}
+ f:chap_user: {}
+ f:iqns: {}
+ f:networks: {}
+ f:uuid: {}
+ manager: csi-driver
+ operation: Update
+ time: "2020-08-24T23:50:09Z"
+ name: tme-lnx-worker1
+ resourceVersion: "30337986"
+ selfLink: /apis/storage.hpe.com/v1/hpenodeinfos/tme-lnx-worker1
+ uid: 3984752b-29ac-48de-8ca0-8381532cbf06
+spec:
+ chap_password: RGlkIHlvdSByZWFsbHkgZGVjb2RlIHRoaXM/
+ chap_user: chap-user
+ iqns:
+ - iqn.1994-05.com.redhat:828e7a4eef40
+ networks:
+ - 10.2.2.2/16
+ - 172.16.6.115/24
+ - 172.16.8.115/24
+ - 172.17.0.1/16
+ - 10.1.1.0/12
+ uuid: 0242f811-3995-746d-652d-6c6e78352d77
+
+The NFS Server Provisioner consists of a number of Kubernetes resources per PVC. The default Namespace
where the resources are deployed is "hpe-nfs" but is configurable in the StorageClass
. See base StorageClass
parameters for more details.
Object | Name | Purpose
---|---|---
ConfigMap | hpe-nfs-config | This ConfigMap holds the configuration file for the NFS server. Local tweaks may be wanted. Please see the config file reference for more details.
Deployment | hpe-nfs-UID | The Deployment that is running the NFS Pod.
Service | hpe-nfs-UID | The Service the NFS clients perform mounts against.
PVC | hpe-nfs-UID | The RWO claim serving the NFS workload.
Tip
+The UID stems from the user-requested RWX PVC
for easy tracking. Use kubectl get pvc/my-pvc -o jsonpath='{.metadata.uid}{"\n"}'
to retrieve it.
When troubleshooting NFS deployments it's common that only the source RWX PVC
and Namespace
are known. The next few steps explain how resources can be easily traced.
Retrieve the "hpe-nfs-UID" from the NFS Pod
by specifying PVC
and Namespace
of the RWX PVC
:
kubectl get pods -l provisioned-by=my-pvc,provisioned-from=my-namespace -A -o jsonpath='{.items[].metadata.labels.app}{"\n"}'
+
+Next, enumerate the resources from the "hpe-nfs-UID":
+
kubectl get pvc,svc,deploy -A -o name --field-selector metadata.name=hpe-nfs-UID
+
+Example output:
+
persistentvolumeclaim/hpe-nfs-UID
+service/hpe-nfs-UID
+deployment.apps/hpe-nfs-UID
+
+If only the PV
name is known, looking from the backend storage perspective, the PV
name (and .spec.claimRef.uid
) contains the UID, for example: "pvc-UID".
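As a sketch, with only the PV name at hand (the "pvc-UID" below is a placeholder), the UID referenced above can be read directly from the claimRef:
kubectl get pv pvc-UID -o jsonpath='{.spec.claimRef.uid}{"\n"}'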
Clarification
+The hpe-nfs-UID
is abbreviated; it will have the real UID appended, for example "hpe-nfs-98ce7c80-13f9-45d0-9609-089227bf97f1".
If there are issues with VolumeSnapshots
not being created when performing SnapshotGroup
snapshots, check the logs of the "csi-volume-group-provisioner" and "csi-volume-group-snapshotter" in the "hpe-csi-controller" Deployment
.
kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-provisioner
+kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-snapshotter
+
+The HPE CSI Driver logs to the standard output stream. If the logs need to be retained long term, use a standard logging solution for Kubernetes such as Fluentd. Some of the logs on the host are persisted and follow standard logrotate policies.
+Node driver:
+
kubectl logs -f daemonset.apps/hpe-csi-node hpe-csi-driver -n hpe-storage
+
+Controller driver:
+
kubectl logs -f deployment.apps/hpe-csi-controller hpe-csi-driver -n hpe-storage
+
+Tip
+The logs for both node and controller drivers are persisted at /var/log/hpe-csi.log
Log levels for both CSI Controller and Node driver can be controlled using LOG_LEVEL
environment variable. Possible values are info
, warn
, error
, debug
, and trace
. Apply the changes using kubectl apply -f <yaml>
command after adding this to CSI controller and node container spec as below. For Helm charts this is controlled through logLevel
variable in values.yaml
.
env:
+ - name: LOG_LEVEL
+ value: trace
+
+CSP logs can be accessed from their respective services.
+kubectl logs -f deploy/nimble-csp -n hpe-storage
+
kubectl logs -f deploy/primera3par-csp -n hpe-storage
+
Log collector script hpe-logcollector.sh
can be used to collect the logs from any node which has kubectl
access to the cluster.
curl -O https://raw.githubusercontent.com/hpe-storage/csi-driver/master/hpe-logcollector.sh
+chmod 555 hpe-logcollector.sh
+
+Usage:
+
./hpe-logcollector.sh -h
+Collect HPE storage diagnostic logs using kubectl.
+
+Usage:
+ hpe-logcollector.sh [-h|--help] [--node-name NODE_NAME] \
+ [-n|--namespace NAMESPACE] [-a|--all]
+Options:
+-h|--help Print this usage text
+--node-name NODE_NAME Collect logs only for Kubernetes node
+ NODE_NAME
+-n|--namespace NAMESPACE Collect logs from HPE CSI deployment in namespace
+ NAMESPACE (default: kube-system)
+-a|--all Collect logs from all nodes (the default)
+
+HPE provides a set of well tested defaults for the CSI driver and all the supported CSPs. In certain cases it may be necessary to fine-tune the CSI driver to accommodate a certain workload or behavior.
+The HPE CSI Driver for Kubernetes automatically configures Linux iSCSI/multipath settings based on config.json. In order to tune these values, edit the config map with kubectl edit configmap hpe-linux-config -n hpe-storage
and restart the node plugin using kubectl delete pod -l app=hpe-csi-node
to apply.
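For convenience, the same tuning workflow as copy-paste commands; the -n hpe-storage flag on the delete assumes the driver was deployed in the "hpe-storage" Namespace used throughout this guide.
kubectl edit configmap hpe-linux-config -n hpe-storage
kubectl delete pod -l app=hpe-csi-node -n hpe-storage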
Important
+HPE provides a set of general purpose default values for the IO paths; tuning is only required if prescribed by HPE.
+A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider (CSP) to perform data management operations on storage resources. The architecture of the CSI driver allows block storage vendors to implement a CSP that follows the specification (a browser friendly version).
+The CSI driver architecture allows a complete separation of concerns between upstream Kubernetes core, SIG Storage (CSI owners), CSI driver author (HPE) and the backend CSP developer.
+ +Tip
+The HPE CSI Driver for Kubernetes is vendor agnostic. Any entity may leverage the driver and provide their own Container Storage Provider.
+CSI gradually matures features and capabilities in the specification at the pace of the community. HPE keeps a close watch on differentiating features that the primary storage family of products may be suitable for implementing in CSI and Kubernetes. HPE experiments early and often, which is why it's sometimes possible to observe a certain feature being available in the CSI driver although it hasn't been announced or isn't documented.
+Below is the official table for CSI features we track and deem readily available for use after we've officially tested and validated it in the platform matrix.
Feature | K8s maturity | Since K8s version | HPE CSI Driver
---|---|---|---
Dynamic Provisioning | Stable | 1.13 | 1.0.0
Volume Expansion | Stable | 1.24 | 1.1.0
Volume Snapshots | Stable | 1.20 | 1.1.0
PVC Data Source | Stable | 1.18 | 1.1.0
Raw Block Volume | Stable | 1.18 | 1.2.0
Inline Ephemeral Volumes | Beta | 1.16 | 1.2.0
Volume Limits | Stable | 1.17 | 1.2.0
Volume Mutator1 | N/A | 1.15 | 1.3.0
Generic Ephemeral Volumes | GA | 1.23 | 1.3.0
Volume Groups1 | N/A | 1.17 | 1.4.0
Snapshot Groups1 | N/A | 1.17 | 1.4.0
NFS Server Provisioner1 | N/A | 1.17 | 1.4.0
Volume Encryption1 | N/A | 1.18 | 2.0.0
Basic Topology3 | Stable | 1.17 | 2.5.0
Advanced Topology3 | Stable | 1.17 | Future
Storage Capacity Tracking | Stable | 1.24 | Future
Volume Expansion From Source | Stable | 1.27 | Future
ReadWriteOncePod | Stable | 1.29 | Future
Volume Populator | Beta | 1.24 | Future
Volume Health | Alpha | 1.21 | Future
Cross Namespace Snapshots | Alpha | 1.26 | Future
Upstream Volume Group Snapshot | Alpha | 1.27 | Future
Volume Attribute Classes | Alpha | 1.29 | Future
+ 1 = HPE CSI Driver for Kubernetes specific CSI sidecar. CSP support may vary.
+ 2 = Alpha features are enabled by Kubernetes feature gates and are not formally supported by HPE.
+ 3 = Topology information can only be used to describe accessibility relationships between a set of nodes and a single backend using a StorageClass
.
+
Depending on the CSP, it may support a number of different snapshotting, cloning and restoring operations by taking advantage of StorageClass
parameter overloading. Please see the respective CSP for additional functionality.
Refer to the official table of feature gates in the Kubernetes docs to find availability of beta and alpha features. HPE provides limited support on non-GA CSI features. Please file any issues, questions or feature requests here. You may also join our Slack community to chat with HPE folks close to this project. We hang out in #Alletra
, #NimbleStorage
, #3par-primera
and #Kubernetes
, sign up at slack.hpedev.io and login at hpedev.slack.com.
Tip
+Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a Helm chart or an Operator.
+These are the combinations HPE has tested and can provide official support services around for each of the CSI driver releases. Each Container Storage Provider has its own requirements in terms of storage platform OS and may have other constraints not listed here.
+Note
+For Kubernetes 1.12 and earlier, please see legacy FlexVolume drivers; do note that the FlexVolume drivers are being deprecated.
+Release highlights:
+StorageClasses
StorageClass
parameter)StorageClass
parametersaccessMode
handlingUpgrade considerations:
+importVol
parameter has been renamed importVolumeName
for HPE Alletra Storage MP and Alletra 9000/Primera/3PARnote
+HPE CSI Driver v2.5.0 is deployed with v2.5.1 of the Helm chart and Operator
+Kubernetes | +1.27-1.301 | +
---|---|
Helm Chart | +v2.5.1 on ArtifactHub | +
Operators | +
+ v2.5.1 on OperatorHub + v2.5.1 via OpenShift console + |
+
Worker OS | +
+ Red Hat Enterprise Linux2 7.x, 8.x, 9.x, Red Hat CoreOS 4.14-4.16 + Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04 + SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro4 equivalents + |
Platforms3 | +
+ Alletra Storage MP5 10.2.x - 10.4.x + Alletra OS 9000 9.3.x - 9.5.x + Alletra OS 5000/6000 6.0.0.x - 6.1.2.x + Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x + Primera OS 4.3.x - 4.5.x + 3PAR OS 3.3.x + |
+
Data protocols | +Fibre Channel, iSCSI | +
Filesystems | +XFS, ext3/ext4, btrfs, NFSv4* | +
Release notes | +v2.5.0 on GitHub | +
Blogs | ++ HPE CSI Driver for Kubernetes 2.5.0: Improved stateful workload resilience and robustness + | +
+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims
for volumeMode: Filesystem
.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE. While RHEL 7 and its derivatives will work, the host OS has been EOL'd and support is limited.
+ 3 = Learn about each data platform's team support commitment.
+ 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils
and reboot if the CSI node driver doesn't start.
+ 5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
+
Release highlights:
+Kubernetes | +1.26-1.291 | +
---|---|
Helm Chart | +v2.4.2 on ArtifactHub | +
Operators | +
+ v2.4.2 on OperatorHub + v2.4.2 via OpenShift console + |
+
Worker OS | +
+ Red Hat Enterprise Linux2 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15 + Ubuntu 16.04, 18.04, 20.04, 22.04 + SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro4 equivalents + |
Platforms3 | +
+ Alletra Storage MP5 10.2.x - 10.4.x + Alletra OS 9000 9.3.x - 9.5.x + Alletra OS 5000/6000 6.0.0.x - 6.1.2.x + Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x + Primera OS 4.3.x - 4.5.x + 3PAR OS 3.3.x + |
+
Data protocols | +Fibre Channel, iSCSI | +
Filesystems | +XFS, ext3/ext4, btrfs, NFSv4* | +
Release notes | +v2.4.2 on GitHub | +
+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims
.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+ 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils
and reboot if the CSI node driver doesn't start.
+ 5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
+
Release highlights:
+StorageClasses
with the NFS Server ProvisionerUpgrade considerations:
+Kubernetes | +1.26-1.291 | +
---|---|
Helm Chart | +v2.4.1 on ArtifactHub | +
Operators | +
+ v2.4.1 on OperatorHub + v2.4.1 via OpenShift console + |
+
Worker OS | +
+ Red Hat Enterprise Linux2 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15 + Ubuntu 16.04, 18.04, 20.04, 22.04 + SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro4 equivalents + |
Platforms3 | +
+ Alletra Storage MP5 10.2.x - 10.3.x + Alletra OS 9000 9.3.x - 9.5.x + Alletra OS 5000/6000 6.0.0.x - 6.1.2.x + Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x + Primera OS 4.3.x - 4.5.x + 3PAR OS 3.3.x + |
+
Data protocols | +Fibre Channel, iSCSI | +
Filesystems | +XFS, ext3/ext4, btrfs, NFSv4* | +
Release notes | +v2.4.1 on GitHub | +
Blogs | ++ Introducing HPE Alletra Storage MP to HPE CSI Driver for Kubernetes + | +
+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims
.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+ 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils
and reboot if the CSI node driver doesn't start.
+ 5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
+
Release highlights:
+Upgrade considerations:
+Kubernetes | +1.25-1.281 | +
---|---|
Helm Chart | +v2.4.0 on ArtifactHub | +
Operators | +
+ v2.4.0 on OperatorHub + v2.4.0 via OpenShift console + |
+
Worker OS | +
+ RHEL2 7.x, 8.x, 9.x, RHCOS 4.12-4.14 + Ubuntu 16.04, 18.04, 20.04, 22.04 + SLES 15 SP3, SP4, SP5 + |
Platforms3 | +
+
+ Alletra OS 9000 9.3.x - 9.5.x + Alletra OS 5000/6000 6.0.0.x - 6.1.1.x + Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x + Primera OS 4.3.x - 4.5.x + 3PAR OS 3.3.x + |
+
Data protocols | +Fibre Channel, iSCSI | +
Filesystems | +XFS, ext3/ext4, btrfs, NFSv4* | +
Release notes | +v2.4.0 on GitHub | +
Blogs | ++ Introduction to new workload paradigms with HPE CSI Driver for Kubernetes + | +
+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims
.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+
HPE currently supports up to three minor releases of the HPE CSI Driver for Kubernetes.
+ +/etc/hpe-storage
directory persists across node upgrades or reboots. The path is relocatable using a custom Helm chart or deployment manifest by altering the mountPath
parameter for the directory. Run kubectl get csinodes -o yaml
and inspect .spec.drivers.allocatable
for "csi.hpe.com". The "count" element contains how many volumes the node can attach from the HPE CSI Driver (default is 100).If iSCSI CHAP is being used in the environment, consider the following.
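As a sketch, the per-node limit can be listed without paging through the full YAML; the jsonpath filter assumes the driver name "csi.hpe.com" mentioned above.
kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.drivers[?(@.name=="csi.hpe.com")].allocatable.count}{"\n"}{end}'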
+It's not recommended to retro fit CHAP into an existing environment where PersistentVolumes
are already provisioned and attached. If necessary, all iSCSI sessions needs to be logged out from and the CSI driver Helm chart needs to be installed with cluster-wide iSCSI CHAP credentials for iSCSI CHAP to be effective, otherwise existing non-authenticated sessions will be reused.
In 2.5.0 and later the CHAP credentials must be supplied by a separate Secret
. The Secret
may be supplied when installing the Helm Chart (the Secret
must exist prior) or referened in the StorageClass
.
When using CHAP with 2.4.2 or older the CHAP credentials were provided in clear text in the Helm Chart. To continue to use CHAP for those existing PersistentVolumes
, a CHAP Secret
needs to be created and referenced in the Helm Chart install.
New StorageClasses
may reference the same Secret
, it's recommended to use a different Secret
to distinguish legacy and new PersistentVolumes
.
How to enable iSCSI CHAP in the current version of the HPE CSI Driver is available in the user documentation.
+CHAP is an optional part of the initial deployment of the driver with parameters passed to Helm or the Operator. For object definitions, the CHAP_USER
and CHAP_PASSWORD
needs to be supplied to the csi-node-driver
. The CHAP username and secret is picked up in the hpenodeinfo
Custom Resource Definition (CRD). The CSP is under contract to create the user if it doesn't exist on the backend.
CHAP is a good measure to prevent unauthorized access to iSCSI targets, it does not encrypt data on the wire. CHAP secrets should be at least twelve charcters in length.
+In version 1.2.1 and below, the CSI driver did not support CHAP natively. CHAP must be enabled manually on the worker nodes before deploying the CSI driver on the cluster. This also needs to be applied to new worker nodes before they join the cluster.
+Different features mature at different rates. Refer to the official table of feature gates in the Kubernetes docs.
+The following guidelines appliy to which feature gates got introduced as alphas for the corresponding version of Kubernetes. For example, ExpandCSIVolumes
got introduced in 1.14 but is still an alpha in 1.15, hence you need to enable that feature gate in 1.15 as well if you want to use it.
--allow-privileged
flag must be set to true for the API server--allow-privileged
flag must be set to true for the API server--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true
feature gate flags must be set to true for both the API server and kubelet for resize support--allow-privileged
flag must be set to true for the API server--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true
feature gate flags must be set to true for both the API server and kubelet for resize support--feature-gates=CSIInlineVolume=true
feature gate flag must be set to true for both the API server and kubelet for pod inline volumes (Ephemeral Local Volumes) support--feature-gates=VolumePVCDataSource=true
feature gate flag must be set to true for both the API server and kubelet for Volume cloning support--feature-gates=GenericEphemeralVolume=true
feature gate flags needs to be passed to api-server, scheduler, controller-manager and kubelet to enable Generic Ephemeral VolumesOlder versions of the HPE CSI Driver for Kubernetes are kept here for reference. Check the CSI driver GitHub repo for the appropriate YAML files to declare on the cluster for the respective version of Kubernetes.
+Important
+The resources for CSPs, CRDs and ConfigMaps are available in each respective CSI driver version directory here. Use the below version mappings as reference.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.0/hpe-csi-k8s-1.25.yaml
+
+Note
+Latest supported CSI driver version is 2.4.0 for Kubernetes 1.25.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.24.yaml
+
+Note
+Latest supported CSI driver version is 2.3.0 for Kubernetes 1.24.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.23.yaml
+
+Note
+Latest supported CSI driver version is 2.3.0 for Kubernetes 1.23.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.22.yaml
+
+Note
+Latest supported CSI driver version is 2.2.0 for Kubernetes 1.22.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.21.yaml
+
+Note
+Latest supported CSI driver version is 2.2.0 for Kubernetes 1.21.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.1.1/hpe-csi-k8s-1.20.yaml
+
+Note
+Latest supported CSI driver version is 2.1.1 for Kubernetes 1.20.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.19.yaml
+
+Note
+Latest supported CSI driver version is 2.0.0 for Kubernetes 1.19.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.18.yaml
+
+Note
+Latest supported CSI driver version is 2.0.0 for Kubernetes 1.18.
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v1.4.0/hpe-csi-k8s-1.17.yaml
+
+Note
+Latest supported CSI driver version is 1.4.0 for Kubernetes 1.17.
+Note
+Latest supported CSI driver version is 1.3.0 for Kubernetes 1.16.
+Note
+Latest supported CSI driver version is 1.3.0 for Kubernetes 1.15.
+Note
+Latest supported CSI driver version is 1.2.0 for Kubernetes 1.14.
+Note
+Latest supported CSI driver version is 1.1.0 for Kubernetes 1.13.
+The HPE CSI Driver for Kubernetes may be accompanied by a Prometheus metrics endpoint to provide metadata about the volumes provisioned by the CSI driver and supporting backends. It's conventionally deployed with HPE Storage Array Exporter for Prometheus to provide a richer set of metrics from the backend storage systems.
+The exporter provides two metrics, "hpestoragecsi_volume_info" and "hpestoragecsi_backend_info".
Metric | Type | Description | Value
---|---|---|---
hpestoragecsi_volume_info | Gauge | Indicates a volume whose provisioner is the HPE CSI Driver. | 1

This metric includes the following labels.

Label | Description
---|---
backend | Backend hostname or IP address as defined in the Secret.
pv | PersistentVolume name.
pvc | PersistentVolumeClaim name.
pvc_namespace | PersistentVolumeClaim Namespace.
storage_class | StorageClass used to provision the PersistentVolume.
volume | Volume handle used by the backend storage system.

Metric | Type | Description | Value
---|---|---|---
hpestoragecsi_backend_info | Gauge | Indicates a storage system for which the HPE CSI driver is a provisioner. | 1

This metric includes the following labels.

Label | Description
---|---
backend | Backend hostname or IP address as defined in the Secret.
The exporter may be installed either via Helm or through YAML manifests with the object definitions. It's recommended to use Helm as it's more convenient to manage the configuration of the deployment.
+Note
+It's recommended to add a "cluster" target label to the deployment. The label is used in the provided Grafana dashboards.
+The Helm chart is available on Artifact Hub. Instructions on how to manage and install the chart are available within the chart documentation.
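A hedged install sketch, assuming the "hpe-storage" Helm repository has been added as shown earlier in this guide and that the chart is named hpe-csi-info-metrics; verify the exact chart name and values on Artifact Hub.
helm install my-csi-info-metrics hpe-storage/hpe-csi-info-metrics -n hpe-storage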
+ +Note
+It's highly recommended to install the CSI Info Metrics Provider with Helm.
+Since Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 it's possible to install the HPE CSI Info Metrics Provider through the Apps interface in Rancher to use with Rancher Monitoring. Please see the Rancher partner page for more information.
+Before beginning an advanced install, determine how Prometheus will be deployed on the Kubernetes cluster as it will dictate how the scrape target will be configured with either a Service
annotation or a ServiceMonitor
CRD.
Start by downloading the manifest, which needs to be modified before applying to the cluster.
+Supports HPE CSI Driver for Kubernetes 2.0.0 and later.
+
wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics.yaml
+
+Optional ServiceMonitor
definition:
wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics-service-monitor.yaml
+
+Update the main container parameters and optionally add service labels and annotations.
+In the "hpe-csi-info-metrics" Deployment
at .spec.template.spec.containers[0].args
in "hpe-csi-info-metrics.yaml":
args:
+ - "--telemetry.addr=:9099"
+ - "--telemetry.path=/metrics"
+ # IMPORTANT: Uncomment this argument to confirm your
+ # acceptance of the HPE End User License Agreement at
+ # https://www.hpe.com/us/en/software/licensing.html
+ #- "--accept-eula"
+
+Remove the #
in front of --accept-eula
to accept the HPE license restrictions.
In the "hpe-csi-info-metrics-service" Service
:
metadata:
+ name: hpe-csi-info-metrics-service
+ namespace: hpe-storage
+ labels:
+ app: hpe-csi-info-metrics
+ # Optionally add labels, for example to be included in Prometheus
+ # metrics via a targetLabels setting in a ServiceMonitor spec
+ #cluster: my-cluster
+ # Optionally add annotations, for example to configure it as a
+ # scrape target when using the Prometheus Helm chart's default
+ # configuration.
+ #annotations:
+ # "prometheus.io/scrape": "true"
+
+Apply the manifest:
+
kubectl apply -f hpe-csi-info-metrics.yaml
+
+Optionally, if using the Prometheus Operator, add any additional labels in "hpe-csi-info-metrics-service-monitor.yaml":
+
# Corresponding labels on the CSI Info Metrics service are added to
+ # the scraped metrics
+ #targetLabels:
+ # - cluster
+
+Apply the manifest:
+
kubectl apply -f hpe-csi-info-metrics-service-monitor.yaml
+
+Pro Tip!
+Avoid hand editing manifests by using the Helm chart.
+Example Grafana dashboards, provided as is, are hosted on grafana.com.
+ +The HPE CSI Driver for Kubernetes includes a Kubernetes Pod Monitor. Specifically it looks for Pods
with the label monitored-by: hpe-csi
that have NodeLost
status set on them. This usually occurs if a node becomes unresponsive or partitioned due to a network outage. The Pod Monitor will delete the affected Pod
and associated HPE CSI Driver VolumeAttachment
to allow Kubernetes to reschedule the workload on a healthy node.
The Pod Monitor is mandatory and automatically applied for the RWX server Deployment
managed by the HPE CSI Driver. It may be used for any Pods
on the Kubernetes cluster to perform a more graceful automatic recovery rather than performing a manual intervention to resurrect stuck Pods
.
The Pod Monitor is part of the "hpe-csi-controller" Deployment
served by the "hpe-csi-driver" container. It's by default enabled and the Pod Monitor interval is set to 30 seconds.
Edit the CSI driver deployment to change the interval or disable the Pod Monitor.
+
kubectl edit -n hpe-storage deploy/hpe-csi-controller
+
+The parameters that control the "hpe-csi-driver" are the following:
+
- --pod-monitor
+ - --pod-monitor-interval=30
+
+Enable the Pod Monitor for a single replica Deployment
by labeling the Pod
(assumes an existing PVC named "my-pvc").
apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: my-app
+ labels:
+ app: my-app
+spec:
+ replicas: 1
+ strategy:
+ type: Recreate
+ selector:
+ matchLabels:
+ app: my-app
+ template:
+ metadata:
+ labels:
+ monitored-by: hpe-csi
+ app: my-app
+ spec:
+ containers:
+ - image: busybox
+ name: busybox
+ command:
+ - "sleep"
+ - "4800"
+ volumeMounts:
+ - mountPath: /data
+ name: my-vol
+ volumes:
+ - name: my-vol
+ persistentVolumeClaim:
+ claimName: my-pvc
+
+Danger
+It's imperative that failure scenarios that are being mitigated for the application are properly tested before put into production. It's up to the CSP to fence the PersistentVolume
attached to an isolated node when a new "NodePublish" request comes in. Node isolation is the most dangerous scenario as the workload continues to run on the node while disconnected from the outside world. Simply shut down the kubelet to test this scenario and ensure the block device becomes inaccessible to the isolated node.
node.kubernetes.io/not-ready
and node.kubernetes.io/unreachable
) to fully recover during a node failure or network partition using the Pod Monitor for Pods
with PersistentVolumeClaims
.StatefulSets
due to an upstream API update that did not take the force flag into account.Deployment
configured with .spec.strategy.type
"Recreate" or a StatefulSet
is unsupported. The consequence of using other settings and controllers may have undesired side effects such as rendering "multi-attach" errors for PersistentVolumeClaims
and may delay recovery.The documentation in this section illustrates officially HPE supported procedures to perform maintenance tasks on the CSI driver outside the scope of deploying and uninstalling the driver.
+Persistent volumes created with v2.1.1 or below using volume encryption, the CSI driver use LUKS2 (WikiPedia: Linux Unified Key Setup) and can't expand the PersistentVolumeClaim
. With v2.2.0 and above, LUKS1 is used and the CSI driver is capable of expanding the PVC
.
This procedure migrate (copy) data from LUKS2 to LUKS1 PVCs to allow expansion of the volume.
+Note
+It's not a limitation of LUKS2 to not allow expansion but rather how the CSI driver interact with the host.
+These are the assumptions made throughout this procedure.
+kubectl
, curl
, jq
and yq
.PersistentVolumes
.ReadWriteOnce
PVCs
are covered.PVC
annotations.Tip
+There are many different ways to copy PVCs
. These steps outlines and uses one particular method developed and tested by HPE and similar workflows may be applied with other tools and procedures.
First, identify the PersistentVolume
to migrate from and set shell variables.
export OLD_SRC_PVC=<insert your existing PVC name here>
+export OLD_SRC_PV=$(kubectl get pvc -o json | \
+ jq -r ".items[] | \
+ select(.metadata.name | \
+ test(\"${OLD_SRC_PVC}\"))".spec.volumeName)
+
+Important
+Ensure these shell variables are set at all times.
+In order to copy data out of a PVC
, the running workload needs to be disassociated with the PVC
. It's not possible to scale the replicas to zero, the exception being ReadWriteMany
PVCs
which could lead to data inconsistency problems. These procedures assumes application consistency by having the workload shut down.
It's out of scope for this procedure to demonstrate how to shut down a particular workload. Ensure there are no volumeattachments
associated with the PersistentVolume
.
kubectl get volumeattachment -o json | \
+ jq -r ".items[] | \
+ select(.spec.source.persistentVolumeName | \
+ test(\"${OLD_SRC_PV}\"))".spec.source
+
+Tip
+For large volumeMode: Filesystem
PVCs
where copying data may take days, it's recommended to use the Optional Workflow with Filesystem Persistent Volume Claims that utilizes the PVC
dataSource
capability.
Create a new PVC
named "new-pvc" with enough space to host the data from the old source PVC
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: new-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+ volumeMode: Filesystem
+
+Important
+If the source PVC
is a raw block volume, ensure volumeMode: Block
is set on the new PVC
.
Edit and set the shell variables for the newly created PVC
.
export NEW_DST_PVC_SIZE=32Gi
+export NEW_DST_PVC_VOLMODE=Filesystem
+export NEW_DST_PVC=new-pvc
+export NEW_DST_PV=$(kubectl get pvc -o json | \
+ jq -r ".items[] | \
+ select(.metadata.name | \
+ test(\"${NEW_DST_PVC}\"))".spec.volumeName)
+
+Hint
+The PVC
name "new-pvc" is a placeholder name. When the procedure is done, the PVC
will have its original name restored.
At this point, there should be six shell variables declared. Example:
+
$ env | grep _PV
+NEW_DST_PVC_SIZE=32Gi
+NEW_DST_PVC=new-pvc
+OLD_SRC_PVC=old-pvc <-- This should be the original name of the PVC
+NEW_DST_PVC_VOLMODE=Filesystem
+NEW_DST_PV=pvc-ad7a05a9-c410-4c63-b997-51fb9fc473bf
+OLD_SRC_PV=pvc-ca7c2f64-641d-4265-90f8-4aed888bd2c5
+
+Regardless of the retainPolicy
set in the StorageClass
, ensure the persistentVolumeReclaimPolicy
is set to "Retain" for both PVs
.
kubectl patch pv/${OLD_SRC_PV} pv/${NEW_DST_PV} \
+ -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+
+Data Loss Warning
+It's EXTREMELY important no errors are returned from the above command. It WILL lead to data loss.
+Validate the "persistentVolumeReclaimPolicy".
+
kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \
+ jq -r ".items[] | \
+ select(.metadata.name)".spec.persistentVolumeReclaimPolicy
+
+Important
+The above command should output nothing but two lines with the word "Retain" on it.
+In this phase, the data will be copied from the original PVC
to the new PVC
with a Job
submitted to the cluster. Different tools are used to perform the copy operation, so be sure to pick the correct volumeMode
.
curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-file.yaml | \
+ yq "( select(.spec.template.spec.volumes[] | \
+ select(.name == \"src-pv\") | \
+ .persistentVolumeClaim.claimName = \"${OLD_SRC_PVC}\")
+ " | kubectl apply -f-
+
+Wait for the Job
to complete.
kubectl get job.batch/pvc-copy-file -w
+
+Once the Job
has completed, validate exit status and log files.
kubectl get job.batch/pvc-copy-file -o jsonpath='{.status.succeeded}'
+kubectl logs job.batch/pvc-copy-file
+
+Delete the Job
from the cluster.
kubectl delete job.batch/pvc-copy-file
+
+Proceed to restart the workload.
+
curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-block.yaml | \
+ yq "( select(.spec.template.spec.volumes[] | \
+ select(.name == \"src-pv\") | \
+ .persistentVolumeClaim.claimName = \"${OLD_SRC_PVC}\")
+ " | kubectl apply -f-
+
+Wait for the Job
to complete.
kubectl get job.batch/pvc-copy-block -w
+
+Hint
+Data is copied block for block, verbatim, regardless of how much application data is stored in the block devices.
+Once the Job
has completed, validate exit status and log files.
kubectl get job.batch/pvc-copy-block -o jsonpath='{.status.succeeded}'
+kubectl logs job.batch/pvc-copy-block
+
+Delete the Job
from the cluster.
kubectl delete job.batch/pvc-copy-block
+
+Proceed to restart the workload.
+This step requires both the old source PVC
and the new destination PVC
to be deleted. Once again, ensure the correct persistentVolumeReclaimPolicy
is set on the PVs
.
kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \
+ jq -r ".items[] | \
+ select(.metadata.name)".spec.persistentVolumeReclaimPolicy
+
+Important
+The above command should output nothing but two lines with the word "Retain" on it, if not revisit Important Validation Steps to apply the policy and ensure environment variables are set correctly.
+Delete the PVCs
.
kubectl delete pvc/${OLD_SRC_PVC} pvc/${NEW_DST_PVC}
+
+Next, allow the new PV
to be reclaimed.
kubectl patch pv ${NEW_DST_PV} -p '{"spec":{"claimRef": null }}'
+
+Next, create a PVC
with the old source name and ensure it matches the size of the new destination PVC
.
curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy.yaml | \
+ yq ".spec.volumeName = \"${NEW_DST_PV}\" | \
+ .metadata.name = \"${OLD_SRC_PVC}\" | \
+ .spec.volumeMode = \"${NEW_DST_PVC_VOLMODE}\" | \
+ .spec.resources.requests.storage = \"${NEW_DST_PVC_SIZE}\" \
+ " | kubectl apply -f-
+
+Verify the new PVC
is "Bound" to the correct PV
.
kubectl get pvc/${OLD_SRC_PVC} -o json | \
+ jq -r ". | \
+ select(.spec.volumeName == \"${NEW_DST_PV}\").metadata.name"
+
+If the command is successful, it should output your original PVC
name.
At this point the original workload should be deployed, verified and resumed.
+Optionally, the old source PV
may be removed.
kubectl delete pv/${OLD_SRC_PV}
+
+If there's a lot of content (millions of files, terabytes of data) that needs to be transferred in a volumeMode: Filesystem
PVC
it's recommended to transfer content incrementally. This is achieved by substituting the "old-pvc" with a dataSource
clone of the running workload and perform the copy from the clone onto the "new-pvc".
After the first transfer completes, the copy job may be recreated as many times as needed with a fresh clone of "old-pvc" until the downtime window has shrunk to an acceptable duration. For the final transfer, the actual source PVC
will be used instead of the clone.
This is an example PVC
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: clone-of-pvc
+spec:
+ dataSource:
+ name: this-is-the-current-prod-pvc
+ kind: PersistentVolumeClaim
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+
+Tip
+The capacity of the dataSource
clone must match the original PVC
.
Enabling and setting up the CSI snapshotter and related CRDs
is not necessary but it's recommended to be familiar with using CSI snapshots.
In the event the CSI driver contains updates to the NFS Server Provisioner, any running NFS server needs to be updated manually.
+Any prior deployed NFS servers may be upgraded to v2.5.0.
+No changes to NFS Server Provisioner image between v2.4.1 and v2.4.2.
+Any prior deployed NFS servers may be upgraded to v2.4.1.
+Important
+With v2.4.0 and onwards the NFS servers are deployed with default resource limits and in v2.5.0 resource requests were added. Those won't be applied on running NFS servers, only new ones.
+Namespace
.kubectl
, yq
and curl
.Namespace
.Seealso
+If NFS Deployments
are scattered across Namespaces
, use the Validation steps to find where they reside.
When patching the NFS Deployments
, the Pods
will restart and cause a pause in I/O for the NFS clients with active mounts. The clients will recover gracefully once the NFS Pod
is running again.
Patch all NFS Deployments
with the following.
curl -s https://scod.hpedev.io/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml | \
+ kubectl patch -n hpe-nfs \
+ $(kubectl get deploy -n hpe-nfs -o name) \
+ --patch-file=/dev/stdin
+
+Tip
+If it's desired to patch one NFS Deployment
at a time, replace the shell substitution with a Deployment
name.
This command will list all "hpe-nfs" Deployments
across the entire cluster. Each Deployment
should be using v3.0.5 of the "nfs-provisioner" image after the upgrade is complete.
kubectl get deploy -A -o yaml | \
+ yq -r '.items[] | [] + { "Namespace": select(.spec.template.spec.containers[].name == "hpe-nfs").metadata.namespace, "Deployment": select(.spec.template.spec.containers[].name == "hpe-nfs").metadata.name, "Image": select(.spec.template.spec.containers[].name == "hpe-nfs").spec.template.spec.containers[].image }'
+
+Note
+The above line is very long.
+With the release of HPE CSI Driver v2.4.0 it's possible to completely disable the node conformance and node configuration performed by the CSI node driver at startup. This transfers the responsibility from the HPE CSI Driver to the Kubernetes cluster administrator to ensure worker nodes boot with a supported configuration.
+Important
+This feature is mainly for users who require 100% control of the worker nodes.
+There are two stages of initialization the administrator can control through parameters in the Helm chart.
+The node conformance runs with the entrypoint of the node driver container. The conformance inserts and runs a systemd service on the node that installs all packages required for the node to attach block storage devices and mount NFS exports. It also starts the required services and configures an important udev rule on the worker node.
+This flag was intended to allow administrators to run the CSI driver on nodes with an unsupported or unconfigured package manager.
+If node conformance needs to be disabled for any reason, these packages and services need to be installed and running prior to installing the HPE CSI Driver:
+Package names and services vary greatly between different Linux distributions and it's the system administrator's duty to ensure these are available to the HPE CSI Driver.
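+As a hedged illustration only, the sketch below pre-installs equivalent packages and services on a RHEL-compatible worker node. The package names are assumptions for that distribution family and will differ on other distributions.
# Illustrative only: package names assume a RHEL-compatible distribution.
+dnf install -y iscsi-initiator-utils device-mapper-multipath nfs-utils xfsprogs e2fsprogs
+# Enable and start the services the CSI node driver expects to be running.
+systemctl enable --now iscsid multipathd
+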
+When disabling node configuration the CSI node driver will not touch the node at all. Besides indirectly disabling node conformance, all attempts to write configuration files or manipulate services during runtime are disabled.
+These steps are REQUIRED for disabling either node configuration or conformance.
+On each current and future worker node in the cluster:
+
# Don't let udev automatically scan targets(all luns) on Unit Attention.
+# This will prevent udev scanning devices which we are attempting to remove.
+
+if [ -f /lib/udev/rules.d/90-scsi-ua.rules ]; then
+ sed -i 's/^[^#]*scan-scsi-target/#&/' /lib/udev/rules.d/90-scsi-ua.rules
+ udevadm control --reload-rules
+fi
+
+Skip this step if only Fibre Channel is being used. This step is only required when node configuration is disabled.
+This example is taken from a Rocky Linux 9.2 node with the HPE parameters applied. Certain parameters may differ for other distributions of either iSCSI or the host OS.
+Note
+The location of this file varies between Linux and iSCSI distributions.
+Ensure iscsid
is stopped.
systemctl stop iscsid
+
+Download: /etc/iscsi/iscsid.conf
+
iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
+node.startup = manual
+node.leading_login = No
+node.session.timeo.replacement_timeout = 10
+node.conn[0].timeo.login_timeout = 15
+node.conn[0].timeo.logout_timeout = 15
+node.conn[0].timeo.noop_out_interval = 5
+node.conn[0].timeo.noop_out_timeout = 10
+node.session.err_timeo.abort_timeout = 15
+node.session.err_timeo.lu_reset_timeout = 30
+node.session.err_timeo.tgt_reset_timeout = 30
+node.session.initial_login_retry_max = 8
+node.session.cmds_max = 512
+node.session.queue_depth = 256
+node.session.xmit_thread_priority = -20
+node.session.iscsi.InitialR2T = No
+node.session.iscsi.ImmediateData = Yes
+node.session.iscsi.FirstBurstLength = 262144
+node.session.iscsi.MaxBurstLength = 16776192
+node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
+node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
+discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
+node.conn[0].iscsi.HeaderDigest = None
+node.session.nr_sessions = 1
+node.session.reopen_max = 0
+node.session.iscsi.FastAbort = Yes
+node.session.scan = auto
+
+Pro tip!
+When nodes are provisioned from some sort of templating system with iSCSI pre-installed, it's notoriously common that nodes are provisioned with identical IQNs. This will lead to device attachment problems that aren't obvious to the user. Make sure each node has a unique IQN.
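+A minimal sketch for verifying the IQN and, if it turns out to be a template duplicate, regenerating it (assumes the open-iscsi/iscsi-initiator-utils tooling is installed; restart iscsid afterwards):
# Show the current initiator name (IQN) of this node.
+cat /etc/iscsi/initiatorname.iscsi
+# Generate a new unique IQN if the node was cloned from a template.
+echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
+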
+Ensure iscsid
is running and enabled:
systemctl enable --now iscsid
+
+Seealso
+Some Linux distributions require the iscsi_tcp
kernel module to be loaded. Where kernel modules are loaded varies between Linux distributions.
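+As an example for systemd-based distributions (an assumption), the module can be loaded immediately and persisted across reboots like this:
modprobe iscsi_tcp
+echo iscsi_tcp > /etc/modules-load.d/iscsi_tcp.conf
+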
This step is only required when node configuration is disabled.
+The defaults section of the configuration file is merely a preference. Make sure to leave the device and blacklist stanzas intact when adding entries for foreign devices.
+Note
+The location of this file varies between Linux and iSCSI distributions.
+Ensure multipathd
is stopped.
systemctl stop multipathd
+
+Download: /etc/multipath.conf
+
defaults {
+ user_friendly_names yes
+ find_multipaths no
+ uxsock_timeout 10000
+}
+blacklist {
+ devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
+ devnode "^hd[a-z]"
+ device {
+ product ".*"
+ vendor ".*"
+ }
+}
+blacklist_exceptions {
+ property "(ID_WWN|SCSI_IDENT_.*|ID_SERIAL)"
+ device {
+ vendor "Nimble"
+ product "Server"
+ }
+ device {
+ product "VV"
+ vendor "3PARdata"
+ }
+ device {
+ vendor "TrueNAS"
+ product "iSCSI Disk"
+ }
+ device {
+ vendor "FreeNAS"
+ product "iSCSI Disk"
+ }
+}
+devices {
+ device {
+ product "Server"
+ rr_min_io_rq 1
+ dev_loss_tmo infinity
+ path_checker tur
+ rr_weight uniform
+ no_path_retry 30
+ path_selector "service-time 0"
+ failback immediate
+ fast_io_fail_tmo 5
+ vendor "Nimble"
+ hardware_handler "1 alua"
+ path_grouping_policy group_by_prio
+ prio alua
+ }
+ device {
+ path_grouping_policy group_by_prio
+ path_checker tur
+ rr_weight "uniform"
+ prio alua
+ failback immediate
+ hardware_handler "1 alua"
+ no_path_retry 18
+ fast_io_fail_tmo 10
+ path_selector "round-robin 0"
+ vendor "3PARdata"
+ dev_loss_tmo infinity
+ detect_prio yes
+ features "0"
+ rr_min_io_rq 1
+ product "VV"
+ }
+ device {
+ path_selector "queue-length 0"
+ rr_weight priorities
+ uid_attribute ID_SERIAL
+ vendor "TrueNAS"
+ product "iSCSI Disk"
+ path_grouping_policy group_by_prio
+ }
+ device {
+ path_selector "queue-length 0"
+ hardware_handler "1 alua"
+ rr_weight priorities
+ uid_attribute ID_SERIAL
+ vendor "FreeNAS"
+ product "iSCSI Disk"
+ path_grouping_policy group_by_prio
+ }
+}
+
+Ensure multipathd
is running and enabled:
systemctl enable --now multipathd
+
+While disabling both the conformance and configuration parameters lends itself to more predictable behavior when deploying nodes from templates with less runtime configuration, it's still not a complete solution for immutable nodes. The CSI node driver creates a unique identity for the node and stores it in /etc/hpe-storage/node.gob
. This file must persist across reboots and redeployments of the node OS image. Immutable Linux distributions such as CoreOS persist the /etc
directory; some don't.
In certain situations it's practical to expose the NFS exports outside the Kubernetes cluster to allow external applications to access data as part of an ETL (Extract, Transform, Load) pipeline or similar.
+Since this is an untested feature with questionable security standards, HPE does not recommend using this facility in production at this time. Reach out to your HPE account representative if this is a critical feature for your workloads.
+Danger
+The exports on the NFS servers do not have any network Access Control Lists (ACLs) and no root squash. Anyone with an NFS client that can reach the load balancer IP address has full access to the filesystem.
+The NFS server Service
must be transformed into a "LoadBalancer".
In this example we'll assume a "RWX" PersistentVolumeClaim
named "my-pvc-1" and NFS resources deployed in the default Namespace
, "hpe-nfs".
Retrieve NFS UUID
+
export UUID=$(kubectl get pvc my-pvc-1 -o jsonpath='{.spec.volumeName}{"\n"}' | awk -Fpvc- '{print $2}')
+
+Patch the NFS Service
:
kubectl patch -n hpe-nfs svc/hpe-nfs-${UUID} -p '{"spec":{"type": "LoadBalancer"}}'
+
+The Service
will be assigned an external IP address by the load balancer deployed in the cluster. If there is no load balancer deployed, a MetalLB example is provided below.
Deploying MetalLB is outside the scope of this document. In this example, MetalLB was deployed on OpenShift 4.16 (Kubernetes v1.29) using the Operator provided by Red Hat in the "metallb-system" Namespace
.
Determine the IP address range that will be assigned to the load balancers. In this example, 192.168.1.40 to 192.168.1.60 is being used. Note that the worker nodes in this cluster already have reachable IP addresses in the 192.168.1.0/24 network, which is a requirement.
+Create the MetalLB instances, IP address pool and Layer 2 advertisement.
+
---
+apiVersion: metallb.io/v1beta1
+kind: MetalLB
+metadata:
+ name: metallb
+ namespace: metallb-system
+
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ namespace: metallb-system
+ name: hpe-nfs-servers
+spec:
+ protocol: layer2
+ addresses:
+ - 192.168.1.40-192.168.1.60
+
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: l2advertisement
+ namespace: metallb-system
+spec:
+ ipAddressPools:
+ - hpe-nfs-servers
+
+Shortly, the external IP address of the NFS Service
patched in the previous steps should have an IP address assigned.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
+hpe-nfs-UUID LoadBalancer 172.30.217.203 192.168.1.40 <long list of ports>
+
+Mounting the NFS export externally is now possible.
+As root:
+
mount -t nfs4 192.168.1.40:/export /mnt
+
+Note
+If the NFS server is rescheduled in the Kubernetes cluster, the load balancer IP address follows, and the client will recover and resume IO after a few minutes.
+In certain situations it's desirable to run the NFS Server Provisioner image without the dual PersistentVolumeClaim
(PVC) semantic in a more static fashion on top of a PVC
provisioned by a non-HPE CSI Driver StorageClass
.
Notice
+Since HPE CSI Driver for Kubernetes v2.4.1, this functionality is built into the CSI driver. See Using a Foreign StorageClass how to use it.
+Namespace
without risk of conflict but not recommended.Pods
status for the "NodeLost" condition is not included with the standalone NFS server and recovery is at the mercy of the underlying storage platform and driver.It's assumed during the creation steps that a Kubernetes cluster is available with enough permissions to deploy privileged Pods
with SYS_ADMIN
and DAC_READ_SEARCH
capabilities. All steps are run in a terminal with kubectl
and git
in the path.
StorageClass
declared on the clustercsi.hpe.com/hpe-nfs: "true"
kubectl
and Kubernetes v1.21 or newerNFS server configurations are managed with the kustomize templating system. Clone this repository to get started and change working directory.
+
git clone https://github.com/hpe-storage/scod
+cd scod/docs/csi_driver/examples/standalone_nfs
+
+In the current directory, various manifests and configuration directives exist to deploy and manage NFS servers.
+Run tree .
in the current directory:
.
+├── base
+│ ├── configmap.yaml
+│ ├── deployment.yaml
+│ ├── environment.properties
+│ ├── kustomization.yaml
+│ ├── pvc.yaml
+│ ├── service.yaml
+│ └── values.yaml
+└── overlays
+ └── example
+ ├── deployment.yaml
+ ├── environment.properties
+ └── kustomization.yaml
+
+4 directories, 10 files
+
+Important
+The current directory is now the "home" for the remainder of this guide.
+Copy the "example" overlay into a new directory. In the examples "my-server" is used.
+
cp -a overlays/example overlays/my-server
+
+Edit both "environment.properties" and "kustomization.yaml" in the newly created overlay. Also pay attention to whether the remote Pods
mounting the NFS export run as a non-root user; if that's the case, the group ID of those Pods
is needed (customizable per NFS server).
# This is the domain associated with worker node (not inter-cluster DNS)
+CLUSTER_NODE_DOMAIN_NAME=my-domain.example.com
+
+# The size of the backend RWO claim
+PERSISTENCE_SIZE=16Gi
+
+# Default resource limits for the NFS server
+NFS_SERVER_CPU_LIMIT=1
+NFS_SERVER_MEMORY_LIMIT=2Gi
+
+The "CLUSTER_NODE_DOMAIN_NAME" variable refers to the DNS domain name that the worker node is resolvable in, not the Kubernetes cluster DNS.
+The "PERSISTENCE_SIZE" is the backend PVC
size expressed in the same format accepted by a PVC
.
+Configuring resource limits is optional but recommended for high performance workloads.
+Change the resource prefix in "kustomization.yaml" either with an editor or sed
:
sed -i"" 's/example-/my-server-/g' overlays/my-server/kustomization.yaml
+
+Seealso
+If the NFS server needs to be deployed in a different Namespace
than the current, edit and uncomment the "namespace" parameter in overlays/my-server/kustomization.yaml
.
The default "fsGroup" is mapped to "nobody" (gid=65534) which allows remote Pods
running as the root user to write to the NFS export. This may not be desirable as best practices dictate that Pods
should run with a user id larger than 99.
To allow user Pods
to write in the export, edit overlays/my-server/deployment.yaml
and change the "fsGroup" to the corresponding gid running in the remote Pod
.
apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: hpe-nfs
+spec:
+ template:
+ spec:
+ securityContext:
+ fsGroup: 65534
+ fsGroupChangePolicy: OnRootMismatch
+
+Deploy the NFS server by issuing kubectl apply -k overlays/my-server
:
configmap/my-server-hpe-nfs-conf created
+configmap/my-server-local-conf-97898bftbh created
+service/my-server-hpe-nfs created
+persistentvolumeclaim/my-server-hpe-nfs created
+deployment.apps/my-server-hpe-nfs created
+
+Inspect the resources with kubectl get -k overlays/my-server
:
NAME DATA AGE
+configmap/my-server-hpe-nfs-conf 1 59s
+configmap/my-server-local-conf-97898bftbh 2 59s
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/my-server-hpe-nfs ClusterIP 10.100.200.11 <none> 49000/TCP,2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,111/TCP,111/UDP,662/TCP,662/UDP,875/TCP,875/UDP 59s
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+persistentvolumeclaim/my-server-hpe-nfs Bound pvc-ae943116-d0af-4696-8b1b-1dcf4316bdc2 18Gi RWO vsphere-sc 58s
+
+NAME READY UP-TO-DATE AVAILABLE AGE
+deployment.apps/my-server-hpe-nfs 1/1 1 1 59s
+
+Make a note of the IP address assigned to "service/my-server-hpe-nfs", that is the IP address needed to mount the NFS export.
+Tip
+If the Kubernetes cluster DNS service is resolvable from the worker node host OS, it's possible to use the cluster DNS address to mount the Service
, in this example that would be "my-server-hpe-nfs.default.svc.cluster.local".
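+For example, assuming the Service name and Namespace used in this guide, the export could then be mounted as root like this:
mount -t nfs4 my-server-hpe-nfs.default.svc.cluster.local:/export /mnt
+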
There are two ways to mount the NFS server.
+PersistentVolume
with the NFS server details and mount options and manually claiming the PV
with a PVC
using the .spec.volumeName
parameterThis is the most elegant solution as it does not require any intermediary PVC
or PV
and directly refers to the NFS server within a workload stanza.
This is an example from a StatefulSet
workload controller having multiple replicas.
...
+spec:
+ replicas: 3
+ template:
+ ...
+ spec:
+ containers:
+ volumeMounts:
+ - name: vol
+ mountPath: /vol
+ ...
+ volumes:
+ - name: vol
+ nfs:
+ server: 10.100.200.11
+ path: /export
+
+Important
+Replace .spec.template.spec.volumes[].nfs.server
with the IP address of the actual Service
and not the example value.
Refer to the official Kubernetes documentation for the built-in NFS client on how to perform static provisioning of NFS PVs
and PVCs
.
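+As a convenience, a minimal sketch of such a statically provisioned pair is shown below. The server IP address, capacity and object names are assumptions and must be adapted to the actual NFS Service.
---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-static-nfs-pv
+spec:
+  capacity:
+    storage: 16Gi
+  accessModes:
+    - ReadWriteMany
+  persistentVolumeReclaimPolicy: Retain
+  nfs:
+    server: 10.100.200.11
+    path: /export
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-static-nfs-pvc
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: ""
+  volumeName: my-static-nfs-pv
+  resources:
+    requests:
+      storage: 16Gi
+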
If the StorageClass
and underlying CSI driver supports volume expansion, simply edit overlays/my-server/environment.properties
with the new (larger) size and issue kubectl apply -k overlays/my-server
to expand the volume.
Ensure no workloads have active mounts against the NFS server Service
. If there are, those Pods
will be stuck indefinitely.
Run kubectl delete -k overlays/my-server
:
configmap "my-server-hpe-nfs-conf" deleted
+configmap "my-server-local-conf-97898bftbh" deleted
+service "my-server-hpe-nfs" deleted
+persistentvolumeclaim "my-server-hpe-nfs" deleted
+deployment.apps "my-server-hpe-nfs" deleted
+
+Caution
+Unless the StorageClass
"reclaimPolicy" is set to "Retain". The underlying PV
will be deleted from the cluster and data needs to be restored from backups if needed.
At this point the CSI driver and CSP should be installed and configured.
+Important
+Most examples below assumes there's a Secret
named "hpe-backend" in the "hpe-storage" Namespace
. Learn how to add Secrets
in the Deployment section.
Tip
+If you're familiar with the basic concepts of persistent storage on Kubernetes and are looking for an overview of example YAML declarations for different object types supported by the HPE CSI driver, visit the source code repo on GitHub.
+The HPE CSI Driver for Kubernetes is primarily a ReadWriteOnce
(RWO) CSI implementation for block based storage. The CSI driver also supports ReadWriteMany
(RWX) and ReadOnlyMany
(ROX) using a NFS Server Provisioner. It's enabled by transparently deploying a NFS server for each Persistent Volume Claim (PVC) against a StorageClass
where it's enabled, that in turn is backed by a traditional RWO claim. Most of the examples featured on SCOD are illustrated as RWO using block based storage, but many of the examples apply in the majority of use cases.
Access Mode | Abbreviation | Use Case |
---|---|---|
ReadWriteOnce | RWO | For high performance Pods where access to the PVC is exclusive to one host at a time. May use either block based storage or the NFS Server Provisioner where connectivity to the data fabric is limited to a few worker nodes in the Kubernetes cluster. |
ReadWriteOncePod | RWOP | Exclusive access by a single Pod. Not currently supported by the HPE CSI Driver. |
ReadWriteMany | RWX | For shared filesystems where multiple Pods in the same Namespace need simultaneous access to a PVC across multiple nodes. |
ReadOnlyMany | ROX | Read-only representation of RWX. |
ReadWriteOnce and access by multiple Pods
+Pods
that require access to the same "ReadWriteOnce" (RWO) PVC need to reside on the same node and Namespace
by using selectors or affinity scheduling rules applied when deployed. If not configured correctly, the Pod
will fail to start and will throw a "Multi-Attach" error in the event log if the PVC is already attached to a Pod
that has been scheduled on a different node within the cluster.
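+As a hedged sketch of such a scheduling rule, the Pod below uses podAffinity to land on the same node as Pods labeled app: my-app that already use the claim. The labels, claim name and image are illustrative only.
apiVersion: v1
+kind: Pod
+metadata:
+  name: my-colocated-pod
+spec:
+  affinity:
+    podAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+        - labelSelector:
+            matchLabels:
+              app: my-app
+          topologyKey: kubernetes.io/hostname
+  containers:
+    - name: app
+      image: busybox
+      command: ["sleep", "3600"]
+      volumeMounts:
+        - name: data
+          mountPath: /data
+  volumes:
+    - name: data
+      persistentVolumeClaim:
+        claimName: the-shared-rwo-pvc
+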
The NFS Server Provisioner is not enabled by the default StorageClass
and needs a custom StorageClass
. The following sections are tailored to help understand the NFS Server Provisioner capabilities.
StorageClass
parametersSupport for VolumeSnapshotClasses
and VolumeSnapshots
is available from Kubernetes 1.17+. The snapshot CRDs and the common snapshot controller needs to be installed manually. As per Kubernetes TAG Storage, these should not be installed as part of a CSI driver and should be deployed by the Kubernetes cluster vendor or user.
+Ensure the snapshot CRDs and common snapshot controller haven't been installed already.
+
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
+ volumesnapshotcontents.snapshot.storage.k8s.io \
+ volumesnapshotclasses.snapshot.storage.k8s.io
+
+Vendors may package, name and deploy the common snapshot controller using their own naming conventions. Run the command below and look for workload names that contain "snapshot".
+
kubectl get sts,deploy -A
+
+If no prior CRDs or controllers exist, install the snapshot CRDs and common snapshot controller (once per Kubernetes cluster, independent of any CSI drivers).
+# Kubernetes 1.27-1.30
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v8.0.1 -b hpe-csi-driver-v2.5.0
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.26-1.29
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.2
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.26-1.29
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.1
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.25-1.28
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v6.2.2 -b hpe-csi-driver-v2.4.0
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.23-1.26
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v5.0.1 -b hpe-csi-driver-v2.3.0
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
Tip
+The provisioning section contains examples on how to create VolumeSnapshotClass
and VolumeSnapshot
objects.
Each CSP has its own set of unique parameters to control the provisioning behavior. These examples serve as a base StorageClass
example for each version of Kubernetes. See the respective CSP for more elaborate examples.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ name: hpe-standard
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "Volume created by the HPE CSI Driver for Kubernetes"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
+Important
+Replace "hpe-backend" with a Secret
relevant to the backend being referenced.
Common HPE CSI Driver StorageClass
parameters across CSPs.
Parameter | +String | +Description | +
---|---|---|
accessProtocol | +Text | +The access protocol to use when accessing the persistent volume ("fc" or "iscsi"). Default: "iscsi" | +
chapSecretName | +Text | +Name of Secret to use for iSCSI CHAP. |
+
chapSecretNamespace | +Text | +Namespace of Secret to use for iSCSI CHAP. |
+
description1 | +Text | +Text to be added to the volume PV metadata on the backend CSP. Default: "" | +
csi.storage.k8s.io/fstype | +Text | +Filesystem to format new volumes with. XFS is preferred, ext3, ext4 and btrfs is supported. Defaults to "ext4" if omitted. | +
fsOwner | +userId:groupId | +The user id and group id that should own the root directory of the filesystem. | +
fsMode | +Octal digits | +1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. | +
fsCreateOptions | +Text | +A string to be passed to the mkfs command. These flags are opaque to CSI and are therefore not validated. To protect the node, only the following characters are allowed: [a-zA-Z0-9=, \-] . |
+
fsRepair | +Boolean | +When set to "true", if a mount fails and filesystem corruption is detected, this parameter will control if an actual repair will be attempted. Default: "false". Note: fsRepair is unable to detect or remedy corrupted filesystems that are already mounted. Data loss may occur during the attempt to repair the filesystem. |
+
nfsResources | +Boolean | +When set to "true", requests against the StorageClass will create resources for the NFS Server Provisioner (Deployment , RWO PVC and Service ). Required parameter for ReadWriteMany and ReadOnlyMany accessModes. Default: "false" |
+
nfsForeignStorageClass | +Text | +Provision NFS servers on PVCs from a different StorageClass . See Using a Foreign StorageClass |
+
nfsNamespace | +Text | +Resources are by default created in the "hpe-nfs" Namespace . If CSI VolumeSnapshotClass and dataSource functionality is required on the requesting claim, requesting and backing PVC need to exist in the requesting Namespace . A value of "csi.storage.k8s.io/pvc/namespace" will provision resources in the requesting PVC Namespace . |
+
nfsNodeSelector | +Text | +Customize the nodeSelector label value for the NFS Pod . The default behavior is to omit the nodeSelector . |
+
nfsMountOptions | +Text | +Customize NFS mount options for the Pods to the server Deployment . Uses mount command defaults from the node. |
+
nfsProvisionerImage | +Text | +Customize provisioner image for the server Deployment . Default: Official build from "hpestorage/nfs-provisioner" repo |
+
nfsResourceRequestsCpuM | +Text | +Specify CPU requests for the server Deployment in milli CPU. Default: "500m". Example: "4000m" |
+
nfsResourceRequestsMemoryMi | +Text | +Specify memory requests (in megabytes) for the server Deployment . Default: "512Mi". Example: "4096Mi". |
+
nfsResourceLimitsCpuM | +Text | +Specify CPU limits for the server Deployment in milli CPU. Default: "1000m". Example: "4000m" |
+
nfsResourceLimitsMemoryMi | +Text | +Specify memory limits (in megabytes) for the server Deployment . Default: "2048Mi". Example: "500Mi". Recommended minimum: "2048Mi". |
+
hostEncryption | +Boolean | +Direct the CSI driver to invoke Linux Unified Key Setup (LUKS) via the dm-crypt kernel module. Default: "false". See Volume encryption to learn more. |
+
hostEncryptionSecretName | +Text | +Name of the Secret to use for the volume encryption. Mandatory if "hostEncryption" is enabled. Default: "" |
+
hostEncryptionSecretNamespace | +Text | +Namespace where to find "hostEncryptionSecretName". Default: "" |
+
1 = Parameter is mutable using the CSI Volume Mutator.
+Note
+All common HPE CSI Driver parameters are optional.
+Familiarize yourself with the iSCSI CHAP Considerations before proceeding. This section describes how to enable iSCSI CHAP with HPE CSI Driver 2.5.0 and later.
+Create an iSCSI CHAP Secret
. The referenced CHAP account does not need to exist on the storage backend, it will be created by the CSP if it doesn't exist.
apiVersion: v1
+kind: Secret
+metadata:
+ name: my-chap-secret
+ namespace: hpe-storage
+stringData:
+ # Up to 64 characters including \-:., must start with an alpha-numeric character.
+ chapUser: "my-chap-user"
+ # Between 12 to 16 alpha-numeric characters.
+ chapPassword: "my-chap-password"
+
+Once the Secret
has been created, there are two methods available to use it depending on the situation, cluster-wide or per StorageClass
.
The cluster-wide iSCSI CHAP credentials will be used by all iSCSI-based PersistentVolumes
regardless of backend and StorageClass
. The CHAP Secret
is simply referenced during install of the HPE CSI Driver for Kubernetes Helm Chart. The Secret
and Namespace
needs to exist prior to install.
Example:
+
helm install my-hpe-csi-driver -n hpe-storage \
+ hpe-storage/hpe-csi-driver \
+ --set iscsi.chapSecretName=my-chap-secret
+
+Important
+Once a PersistentVolume
has been provisioned with cluster-wide iSCSI CHAP credentials it's not possible to switch over to per StorageClass
iSCSI CHAP credentials.
If CSI driver 2.4.2 or earlier has been used, cluster-wide iSCSI CHAP credentials is the only way to provide the credentials for volumes provisioned with 2.4.2 or earlier.
The CHAP Secret
may be referenced in a StorageClass
.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ name: hpe-standard
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "Volume created by the HPE CSI Driver for Kubernetes"
+ chapSecretNamespace: hpe-storage
+ chapSecretName: my-chap-secret
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
+Warning
+The iSCSI CHAP credentials are in reality per iSCSI Target. Do NOT create multiple StorageClasses
referencing different CHAP Secrets
with different credentials for the same backend. It will result in a data outage with conflicting sessions.
Ensure the same Secret
is referenced in all StorageClasses
using a particular backend.
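+One way to audit this is to list which CHAP Secret each StorageClass references:
kubectl get sc -o custom-columns=NAME:.metadata.name,CHAP_SECRET:.parameters.chapSecretName,CHAP_NAMESPACE:.parameters.chapSecretNamespace
+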
These instructions are provided as an example on how to use the HPE CSI Driver with a CSP supported by HPE.
+New to Kubernetes?
+There's a basic tutorial of how dynamic provisioning of persistent storage on Kubernetes works in the Video Gallery.
+The below YAML declarations are meant to be created with kubectl create
. Either copy the content to a file on the host where kubectl
is being executed, or copy & paste into the terminal, like this:
kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+
+To get started, create a StorageClass
API object referencing the CSI driver Secret
relevant to the backend.
These examples are for Kubernetes 1.15+
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-scod
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "Volume created by the HPE CSI Driver for Kubernetes"
+ accessProtocol: iscsi
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
+Create a PersistentVolumeClaim
. This object declaration ensures a PersistentVolume
is created and provisioned on your behalf, make sure to reference the correct .spec.storageClassName
:
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-first-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+ storageClassName: hpe-scod
+
+Note
+In most environments, there is a default StorageClass
declared on the cluster. In such a scenario, the .spec.storageClassName
can be omitted. The default StorageClass
is controlled by an annotation: .metadata.annotations.storageclass.kubernetes.io/is-default-class
set to either "true"
or "false"
.
After the PersistentVolumeClaim
has been declared, check that a new PersistentVolume
is created based on your claim:
kubectl get pv
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
+pvc-13336da3-7... 32Gi RWO Delete Bound default/my-first-pvc hpe-scod 3s
+
+The above output means that the HPE CSI Driver successfully provisioned a new volume. The volume is not attached to any node yet. It will only be attached to a node if a scheduled workload requests the PersistentVolumeClaim
. Now, let us create a Pod
that refers to the above volume. When the Pod
is created, the volume will be attached, formatted and mounted according to the specification.
kind: Pod
+apiVersion: v1
+metadata:
+ name: my-pod
+spec:
+ containers:
+ - name: pod-datelog-1
+ image: nginx
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: export1
+ mountPath: /data
+ - name: pod-datelog-2
+ image: debian
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: export1
+ mountPath: /data
+ volumes:
+ - name: export1
+ persistentVolumeClaim:
+ claimName: my-first-pvc
+
+Check if the Pod
is running successfully.
kubectl get pod my-pod
+NAME READY STATUS RESTARTS AGE
+my-pod 2/2 Running 0 2m29s
+
+Tip
+A simple Pod
does not provide any automatic recovery if the node the Pod
is scheduled on crashes or become unresponsive. Please see the official Kubernetes documentation for different workload types that provide automatic recovery. A shortlist of recommended workload types that are suitable for persistent storage is available in this blog post and best practices are outlined in this blog post.
It's possible to declare a volume "inline" a Pod
specification. The volume is ephemeral and only persists as long as the Pod
is running. If the Pod
gets rescheduled, deleted or upgraded, the volume is deleted and a new volume gets provisioned if it gets restarted.
Ephemeral inline volumes are not associated with a StorageClass
, hence a Secret
needs to be provided inline with the volume.
Warning
+Allowing user Pods
to access the CSP Secret
gives them the same privileges on the backend system as the HPE CSI Driver.
There are two ways to declare the Secret
with ephemeral inline volumes, either the Secret
is in the same Namespace
as the workload being declared or it resides in a foreign Namespace
.
Local Secret
:
apiVersion: v1
+kind: Pod
+metadata:
+ name: my-pod-inline-mount-1
+spec:
+ containers:
+ - name: pod-datelog-1
+ image: nginx
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: my-volume-1
+ mountPath: /data
+ volumes:
+ - name: my-volume-1
+ csi:
+ driver: csi.hpe.com
+ nodePublishSecretRef:
+ name: hpe-backend
+ fsType: ext3
+ volumeAttributes:
+ csi.storage.k8s.io/ephemeral: "true"
+ accessProtocol: "iscsi"
+ size: "5Gi"
+
+Foreign Secret
:
apiVersion: v1
+kind: Pod
+metadata:
+ name: my-pod-inline-mount-2
+spec:
+ containers:
+ - name: pod-datelog-1
+ image: nginx
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: my-volume-1
+ mountPath: /data
+ volumes:
+ - name: my-volume-1
+ csi:
+ driver: csi.hpe.com
+ fsType: ext3
+ volumeAttributes:
+ csi.storage.k8s.io/ephemeral: "true"
+ inline-volume-secret-name: hpe-backend
+ inline-volume-secret-namespace: hpe-storage
+ accessProtocol: "iscsi"
+ size: "7Gi"
+
+The parameters used in the examples are the bare minimum required parameters. Any parameters supported by the HPE CSI Driver and backend CSP may be used for ephemeral inline volumes. See the base StorageClass parameters or the respective CSP being used.
+Seealso
+For more elaborate use cases around ephemeral inline volumes, check out the tutorial on HPE Developer: Using Ephemeral Inline Volumes on Kubernetes
+The default volumeMode
for a PersistentVolumeClaim
is Filesystem
. If a raw block volume is desired, volumeMode
needs to be set to Block
. No filesystem will be created. Example:
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-block
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+ storageClassName: hpe-scod
+ volumeMode: Block
+
+Note
+The accessModes
may be set to ReadWriteOnce
, ReadWriteMany
or ReadOnlyMany
. It's expected that the application handles read/write IO, volume locking and access in the event of concurrent block access from multiple nodes. Consult the Alletra 6000 CSP documentation if using ReadWriteMany
raw block volumes with FC on Nimble, Alletra 5000 or 6000.
Mapping the device in a Pod
specification is slightly different than using regular filesystems as a volumeDevices
section is added instead of a volumeMounts
stanza:
apiVersion: v1
+kind: Pod
+metadata:
+ name: my-pod-block
+spec:
+ containers:
+ - name: my-null-pod
+ image: fedora:31
+ command: ["/bin/sh", "-c"]
+ args: [ "tail -f /dev/null" ]
+ volumeDevices:
+ - name: data
+ devicePath: /dev/xvda
+ volumes:
+ - name: data
+ persistentVolumeClaim:
+ claimName: my-pvc-block
+
+Seealso
+There's an in-depth tutorial available on HPE Developer that covers raw block volumes: Using Raw Block Volumes on Kubernetes
+CSI introduces snapshots as native objects in Kubernetes that allows end-users to provision VolumeSnapshot
objects from an existing PersistentVolumeClaim
. New PVCs may then be created using the snapshot as a source.
Tip
+Ensure CSI snapshots are enabled.
+
There's a tutorial in the Video Gallery on how to use CSI snapshots and clones.
Start by creating a VolumeSnapshotClass
referencing the Secret
and defining additional snapshot parameters.
apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+ name: hpe-snapshot
+ annotations:
+ snapshot.storage.kubernetes.io/is-default-class: "true"
+driver: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+ description: "Snapshot created by the HPE CSI Driver"
+ csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
+ csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
+ csi.storage.k8s.io/snapshotter-list-secret-name: hpe-backend
+ csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage
+
+Note
+Container Storage Providers may have optional parameters to the VolumeSnapshotClass
.
Create a VolumeSnapshot
. This will create a new snapshot of the volume.
apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+ name: my-snapshot
+spec:
+ source:
+ persistentVolumeClaimName: my-pvc
+
+Tip
+If a specific VolumeSnapshotClass
is desired, use .spec.volumeSnapshotClassName
to call it out.
Check that a new VolumeSnapshot
is created based on your claim:
kubectl describe volumesnapshot my-snapshot
+Name: my-snapshot
+Namespace: default
+...
+Status:
+ Creation Time: 2019-05-22T15:51:28Z
+ Ready: true
+ Restore Size: 32Gi
+
+It's now possible to create a new PersistentVolumeClaim
from the VolumeSnapshot
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-from-snapshot
+spec:
+ dataSource:
+ name: my-snapshot
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+
+Important
+The size in .spec.resources.requests.storage
must match the .spec.dataSource
size.
+The .spec.dataSource
attribute may also clone a PersistentVolumeClaim
directly, without creating a VolumeSnapshot
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-from-pvc
+spec:
+ dataSource:
+ name: my-pvc
+ kind: PersistentVolumeClaim
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+
+Again, the size in .spec.resources.requests.storage
must match the source PersistentVolumeClaim
. This can get sticky from an automation perspective if volume expansion has been used on the source volume. It's recommended to inspect the source PersistentVolumeClaim
or VolumeSnapshot
size prior to creating a clone.
Learn more
+For a more comprehensive tutorial on how to use snapshots and clones with CSI on Kubernetes 1.17, see HPE CSI Driver for Kubernetes: Snapshots, Clones and Volume Expansion on HPE Developer.
+PersistentVolumeClaims
created in a particular Namespace
from the same storage backend may be grouped together in a VolumeGroup
. A VolumeGroup
is what may be known as a "consistency group" in other storage infrastructure systems. This allows certain attributes to be managed on an abstract group, and those attributes then apply to all member volumes in the group instead of being managed on each volume individually. One such aspect is creating snapshots with referential integrity between volumes or setting a performance attribute that would have accounting made on the logical group rather than the individual volume.
Tip
+A tutorial on how to use VolumeGroups
and SnapshotGroups
is available in the Video Gallery.
Before grouping PeristentVolumeClaims
there needs to be a VolumeGroupClass
created. It needs to reference a Secret
that corresponds to the same backend the PersistentVolumeClaims
were created on. A VolumeGroupClass
is a cluster resource that needs administrative privileges to create.
apiVersion: storage.hpe.com/v1
+kind: VolumeGroupClass
+metadata:
+ name: my-volume-group-class
+provisioner: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+ description: "HPE CSI Driver for Kubernetes Volume Group"
+ csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
+ csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
+
+Note
+The VolumeGroupClass
.parameters
may contain CSP specifc parameters. Check the documentation of the Container Storage Provider being used.
Once the VolumeGroupClass
is in place, users may create VolumeGroups
. The VolumeGroups
are just like PersistentVolumeClaims
part of a Namespace
and both resources need to be in the same Namespace
for the grouping to be successful.
apiVersion: storage.hpe.com/v1
+kind: VolumeGroup
+metadata:
+ name: my-volume-group
+spec:
+ volumeGroupClassName: my-volume-group-class
+
+Depending on the CSP being used, the VolumeGroup
may reference an object that corresponds to the Kubernetes API object. It's not until users annotates their PersistentVolumeClaims
the VolumeGroup
gets populated.
Adding a PersistentVolumeClaim
to a VolumeGroup
:
kubectl annotate pvc/my-pvc csi.hpe.com/volume-group=my-volume-group
+
+Removing a PersistentVolumeClaim
from a VolumeGroup
:
kubectl annotate pvc/my-pvc csi.hpe.com/volume-group-
+
+Tip
+While adding the PersistentVolumeClaim
to the VolumeGroup
is instant, removal requires one reconciliation loop and might not immediately be reflected on the VolumeGroup
object.
Being able to create snapshots of the VolumeGroup
require the CSI external-snapshotter to be installed and also require a VolumeSnapshotClass
configured using the same storage backend as the VolumeGroup
. Once those pieces are in place, a SnapshotGroupClass
needs to be created. SnapshotGroupClasses
are cluster objects created by an administrator.
apiVersion: storage.hpe.com/v1
+kind: SnapshotGroupClass
+metadata:
+ name: my-snapshot-group-class
+snapshotter: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+ csi.hpe.com/snapshot-group-snapshotter-secret-name: hpe-backend
+ csi.hpe.com/snapshot-group-snapshotter-secret-namespace: hpe-storage
+
+Creating a SnapshotGroup
is later performed using the VolumeGroup
as a source while referencing a SnapshotGroupClass
and a VolumeSnapshotClass
.
apiVersion: storage.hpe.com/v1
+kind: SnapshotGroup
+metadata:
+ name: my-snapshot-group-1
+spec:
+ source:
+ kind: VolumeGroup
+ apiGroup: storage.hpe.com
+ name: my-volume-group
+ snapshotGroupClassName: my-snapshot-group-class
+ volumeSnapshotClassName: hpe-snapshot
+
+Once the SnapshotGroup
has been successfully created, the individual VolumeSnapshots
are now available in the Namespace
.
List VolumeSnapshots
:
kubectl get volumesnapshots
+
+If no VolumeSnapshots
are being enumerated, see the diagnostics section on how to inspect the component logs.
New feature!
+Volume Groups and Snapshot Groups got introduced in HPE CSI Driver for Kubernetes 1.4.0.
+To perform expansion operations on Kubernetes 1.14+, you must enhance your StorageClass
with the .allowVolumeExpansion: true
key. Please see base StorageClass
parameters for additional information.
Then, a volume provisioned by a StorageClass
with expansion attributes may have its PersistentVolumeClaim
expanded by altering the .spec.resources.requests.storage
key of the PersistentVolumeClaim
.
This may be done by the kubectl patch
command.
kubectl patch pvc/my-pvc --patch '{"spec": {"resources": {"requests": {"storage": "64Gi"}}}}'
+persistentvolumeclaim/my-pvc patched
+
+The new PersistentVolumeClaim
size may be observed with kubectl get pvc/my-pvc
after a few moments.
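+For example, the reported capacity can be read directly from the claim status:
kubectl get pvc/my-pvc -o jsonpath='{.status.capacity.storage}{"\n"}'
+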
The HPE CSI Driver allows the PersistentVolumeClaim
to override the StorageClass
parameters by annotating the PersistentVolumeClaim
. Define the parameters allowed to be overridden in the StorageClass
by setting the allowOverrides
parameter:
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-scod-override
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ description: "Volume provisioned by the HPE CSI Driver"
+ accessProtocol: iscsi
+ allowOverrides: description,accessProtocol
+
+The end-user may now control those parameters (the StorageClass
provides the default values).
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-override
+ annotations:
+ csi.hpe.com/description: "This is my custom description"
+ csi.hpe.com/accessProtocol: fc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Gi
+ storageClassName: hpe-scod-override
+
+The HPE CSI Driver (version 1.3.0 and later) allows the CSP backend volume to be mutated by annotating the PersistentVolumeClaim
. Define the parameters allowed to be mutated in the StorageClass
by setting the allowMutations
parameter.
Tip
+There's a tutorial available on YouTube accessible through the Video Gallery on how to use volume mutations to adapt stateful workloads with the HPE CSI Driver.
+Important
+In order to mutate a StorageClass
parameter it needs to have a default value set in the StorageClass
. In the example below we'll allow mutatating "description". If the parameter "description" wasn't set when the PersistentVolume
was provisioned, no subsequent mutations are allowed. The CSP may set defaults for certain parameters during provisioning, if those are mutable, the mutation will be performed.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-scod-mutation
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ description: "Volume provisioned by the HPE CSI Driver"
+ allowMutations: description
+
+Note
+The allowMutations
parameter is a comma separated list of values defined by each of the CSPs parameters, except the description
parameter, which is common across all CSPs. See the documentation for each CSP on what parameters are mutable.
The end-user may now control those parameters by editing or patching the PersistentVolumeClaim
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-mutation
+ annotations:
+ csi.hpe.com/description: "My description needs to change"
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Gi
+ storageClassName: hpe-scod-mutation
+
+Good to know
+As the .spec.csi.volumeAttributes
on the PersistentVolume
are immutable, the mutations performed on the backend volume are also annotated on the PersistentVolume
object.
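+For example, assuming the claim name used above, the recorded mutations can be inspected on the bound PersistentVolume like this:
kubectl get pv $(kubectl get pvc my-pvc-mutation -o jsonpath='{.spec.volumeName}') -o jsonpath='{.metadata.annotations}'
+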
Enabling the NFS Server Provisioner to allow "ReadWriteMany" and "ReadOnlyMany" access mode for a PVC
is straightforward. Create a new StorageClass
and set .parameters.nfsResources
to "true"
. Any subsequent claim to the StorageClass
will create a NFS server Deployment
on the cluster with the associated objects running on top of a "ReadWriteOnce" PVC
.
Any "RWO" claim made against the StorageClass
will also create a NFS server Deployment
. This allows diverse connectivity options among the Kubernetes worker nodes as the HPE CSI Driver will look for nodes labelled csi.hpe.com/hpe-nfs=true
(or using a custom value specified in .parameters.nfsNodeSelector
) before submitting the workload for scheduling. This allows dedicating NFS worker nodes, without user workloads, by using taints and tolerations. The NFS server Pod
is armed with a csi.hpe.com/hpe-nfs
toleration. It's required to taint dedicated NFS worker nodes if they truly need to be dedicated.
By default, the NFS Server Provisioner deploy resources in the "hpe-nfs" Namespace
. This makes it easy to manage and diagnose. However, to use CSI data management capabilities (VolumeSnapshots
and .spec.dataSource
) on the PVCs, the NFS resources need to be deployed in the same Namespace
as the "RWX"/"ROX" requesting PVC
. This is controlled by the nfsNamespace
StorageClass
parameter. See base StorageClass
parameters for more information.
Tip
+A comprehensive tutorial is available on HPE Developer on how to get started with the NFS Server Provisioner and the HPE CSI Driver for Kubernetes. There's also a brief tutorial available in the Video Gallery.
+Example StorageClass
with "nfsResources" enabled. No CSP specific parameters for clarity.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-standard-file
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "NFS backend volume created by the HPE CSI Driver for Kubernetes"
+ csi.storage.k8s.io/fstype: ext4
+ nfsResources: "true"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
+Note
+Using XFS may result in stale NFS handles during node failures and outages. Always use ext4 for NFS PVCs
. While "allowVolumeExpansion" isn't supported on the NFS PVC
, the backend "RWO" PVC
does.
Example use of accessModes
:
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-rwo-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 32Gi
+ storageClassName: hpe-nfs
+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-rwx-pvc
+spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 32Gi
+ storageClassName: hpe-nfs
+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-rox-pvc
+spec:
+ accessModes:
+ - ReadOnlyMany
+ resources:
+ requests:
+ storage: 32Gi
+ storageClassName: hpe-nfs
+
In the case of declaring a "ROX" PVC
, the requesting Pod
specification needs to request the PVC
as read-only. Example:
apiVersion: v1
+kind: Pod
+metadata:
+ name: pod-rox
+spec:
+ containers:
+ - image: busybox
+ name: busybox
+ command:
+ - "sleep"
+ - "300"
+ volumeMounts:
+ - mountPath: /data
+ name: my-vol
+ readOnly: true
+ volumes:
+ - name: my-vol
+ persistentVolumeClaim:
+ claimName: my-rox-pvc
+ readOnly: true
+
+Requesting an empty read-only volume might not seem practical. The primary use case is to source existing datasets into immutable applications, using either a backend CSP cloning capability or CSI data management feature such as snapshots or existing PVCs.
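+As a hedged sketch of that pattern, the claim below sources an existing PVC into a read-only claim. The names are taken from the examples above and the size must match the source.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-rox-from-source
+spec:
+  dataSource:
+    name: my-rwx-pvc
+    kind: PersistentVolumeClaim
+  accessModes:
+    - ReadOnlyMany
+  resources:
+    requests:
+      storage: 32Gi
+  storageClassName: hpe-nfs
+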
+Since HPE CSI Driver for Kubernetes version 2.4.1 it's possible to provision NFS servers on top of non-HPE CSI Driver StorageClasses
. The most prominent use case for this functionality is to coexist with the vSphere CSI Driver (VMware vSphere Container Storage Plug-in) in FC environments and provide "RWX" PVCs
.
The HPE CSI Driver only manages the NFS server Deployment
, Service
and PVC
. There must be an existing StorageClass
capable of provisioning "RWO" filesystem PVCs
.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-nfs-servers
+provisioner: csi.hpe.com
+parameters:
+ nfsResources: "true"
+ nfsForeignStorageClass: "my-foreign-storageclass-name"
+reclaimPolicy: Delete
+allowVolumeExpansion: false
+
+Next, provision "RWO" or "RWX" claims from the "hpe-nfs-servers" StorageClass
. An NFS server will be provisioned on a "RWO" PVC
from the StorageClass
"my-foreign-storageclass-name".
Note
+Only StorageClasses
that uses HPE storage proxied by partner CSI drivers are supported by HPE.
These are some common issues and gotchas that are useful to know about when planning to use the NFS Server Provisioner.
+StorageClass
parameters "nfsResourceLimitsCpuM" and "nfsResourceLimitsMemoryMi" control how much CPU and memory it may consume. Tests show that the NFS server consumes about 150MiB at instantiation and 2GiB is the recommended minimum for most workloads. The NFS server Pod
is by default limited to 2GiB of memory and 1000 milli CPU.PVC
can NOT be expanded. If more capacity is needed, expand the "ReadWriteOnce" PVC
backing the NFS Server Provisioner. This will result in inaccurate space reporting.PVC
, provisioning times may differ greatly between clusters. On an idle cluster with the NFS Server Provisioning image cached, less than 30 seconds is the most common sighting but it may exceed 30 seconds which may trigger warnings on the requesting PVC
. This is normal behavior.Pods
that have become unavailable due to the Pod status changing to NodeLost
or a node becoming unreachable that the Pod
runs on. By default the Pod Monitor only watches the NFS Server Provisioner Deployments
. It may be used for any Deployment
. See Pod Monitor on how to use it, especially the limitations.PVC
. If changes are needed, perform the change on the backing "ReadWriteOnce" PVC
.PVCs
requires the CSI snapshot and NFS server to reside in the same Namespace
. This also applies when using third-party backup software such as Kasten K10. Use the "nfsNamespace" StorageClass
parameter to control where to provision resources.PVC
. The "volume-group" annotation may be set at the initial creation of the NFS PVC
but will have adverse effect on logging as the Volume Group Provisioner tries to add the NFS PVC
to the backend consistency group indefinitely.StorageClass
parameter for best results.Service
. It is possible to expose the NFS servers outside the cluster for external NFS clients. Understand the scope and limitations in Auxiliary Operations.
+From version 2.0.0 and onwards of the CSI driver supports host-based volume encryption for any of the CSPs supported by the CSI driver.
+Host-based volume encryption is controlled by StorageClass
parameters configured by the Kubernetes administrator and may be configured to be overridden by Kubernetes users. In the below example, a single Secret
is used to encrypt and decrypt all volumes provisioned by the StorageClass
.
First, create a Secret
, in this example we'll use the "hpe-storage" Namespace
.
apiVersion: v1
+kind: Secret
+metadata:
+ name: my-passphrase
+ namespace: hpe-storage
+stringData:
+ hostEncryptionPassphrase: "HPE CSI Driver for Kubernetes 2.0.0 Rocks!"
+
+Tip
+The "hostEncryptionPassphrase" can be up to 512 characters.
+Next, incorporate the Secret
into a StorageClass
.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-encrypted
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ description: "Volume provisioned by the HPE CSI Driver"
+ hostEncryption: "true"
+ hostEncryptionSecretName: my-passphrase
+ hostEncryptionSecretNamespace: hpe-storage
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
+Next, create a PersistentVolumeClaim
that uses the "hpe-encrypted" StorageClass
:
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-encrypted-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Gi
+ storageClassName: hpe-encrypted
+
+Attach a basic Pod
to verify functionality.
kind: Pod
+apiVersion: v1
+metadata:
+ name: my-pod
+spec:
+ containers:
+ - name: pod-datelog-1
+ image: nginx
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: export1
+ mountPath: /data
+ - name: pod-datelog-2
+ image: debian
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: export1
+ mountPath: /data
+ volumes:
+ - name: export1
+ persistentVolumeClaim:
+ claimName: my-encrypted-pvc
+
+Once the Pod
comes up, verify that the volume is encrypted.
$ kubectl exec -it my-pod -c pod-datelog-1 -- df -h /data
+Filesystem Size Used Avail Use% Mounted on
+/dev/mapper/enc-mpatha 100G 33M 100G 1% /data
+
+Host-based volume encryption is in effect if the "enc" prefix is seen on the multipath device name.
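Encryption behavior may also be delegated to users on a per-claim basis. The sketch below assumes the host encryption parameters can be listed in the "allowOverrides" StorageClass parameter and overridden with "csi.hpe.com/"-prefixed PVC annotations like other parameters; the claim name is made up.

# Assumed addition to the "hpe-encrypted" StorageClass parameters above:
#   allowOverrides: hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-overridden-pvc
  annotations:
    # Opt this particular claim out of host-based encryption
    csi.hpe.com/hostEncryption: "false"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-encrypted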
+Seealso
+For an in-depth tutorial and more advanced use cases for host-based volume encryption, check out this blog post on HPE Developer: Host-based Volume Encryption with HPE CSI Driver for Kubernetes
+With CSI driver v2.5.0 and newer, basic CSI topology information can be associated with a single backend from a StorageClass
. For backwards compatibility, only StorageClasses using volumeBindingMode: WaitForFirstConsumer
require topology labels assigned to compute nodes. Using the default volumeBindingMode
of Immediate
will preserve the behavior prior to v2.5.0.
Tip
+The "csi-provisioner" is deployed with --feature-gates Topology=true
and --immediate-topology=false
. Its impact on volume provisioning and accessibility can be found here.
Assume a simple use case where only a handful of nodes in a Kubernetes cluster have Fibre Channel adapters installed. Workloads with persistent storage requirements from a particular StorageClass
should be deployed onto those nodes only.
Nodes with the label csi.hpe.com/zone
are considered during topology accessibility assessments. Assume three nodes in the cluster have FC adapters.
kubectl label node/my-node{1..3} csi.hpe.com/zone=fc --overwrite
+
+If the CSI driver is already installed on the cluster, the CSI node driver needs to be restarted for the node labels to propagate.
+
kubectl rollout restart -n hpe-storage ds/hpe-csi-node
+
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ name: hpe-standard-fc
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ description: "Volume created by the HPE CSI Driver for Kubernetes"
+ accessProtocol: fc
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+- matchLabelExpressions:
+ - key: csi.hpe.com/zone
+ values:
+ - fc
+
+Any workload provisioning PVCs
from the above StorageClass
will now be scheduled on nodes labeled csi.hpe.com/zone=fc
.
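For illustration, a claim against this StorageClass (the claim name below is made up) remains "Pending" until a Pod consuming it is scheduled, since volumeBindingMode: WaitForFirstConsumer defers provisioning until the node, and thereby the topology, is known.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-fc-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: hpe-standard-fc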
Note
+The allowedTopologies
key may be omitted if there's only a single topology applied to a subset of nodes. The nodes always need to be labeled when using volumeBindingMode: WaitForFirstConsumer
. If all nodes have access to a backend, set volumeBindingMode: Immediate
and omit allowedTopologies
.
How to map an existing backend volume to a PersistentVolume
differs between the CSP implementations.
The official Kubernetes documentation contains comprehensive documentation on how to markup PersistentVolumeClaim
and StorageClass
API objects to tweak certain behaviors.
Each CSP has a set of unique StorageClass
parameters that may be tweaked to accommodate a wide variety of use cases. Please see the documentation of the respective CSP for more details.
Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+This is the documentation for HPE Cloud Volumes Plugin for Docker. It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes.
| Plugin | Release Notes |
|---|---|
| 3.1.0 | v3.1.0 |
Note
+Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation.
+HPE Cloud Volumes provides a Docker certified plugin delivered through the Docker Store. Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins.
+The managed plugin does NOT provide:
+The managed plugin does provide a simple way to manage HPE Cloud Volumes integration on your Docker instances using Docker's interface to install and manage the plugin.
+In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly.
+
Plugin "cvblock" is requesting the following privileges:
+ - network: [host]
+ - mount: [/dev]
+ - mount: [/run/lock]
+ - mount: [/sys]
+ - mount: [/etc]
+ - mount: [/var/lib]
+ - mount: [/var/run/docker.sock]
+ - mount: [/sbin/iscsiadm]
+ - mount: [/lib/modules]
+ - mount: [/usr/lib64]
+ - allow-all-devices: [true]
+ - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD]
+
+Setting up the plugin varies between Linux distributions.
+These procedures require root privileges on the cloud instance.
+Red Hat 7.5+, CentOS 7.5+:
+
yum install -y iscsi-initiator-utils device-mapper-multipath
+docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0
+docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret>
+docker plugin enable cvblock
+systemctl daemon-reload
+systemctl enable iscsid multipathd
+systemctl start iscsid multipathd
+
+Ubuntu 16.04 LTS and Ubuntu 18.04 LTS:
+
apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0
+docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret> glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable cvblock
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+
+Debian 9.x (stable):
+
apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0
+docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret> iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable cvblock
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+
+The docker plugin set
command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable
command. For example:
docker plugin disable cvblock
+
+These parameters can be set on the plugin:

| Parameter | Description | Default |
|---|---|---|
| PROVIDER_IP | HPE Cloud Volumes portal | "" |
| PROVIDER_USERNAME | HPE Cloud Volumes username | "" |
| PROVIDER_PASSWORD | HPE Cloud Volumes password | "" |
| PROVIDER_REMOVE | Unassociate Plugin from HPE Cloud Volumes | false |
| LOG_LEVEL | Log level of the plugin (info, debug, or trace) | debug |
| SCOPE | Scope of the plugin (global or local) | global |
In the event of reassociating the plugin with a different HPE Cloud Volumes portal, certain procedures need to be followed:
+Disable the plugin
+
docker plugin disable cvblock
+
+Set new parameters
+
docker plugin set cvblock PROVIDER_REMOVE=true
+
+Enable the plugin
+
docker plugin enable cvblock
+
+Disable the plugin
+
docker plugin disable cvblock
+
+The plugin is now ready for re-configuration
+
docker plugin set cvblock PROVIDER_IP=< New portal address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false
+
+Note
+The PROVIDER_REMOVE=false
parameter must be set if the plugin has ever been unassociated from an HPE Cloud Volumes portal.
The configuration directory for the plugin is /etc/hpe-storage
on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json
file contains three sections, global
, defaults
and overrides
. The global options are plugin runtime parameters and don't have any end-user configurable keys at this time.
The defaults
map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option.
The overrides
map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored.
Note
+defaults
and overrides
are dynamically read during runtime while global
changes require a plugin restart.
Example config file in /etc/hpe-storage/volume-driver.json
:
{
+ "global": {
+ "snapPrefix": "BaseFor",
+ "initiators": ["eth0"],
+ "automatedConnection": true,
+ "existingCloudSubnet": "10.1.0.0/24",
+ "region": "us-east-1",
+ "privateCloud": "vpc-data",
+ "cloudComputeProvider": "Amazon AWS"
+ },
+ "defaults": {
+ "limitIOPS": 1000,
+ "fsOwner": "0:0",
+ "fsMode": "600",
+ "description": "Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin",
+ "perfPolicy": "Other",
+ "protectionTemplate": "twicedaily:4",
+ "encryption": true,
+ "volumeType": "PF",
+ "destroyOnRm": true
+ },
+ "overrides": {
+ }
+}
+
+For an exhaustive list of options use the help
option from the docker CLI:
$ docker volume create -d cvblock -o help
+
+If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported.
+Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node.
+During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume.
+The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts.
+The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O.
+We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely.
+The following kernel parameters control the system behavior when a hung task is detected:
+
# Reset after these many seconds after a panic
+kernel.panic = 5
+
+# I do consider hung tasks reason enough to panic
+kernel.hung_task_panic = 1
+
+# To not panic in vain, I'll wait these many seconds before I declare a hung task
+kernel.hung_task_timeout_secs = 150
+
+Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf
file and reboot the system.
Important
+Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic.
+These are some basic examples on how to use the HPE Cloud Volumes Plugin for Docker.
+Using docker volume create
.
Note
+The plugin applies a set of default options when you create
new volumes unless you override them using the volume create -o key=value
option flags.
Create a Docker volume with a custom description:
+
docker volume create -d cvblock -o description="My volume description" --name myvol1
+
+(Optional) Inspect the new volume:
+
docker volume inspect myvol1
+
+(Optional) Attach the volume to an interactive container.
+
docker run -it --rm -v myvol1:/data bash
+
+The volume is mounted inside the container on /data
.
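Docker Compose can consume the same plugin through the top-level "volumes" key. Below is a minimal sketch assuming the Compose version 3 file format and the "cvblock" alias used above; the service and volume names are made up and the options map to the volume create options shown in this section.

version: "3.7"
services:
  web:
    image: nginx
    volumes:
      - myvol5:/data
volumes:
  myvol5:
    driver: cvblock
    driver_opts:
      sizeInGiB: "20"
      description: "Volume provisioned by Docker Compose"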
Use the docker volume create
command with the cloneOf
option to clone a Docker volume to a new Docker volume.
Clone the Docker volume named myvol1
to a new Docker volume named myvol1-clone
.
docker volume create -d cvblock -o cloneOf=myvol1 --name=myvol1-clone
+
+(Optional) Select a snapshot on which to base the clone.
+
docker volume create -d cvblock -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone
+
+There are several ways to provision a Docker volume depending on what tools are used:
+The Docker Volume plugin leverages the existing Docker CLI and APIs, therefore all native Docker tools may be used to provision a volume.
+Note
+The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB.
+Config file volume-driver.json
, which is stored at /etc/hpe-storage/volume-driver.json
:
{
+ "global": {},
+ "defaults": {
+ "sizeInGiB":"10",
+ "limitIOPS":"-1",
+ "limitMBPS":"-1",
+ "perfPolicy": "DockerDefault",
+ },
+ "overrides":{}
+}
+
+Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to the HPE Cloud Volumes documentation. Use the create
command with the importVol
option to import an HPE Cloud Volume to Docker and name it.
Import the HPE Cloud Volume named mycloudvol
as a Docker volume named myvol3-imported
.
docker volume create -d cvblock -o importVol=mycloudvol --name=myvol3-imported
+
+Use the create command with the importVolAsClone
option to import an HPE Cloud Volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Cloud Volume using the snapshot option.
Import the snapshot mysnap1 on the HPE Cloud Volume mycloudvol as a Docker volume named myvol4-clone.
docker volume create -d cvblock -o importVolAsClone=mycloudvol -o snapshot=mysnap1 --name=myvol4-clone
+
+Note
+If no snapshot is specified, the latest snapshot on the volume is imported.
+It's important that the volume to be restored is in an offline state on the array.
+If the volume snapshot is not specified, the last volume snapshot is used.
+
docker volume create -d cvblock -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored
+
+List Docker volumes.
+
docker volume ls
+DRIVER VOLUME NAME
+cvblock:latest myvol1
+cvblock:latest myvol1-clone
+
+When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished.
+Note
+To delete volumes from the HPE Cloud Volumes portal using the remove command, the volume should have been created with a -o destroyOnRm
flag.
Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin.
+Remove the volume named myvol1
.
docker volume rm myvol1
+
+The plugin can be removed using the docker plugin rm
command. This command will not remove the configuration directory (/etc/hpe-storage/
).
docker plugin rm cvblock
+
+The config directory is at /etc/hpe-storage/
. When a plugin is installed and enabled, the HPE Cloud Volumes certificates are created in the config directory.
ls -l /etc/hpe-storage/
+total 16
+-r-------- 1 root root 1159 Aug 2 00:20 container_provider_host.cert
+-r-------- 1 root root 1671 Aug 2 00:20 container_provider_host.key
+-r-------- 1 root root 1521 Aug 2 00:20 container_provider_server.cert
+
+Additionally there is a config file volume-driver.json
present at the same location. This file can be edited
+to set default parameters for volumes created by Docker.
The docker plugin logs are located at /var/log/hpe-docker-plugin.log
Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+This is the documentation for HPE Nimble Storage Volume Plugin for Docker. It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes.
| Plugin | HPE Nimble Storage Version | Release Notes |
|---|---|---|
| 3.0.0 | 5.0.8.x and 5.1.3.x onwards | v3.0.0 |
| 3.1.0 | 5.0.8.x and 5.1.3.x onwards | v3.1.0 |
Note
+Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation.
+HPE Nimble Storage provides a Docker certified plugin delivered through the Docker Store. HPE Nimble Storage also provides a Docker Volume plugin for Windows Containers; it's available on HPE InfoSight along with its documentation. Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins.
+The managed plugin does NOT provide:
+The managed plugin does provide a simple way to manage HPE Nimble Storage on your Docker hosts using Docker's interface to install and manage the plugin.
+In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly.
+
Plugin "nimble" is requesting the following privileges:
+ - network: [host]
+ - mount: [/dev]
+ - mount: [/run/lock]
+ - mount: [/sys]
+ - mount: [/etc]
+ - mount: [/var/lib]
+ - mount: [/var/run/docker.sock]
+ - mount: [/sbin/iscsiadm]
+ - mount: [/lib/modules]
+ - mount: [/usr/lib64]
+ - allow-all-devices: [true]
+ - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD]
+
+Setting up the plugin varies between Linux distributions. The following workflows have been tested using a Nimble iSCSI group array at 192.168.1.1 with PROVIDER_USERNAME
admin and PROVIDER_PASSWORD
admin:
These procedures require root privileges.
+Red Hat 7.5+, CentOS 7.5+:
+
yum install -y iscsi-initiator-utils device-mapper-multipath
+docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin
+docker plugin enable nimble
+systemctl daemon-reload
+systemctl enable iscsid multipathd
+systemctl start iscsid multipathd
+
+Ubuntu 16.04 LTS and Ubuntu 18.04 LTS:
+
apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable nimble
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+
+Debian 9.x (stable):
+
apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable nimble
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+
+NOTE: To use the plugin on Fibre Channel environments use the PROTOCOL=FC
environment variable.
The docker plugin set
command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable
command. For example:
docker plugin disable nimble
+
+These parameters can be set on the plugin:

| Parameter | Description | Default |
|---|---|---|
| PROVIDER_IP | HPE Nimble Storage array IP address | "" |
| PROVIDER_USERNAME | HPE Nimble Storage array username | "" |
| PROVIDER_PASSWORD | HPE Nimble Storage array password | "" |
| PROVIDER_REMOVE | Unassociate Plugin from HPE Nimble Storage array | false |
| LOG_LEVEL | Log level of the plugin (info, debug, or trace) | debug |
| SCOPE | Scope of the plugin (global or local) | global |
| PROTOCOL | SCSI protocol supported by the plugin (iscsi or fc) | iscsi |
The HPE Nimble Storage credentials are visible to any user who can execute docker plugin inspect nimble
. To limit credential visibility, the variables should be unset after certificates have been generated. The following set of steps can be used to accomplish this:
Add the credentials
+
docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin
+
+Start the plugin
+
docker plugin enable nimble
+
+Stop the plugin
+
docker plugin disable nimble
+
+Remove the credentials
+
docker plugin set nimble PROVIDER_USERNAME="true" PROVIDER_PASSWORD="true"
+
+Start the plugin
+
docker plugin enable nimble
+
+Note
+Certificates are stored in /etc/hpe-storage/
on the host and will be preserved across plugin updates.
In the event of reassociating the plugin with a different HPE Nimble Storage group, certain procedures need to be followed:
+Disable the plugin
+
docker plugin disable nimble
+
+Set new parameters
+
docker plugin set nimble PROVIDER_REMOVE=true
+
+Enable the plugin
+
docker plugin enable nimble
+
+Disable the plugin
+
docker plugin disable nimble
+
+The plugin is now ready for re-configuration
+
docker plugin set nimble PROVIDER_IP=< New IP address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false
+
+Note: The PROVIDER_REMOVE=false
parameter must be set if the plugin has ever been unassociated from an HPE Nimble Storage group.
The configuration directory for the plugin is /etc/hpe-storage
on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json
file contains three sections, global
, defaults
and overrides
. The global options are plugin runtime parameters and don't have any end-user configurable keys at this time.
The defaults
map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option.
The overrides
map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored.
It is essential to discuss these maps with the HPE Nimble Storage administrator. A common pattern is that a default protection template is selected for all volumes to fulfill a certain data protection policy enforced by the business it's serving. Another useful option is to override the volume placement options to allow a single HPE Nimble Storage array to provide multi-tenancy for Docker environments.
+Note: defaults
and overrides
are dynamically read during runtime while global
changes require a plugin restart.
Below is an example /etc/hpe-storage/volume-driver.json
outlining the above use cases:
{
+ "global": {
+ "nameSuffix": ".docker"
+ },
+ "defaults": {
+ "description": "Volume provisioned by Docker",
+ "protectionTemplate": "Retain-90Daily"
+ },
+ "overrides": {
+ "folder": "docker-prod"
+ }
+}
+
+For an exhaustive list of options use the help
option from the docker CLI:
$ docker volume create -d nimble -o help
+Nimble Storage Docker Volume Driver: Create Help
+Create or Clone a Nimble Storage backed Docker Volume or Import an existing
+Nimble Volume or Clone of a Snapshot into Docker.
+
+Universal options:
+ -o mountConflictDelay=X X is the number of seconds to delay a mount request
+ when there is a conflict (default is 0)
+
+Create options:
+ -o sizeInGiB=X X is the size of volume specified in GiB
+ -o size=X X is the size of volume specified in GiB (short form
+ of sizeInGiB)
+ -o fsOwner=X X is the user id and group id that should own the
+ root directory of the filesystem, in the form of
+ [userId:groupId]
+ -o fsMode=X X is 1 to 4 octal digits that represent the file
+ mode to be applied to the root directory of the
+ filesystem
+ -o description=X X is the text to be added to volume description
+ (optional)
+ -o perfPolicy=X X is the name of the performance policy (optional)
+ Performance Policies: Exchange 2003 data store,
+ Exchange log, Exchange 2007 data store,
+ SQL Server, SharePoint,
+ Exchange 2010 data store, SQL Server Logs,
+ SQL Server 2012, Oracle OLTP,
+ Windows File Server, Other Workloads,
+ DockerDefault, General, MariaDB,
+ Veeam Backup Repository,
+ Backup Repository
+
+ -o pool=X X is the name of pool in which to place the volume
+ Needed with -o folder (optional)
+ -o folder=X X is the name of folder in which to place the volume
+ Needed with -o pool (optional).
+ -o encryption indicates that the volume should be encrypted
+ (optional, dedupe and encryption are mutually
+ exclusive)
+ -o thick indicates that the volume should be thick provisioned
+ (optional, dedupe and thick are mutually exclusive)
+ -o dedupe indicates that the volume should be deduplicated
+ -o limitIOPS=X X is the IOPS limit of the volume. IOPS limit should
+ be in range [256, 4294967294] or -1 for unlimited.
+ -o limitMBPS=X X is the MB/s throughput limit for this volume. If
+ both limitIOPS and limitMBPS are specified, limitMBPS
+ must not be hit before limitIOPS
+ -o destroyOnRm indicates that the Nimble volume (including
+ snapshots) backing this volume should be destroyed
+ when this volume is deleted
+ -o syncOnUnmount only valid with "protectionTemplate", if the
+ protectionTemplate includes a replica destination,
+ unmount calls will snapshot and transfer the last
+ delta to the destination. (optional)
+ -o protectionTemplate=X X is the name of the protection template (optional)
+ Protection Templates: General, Retain-90Daily,
+ Retain-30Daily,
+ Retain-48Hourly-30Daily-52Weekly
+
+Clone options:
+ -o cloneOf=X X is the name of Docker Volume to create a clone of
+ -o snapshot=X X is the name of the snapshot to base the clone on
+ (optional, if missing, a new snapshot is created)
+ -o createSnapshot indicates that a new snapshot of the volume should be
+ taken and used for the clone (optional)
+ -o destroyOnRm indicates that the Nimble volume (including
+ snapshots) backing this volume should be destroyed
+ when this volume is deleted
+ -o destroyOnDetach indicates that the Nimble volume (including
+ snapshots) backing this volume should be destroyed
+ when this volume is unmounted or detached
+
+Import Volume options:
+ -o importVol=X X is the name of the Nimble Volume to import
+ -o pool=X X is the name of the pool in which the volume to be
+ imported resides (optional)
+ -o folder=X X is the name of the folder in which the volume to be
+ imported resides (optional)
+ -o forceImport forces the import of the volume. Note that
+ overwrites application metadata (optional)
+ -o restore restores the volume to the last snapshot taken on the
+ volume (optional)
+ -o snapshot=X X is the name of the snapshot which the volume will
+ be restored to, only used with -o restore (optional)
+ -o takeover indicates the current group will takeover the
+ ownership of the Nimble volume and volume collection
+ (optional)
+ -o reverseRepl reverses the replication direction so that writes to
+ the Nimble volume are replicated back to the group
+ where it was replicated from (optional)
+
+Import Clone of Snapshot options:
+ -o importVolAsClone=X X is the name of the Nimble Volume and Nimble
+ Snapshot to clone and import
+ -o snapshot=X X is the name of the Nimble snapshot to clone and
+ import (optional, if missing, will use the most
+ recent snapshot)
+ -o createSnapshot indicates that a new snapshot of the volume should be
+ taken and used for the clone (optional)
+ -o pool=X X is the name of the pool in which the volume to be
+ imported resides (optional)
+ -o folder=X X is the name of the folder in which the volume to be
+ imported resides (optional)
+ -o destroyOnRm indicates that the Nimble volume (including
+ snapshots) backing this volume should be destroyed
+ when this volume is deleted
+ -o destroyOnDetach indicates that the Nimble volume (including
+ snapshots) backing this volume should be destroyed
+ when this volume is unmounted or detached
+
+If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported.
+Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node.
+During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume.
+The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts.
+The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O.
+We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely.
+The following kernel parameters control the system behavior when a hung task is detected:
+
# Reset after these many seconds after a panic
+kernel.panic = 5
+
+# I do consider hung tasks reason enough to panic
+kernel.hung_task_panic = 1
+
+# To not panic in vain, I'll wait these many seconds before I declare a hung task
+kernel.hung_task_timeout_secs = 150
+
+Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf
file and reboot the system.
Important
+Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic.
+These are some basic examples on how to use the HPE Nimble Storage Volume Plugin for Docker.
+Using docker volume create
.
Note
+The plugin applies a set of default options when you create
new volumes unless you override them using the volume create -o key=value
option flags.
Create a Docker volume with a custom description:
+
docker volume create -d nimble -o description="My volume description" --name myvol1
+
+(Optional) Inspect the new volume:
+
docker volume inspect myvol1
+
+(Optional) Attach the volume to an interactive container.
+
docker run -it --rm -v myvol1:/data bash
+
+The volume is mounted inside the container on /data
.
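As with the docker CLI, create options can also be supplied from a Docker Compose file. Below is a minimal sketch assuming the Compose version 3 file format and the "nimble" alias used above; the service and volume names are made up and the options are among those listed in the create help output earlier in this section.

version: "3.7"
services:
  db:
    image: mariadb
    volumes:
      - dbvol:/var/lib/mysql
volumes:
  dbvol:
    driver: nimble
    driver_opts:
      sizeInGiB: "100"
      perfPolicy: "MariaDB"
      destroyOnRm: "true"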
Use the docker volume create
command with the cloneOf
option to clone a Docker volume to a new Docker volume.
Clone the Docker volume named myvol1
to a new Docker volume named myvol1-clone
.
docker volume create -d nimble -o cloneOf=myvol1 --name=myvol1-clone
+
+(Optional) Select a snapshot on which to base the clone.
+
docker volume create -d nimble -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone
+
+There are several ways to provision a Docker volume depending on what tools are used:
+The Docker Volume plugin leverages the existing Docker CLI and APIs, therefore all native Docker tools may be used to provision a volume.
+Note
+The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB.
+Config file volume-driver.json
, which is stored at /etc/hpe-storage/volume-driver.json:
{
+ "global": {},
+ "defaults": {
+ "sizeInGiB":"10",
+ "limitIOPS":"-1",
+ "limitMBPS":"-1",
+ "perfPolicy": "DockerDefault",
+ },
+ "overrides":{}
+}
+
+Before you begin
+Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to either the CLI Administration Guide
or the GUI Administration Guide
on HPE InfoSight. Use the create
command with the importVol
option to import an HPE Nimble Storage volume to Docker and name it.
Import the HPE Nimble Storage volume named mynimblevol
as a Docker volume named myvol3-imported
.
docker volume create -d nimble -o importVol=mynimblevol --name=myvol3-imported
+
+Use the create command with the importVolAsClone
option to import an HPE Nimble Storage volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Nimble Storage volume using the snapshot option.
Import the snapshot mysnap1 on the HPE Nimble Storage volume mynimblevol as a Docker volume named myvol4-clone.
docker volume create -d nimble -o importVolAsClone=mynimblevol -o snapshot=mysnap1 --name=myvol4-clone
+
+Note
+If no snapshot is specified, the latest snapshot on the volume is imported.
+It's important that the volume to be restored is in an offline state on the array.
+If the volume snapshot is not specified, the last volume snapshot is used.
+
docker volume create -d nimble -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored
+
+List Docker volumes.
+
docker volume ls
+DRIVER VOLUME NAME
+nimble:latest myvol1
+nimble:latest myvol1-clone
+
+When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished.
+Note
+To delete volumes from the HPE Nimble Storage array using the remove command, the volume should have been created with a -o destroyOnRm
flag.
Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin.
+Remove the volume named myvol1
.
docker volume rm myvol1
+
+The plugin can be removed using the docker plugin rm
command. This command will not remove the configuration directory (/etc/hpe-storage/
).
docker plugin rm nimble
+
+Important
+If this is the last plugin to reference the Nimble Group and you want to completely remove the configuration directory, follow the steps below.
+
docker plugin set nimble PROVIDER_REMOVE=true
+docker plugin enable nimble
+docker plugin rm nimble
+
+The config directory is at /etc/hpe-storage/
. When a plugin is installed and enabled, the Nimble Group certificates are created in the config directory.
ls -l /etc/hpe-storage/
+total 16
+-r-------- 1 root root 1159 Aug 2 00:20 container_provider_host.cert
+-r-------- 1 root root 1671 Aug 2 00:20 container_provider_host.key
+-r-------- 1 root root 1521 Aug 2 00:20 container_provider_server.cert
+
+Additionally there is a config file volume-driver.json
present at the same location. This file can be edited
+to set default parameters for volumes created by Docker.
The docker plugin logs are located at /var/log/hpe-docker-plugin.log
When upgrading from version 2.5.1 or older of the plugin, please follow the steps below.
+Ubuntu 16.04 LTS and Ubuntu 18.04 LTS:
+
docker plugin disable nimble:latest -f
+docker plugin upgrade --grant-all-permissions nimble store/hpestorage/nimble:3.0.0 --skip-remote-check
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable nimble:latest
+
+Red Hat 7.5+, CentOS 7.5+, Oracle Enterprise Linux 7.5+ and Fedora 28+:
+
docker plugin disable nimble:latest -f
+docker plugin upgrade --grant-all-permissions nimble store/hpestorage/nimble:3.0.0 --skip-remote-check
+docker plugin enable nimble:latest
+
+Important
+In Swarm Mode, drain the existing running containers to the node where the plugin is upgraded.
+HPE Ezmeral Runtime Enterprise deploys and manages open source upstream Kubernetes clusters through its management console. It's also capable of importing foreign Kubernetes clusters. This guide describes the necessary steps to perform a successful deployment of the HPE CSI Driver for Kubernetes on HPE Ezmeral Runtime Enterprise managed clusters.
+It's up to the HPE Ezmeral Runtime Enterprise administrator who deploys Kubernetes clusters to ensure that the particular version of the CSI driver (i.e. v2.0.0) is supported with the following components.
+Examine the table found in the Compatibility and Support section of the CSI driver overview. Particular Container Storage Providers may have additional prerequisites.
+In Ezmeral 5.4.0 and later, an exception has been added to the "hpe-storage" Namespace
. Proceed to Installation and disregard any steps outlined in this guide.
Note
+If the HPE CSI Driver built-in NFS Server Provisioner will be used, an exception needs to be granted to the "hpe-nfs" Namespace
.
Run:
kubectl patch --type json -p '[{"op": "add", "path": "/spec/match/excludedNamespaces/-", "value": "hpe-nfs"}]' k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container
The CSI driver needs privileged access to the worker nodes to attach and detach storage devices. By default, an admission controller prevents all user deployed workloads access to the host filesystem. An exception needs to be created for the "hpe-storage" Namespace
.
As a Kubernetes cluster admin, run the following.
+
kubectl create ns hpe-storage
+kubectl patch --type json -p '[{"op":"add","path":"/spec/unrestrictedFsMountNamespaces/-","value":"hpe-storage"}]' hpecpconfigs/hpecp-global-config -n hpecp
+
+Caution
+In theory you may use any Namespace
name desired. This might change in a future release and it's encouraged to use "hpe-storage" for compatibility with upcoming releases of HPE Ezmeral Runtime Enterprise.
By not performing this configuration change, the following events will be seen on the CSI controller ReplicaSet
or CSI node DaemonSet
trying to schedule Pods
.
Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Warning FailedCreate 2m4s (x17 over 7m32s) replicaset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.
+
+Early versions of HPE Ezmeral Runtime Enterprise (HPE Container Platform, HPE Ezmeral Container Platform) contained a checkbox to deploy the HPE CSI Driver for Kubernetes. This method is not supported. Make sure clusters are deployed without the checkbox ticked.
+ +Continue with Installation.
+Any method to install the HPE CSI Driver for Kubernetes on an HPE Ezmeral Runtime Enterprise managed Kubernetes cluster is supported. Helm is strongly recommended. Make sure to deploy the CSI driver to the "hpe-storage" Namespace
for future compatibility.
Important
+In some deployments of Ezmeral the kubelet root has been relocated. In those circumstances you'll see errors similar to: Error: command mount failed with rc=32 err=mount: /dev/mapper/mpathh is already mounted or /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid busy /dev/mapper/mpathh is already mounted on /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid
. In this case it's recommended to install the CSI driver using Helm with the --set kubeletRootDir=/var/lib/docker/kubelet
parameter.
Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+The HPE Volume Driver for Kubernetes FlexVolume Plugin leverages HPE Nimble Storage or HPE Cloud Volumes to provide scalable and persistent storage for stateful applications.
+Important
+When using HPE Nimble Storage with Kubernetes 1.13 and newer, please use the HPE CSI Driver for Kubernetes.
+Source code and developer documentation is available in the hpe-storage/flexvolume-driver GitHub repo.
+The FlexVolume driver supports multiple backends that are based on a "container provider" architecture. Currently, Nimble and Cloud Volumes are supported.
| Driver | HPE Nimble Storage Version | Release Notes | Blog |
|---|---|---|---|
| v3.0.0 | 5.0.8.x and 5.1.3.x onwards | v3.0.0 | HPE Storage Tech Insiders |
| v3.1.0 | 5.0.8.x and 5.1.3.x onwards | v3.1.0 | |
Note: Synchronous replication (Peer Persistence) is not supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin.
| Driver | Release Notes | Blog |
|---|---|---|
| v3.1.0 | v3.1.0 | Using HPE Cloud Volumes with Amazon EKS |
Important
+HPE Cloud Volumes was introduced in HPE CSI Driver for Kubernetes v1.5.0. Make sure to check if your cloud is supported by the CSI driver first.
+The recommended way to deploy and manage the HPE Volume Driver for Kubernetes FlexVolume Plugin is to use Helm. Please see the co-deployments repository for further information.
+Use the following steps for a manual installation.
+Replace the password
string (YWRtaW4=
) below with a base64 encoded version of your password and replace the backend
with your array IP address and save it as hpe-secret.yaml
.
apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-secret
+ namespace: kube-system
+stringData:
+ backend: 192.168.1.1
+ username: admin
+ protocol: "iscsi"
+data:
+ # echo -n "admin" | base64
+ password: YWRtaW4=
+
+Replace the username
and password
strings (YWRtaW4=
) with a base64 encoded version of your HPE Cloud Volumes "access_key" and "access_secret". Also, replace the backend
with HPE Cloud Volumes portal fully qualified domain name (FQDN) and save it as hpe-secret.yaml
.
apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-secret
+ namespace: kube-system
+stringData:
+ backend: cloudvolumes.hpe.com
+ protocol: "iscsi"
+ serviceName: cv-cp-svc
+ servicePort: "8080"
+data:
+ # echo -n "<my very confidential access key>" | base64
+ username: YWRtaW4=
+ # echo -n "<my very confidential secret key>" | base64
+ password: YWRtaW4=
+
+Create the secret:
+
kubectl create -f hpe-secret.yaml
+secret "hpe-secret" created
+
+You should now see the HPE secret in the kube-system
namespace.
kubectl get secret/hpe-secret -n kube-system
+NAME TYPE DATA AGE
+hpe-secret Opaque 5 3s
+
+The ConfigMap
is used to set and tweak defaults for both the FlexVolume driver and Dynamic Provisioner.
Edit the below default parameters as required for FlexVolume driver and save it as hpe-config.yaml
.
kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: hpe-config
+ namespace: kube-system
+data:
+ volume-driver.json: |-
+ {
+ "global": {},
+ "defaults": {
+ "limitIOPS":"-1",
+ "limitMBPS":"-1",
+ "perfPolicy": "Other"
+ },
+ "overrides":{}
+ }
+
+Tip
+Please see Advanced for more volume-driver.json
configuration options.
Edit the below parameters as required with your public cloud info and save it as hpe-config.yaml
.
kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: hpe-config
+ namespace: kube-system
+data:
+ volume-driver.json: |-
+ {
+ "global": {
+ "snapPrefix": "BaseFor",
+ "initiators": ["eth0"],
+ "automatedConnection": true,
+ "existingCloudSubnet": "10.1.0.0/24",
+ "region": "us-east-1",
+ "privateCloud": "vpc-data",
+ "cloudComputeProvider": "Amazon AWS"
+ },
+ "defaults": {
+ "limitIOPS": 1000,
+ "fsOwner": "0:0",
+ "fsMode": "600",
+ "description": "Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin",
+ "perfPolicy": "Other",
+ "protectionTemplate": "twicedaily:4",
+ "encryption": true,
+ "volumeType": "PF",
+ "destroyOnRm": true
+ },
+ "overrides": {
+ }
+ }
+
+Create the ConfigMap
:
kubectl create -f hpe-config.yaml
+configmap/hpe-config created
+
+Deploy the driver as a DaemonSet
and the dynamic provisioner as a Deployment
.
Version 3.0.0:
+
kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.0.0.yaml
+
+Version 3.1.0:
+
kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.1.0.yaml
+
+Container-Provider Service:
+
kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-cp-v3.1.0.yaml
+
+The FlexVolume driver has different declarations depending on the Kubernetes distribution.
+Amazon EKS:
+
kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-aws-flexvolume-driver-v3.1.0.yaml
+
+Microsoft Azure AKS:
+
kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-azure-flexvolume-driver-v3.1.0.yaml
+
+Generic:
+
kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-flexvolume-driver-v3.1.0.yaml
+
+Note
+The declarations for HPE Volume Driver for Kubernetes FlexVolume Plugin can be found in the co-deployments repository.
+Check that all hpe-flexvolume-driver
Pods
(one per compute node) and the hpe-dynamic-provisioner
Pod are running.
kubectl get pods -n kube-system
+NAME READY STATUS RESTARTS AGE
+hpe-flexvolume-driver-2rdt4 1/1 Running 0 45s
+hpe-flexvolume-driver-md562 1/1 Running 0 44s
+hpe-flexvolume-driver-x4k96 1/1 Running 0 44s
+hpe-dynamic-provisioner-59f9d495d4-hxh29 1/1 Running 0 24s
+
+For HPE Cloud Volumes, check that hpe-cv-cp
pod is running as well.
kubectl get pods -n kube-system -l=app=cv-cp
+NAME READY STATUS RESTARTS AGE
+hpe-cv-cp-2rdt4 1/1 Running 0 45s
+
+Get started using the FlexVolume driver by setting up StorageClass
and PVC
API objects. See Using for examples.
These instructions are provided as an example of how to use the HPE Volume Driver for Kubernetes FlexVolume Plugin with an HPE Nimble Storage Array.
+The below YAML declarations are meant to be created with kubectl create
. Either copy the content to a file on the host where kubectl
is being executed, or copy & paste into the terminal, like this:
kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+
+Tip
+Some of the examples supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin are available for HPE Nimble Storage or HPE Cloud Volumes in the GitHub repo.
+To get started, create a StorageClass
API object referencing the hpe-secret
and defining additional (optional) StorageClass
parameters:
Sample storage classes can be found for HPE Nimble Storage and HPE Cloud Volumes.
+Hint
+See StorageClass
parameters for HPE Nimble Storage and HPE Cloud Volumes for a comprehensive overview.
Create a StorageClass
with volume parameters as required.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: sc-nimble
+provisioner: hpe.com/nimble
+parameters:
+ description: "Volume from HPE FlexVolume driver"
+ perfPolicy: "Other Workloads"
+ limitIOPS: "76800"
+
+Create a PersistentVolumeClaim
. This makes sure a volume is created and provisioned on your behalf:
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-nimble
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: sc-nimble
+
+Check that a new PersistentVolume
is created based on your claim:
kubectl get pv
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+sc-nimble-13336da3-7ca3-11e9-826c-00505693581f 10Gi RWO Delete Bound default/pvc-nimble sc-nimble 3s
+
+The above output means that the FlexVolume driver successfully provisioned a new volume and bound the requesting PVC
to a new PV
. The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod
that refers to the above volume. When the Pod
is created, the volume will be attached, formatted and mounted to the specified container:
kind: Pod
+apiVersion: v1
+metadata:
+ name: pod-nimble
+spec:
+ containers:
+ - name: pod-nimble-con-1
+ image: nginx
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: export1
+ mountPath: /data
+ - name: pod-nimble-cont-2
+ image: debian
+ command: ["bin/sh"]
+ args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+ volumeMounts:
+ - name: export1
+ mountPath: /data
+ volumes:
+ - name: export1
+ persistentVolumeClaim:
+ claimName: pvc-nimble
+
+Check if the pod is running successfully:
+
kubectl get pod pod-nimble
+NAME READY STATUS RESTARTS AGE
+pod-nimble 2/2 Running 0 2m29s
+
+These StorageClass
examples help guide combinations of options when provisioning volumes.
This StorageClass
creates thinly provisioned volumes with deduplication turned on. It will also apply the Performance Policy "SQL Server" along with a Protection Template. The Protection Template needs to be defined on the array.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: oltp-prod
+provisioner: hpe.com/nimble
+parameters:
+ thick: "false"
+ dedupe: "true"
+ perfPolicy: "SQL Server"
+ protectionTemplate: "Retain-48Hourly-30Daily-52Weekly"
+
+This StorageClass
will create clones of a "production" volume and throttle the performance of each clone to 1000 IOPS. When the PVC is deleted, it will be permanently deleted from the backend array.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: oltp-dev-clone-of-prod
+provisioner: hpe.com/nimble
+parameters:
+ limitIOPS: "1000"
+ cloneOf: "oltp-prod-1adee106-110b-11e8-ac84-00505696c45f"
+ destroyOnRm: "true"
+
+This StorageClass
will clone a standard backend volume (without container metadata on it) from a particular pool on the backend.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: import-clone-legacy-prod
+provisioner: hpe.com/nimble
+parameters:
+ pool: "flash"
+ importVolAsClone: "production-db-vol"
+ destroyOnRm: "true"
+
+This StorageClass
will import an existing Nimble volume to Kubernetes. The source volume needs to be offline for the import to succeed.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: import-legacy-prod
+provisioner: hpe.com/nimble
+parameters:
+ pool: "flash"
+ importVol: "production-db-vol"
+
+The HPE Dynamic Provisioner for Kubernetes understands a set of annotation keys a user can set on a PVC
. If the corresponding keys exist in the list of the allowOverrides
key in the StorageClass
, the end-user can tweak certain aspects of the provisioning workflow. This enables very advanced data services.
StorageClass object:
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: my-sc
+provisioner: hpe.com/nimble
+parameters:
+ description: "Volume provisioned by StorageClass my-sc"
+ dedupe: "false"
+ destroyOnRm: "true"
+ perfPolicy: "Windows File Server"
+ folder: "myfolder"
+ allowOverrides: snapshot,limitIOPS,perfPolicy
+
+PersistentVolumeClaim object:
+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+ annotations:
+ hpe.com/description: "This is my custom description"
+ hpe.com/limitIOPS: "8000"
+ hpe.com/perfPolicy: "SQL Server"
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: my-sc
+
+This will create a PV
of 8000 IOPS with the Performance Policy of "SQL Server" and a custom volume description.
Using a StorageClass
to clone a PV
is practical when there's a need to clone across namespaces (for example from prod to test or stage). If a user wants to clone any arbitrary volume, it becomes a bit tedious to create a StorageClass
for each clone. The annotation hpe.com/CloneOfPVC
allows a user to clone any PVC
within a namespace.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-clone
+ annotations:
+ hpe.com/cloneOfPVC: my-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: my-sc
+
+This section highlights all the available StorageClass
parameters that are supported.
A StorageClass
is used to provision or clone an HPE Nimble Storage-backed persistent volume. It can also be used to import an existing HPE Nimble Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.
A sample StorageClass is provided.
+Note
+These are optional parameters.
+These parameters are mutable between a parent volume and a clone created from a snapshot.
+Parameter | +String | +Description | +
---|---|---|
nameSuffix | +Text | +Suffix to append to Nimble volumes. Defaults to .docker | +
destroyOnRm | +Boolean | +Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. | +
limitIOPS | +Integer | +The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). | +
limitMBPS | +Integer | +The MB/s throughput limit for the volume. | +
description | +Text | +Text to be added to the volume's description on the Nimble array. | +
perfPolicy | +Text | +The name of the performance policy to assign to the volume. Default example performance policies include "Backup Repository", "Exchange 2003 data store", "Exchange 2007 data store", "Exchange 2010 data store", "Exchange log", "Oracle OLTP", "Other Workloads", "SharePoint", "SQL Server", "SQL Server 2012", "SQL Server Logs". | +
protectionTemplate | +Text | +The name of the protection template to assign to the volume. Default examples of protection templates include "Retain-30Daily", "Retain-48Hourly-30Daily-52Weekly", and "Retain-90Daily". | +
folder | +Text | +The name of the Nimble folder in which to place the volume. | +
thick | +Boolean | +Indicates that the volume should be thick provisioned. | +
dedupeEnabled | +Boolean | +Indicates that the volume should enable deduplication. | +
syncOnUnmount | +Boolean | +Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. | +
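As an illustration only, several of these parameters can be combined in a single StorageClass. The class name, folder, performance policy and limits below are hypothetical values and should be adapted to your environment:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: oltp-gold
provisioner: hpe.com/nimble
parameters:
  description: "Volume provisioned by StorageClass oltp-gold"
  perfPolicy: "SQL Server"
  folder: "oltp"
  limitIOPS: "25000"
  destroyOnRm: "true"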
Note
+Performance Policies, Folders and Protection Templates are Nimble specific constructs that can be created on the Nimble array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight.
+These parameters are immutable for clones once a volume has been created.
+Parameter | +String | +Description | +
---|---|---|
fsOwner | +userId:groupId | +The user id and group id that should own the root directory of the filesystem. | +
fsMode | +Octal digits | +1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. | +
encryption | +Boolean | +Indicates that the volume should be encrypted. | +
pool | +Text | +The name of the pool in which to place the volume. | +
Cloning supports two modes of cloning. Either use cloneOf
and reference a PVC in the current namespace or use importVolAsClone
and reference a Nimble volume name to clone and import to Kubernetes.
Parameter | +String | +Description | +
---|---|---|
cloneOf | +Text | +The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. |
+
importVolAsClone | +Text | +The name of the Nimble volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. |
+
snapshot | +Text | +The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. | +
createSnapshot | +Boolean | +Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. |
+
snapshotPrefix | +Text | +A prefix to add to the beginning of the snapshot name. | +
Importing volumes to Kubernetes requires the source Nimble volume to be offline. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin.
+Parameter | +String | +Description | +
---|---|---|
importVol | +Text | +The name of the Nimble volume to import. | +
snapshot | +Text | +The name of the Nimble snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. | +
restore | +Boolean | +Restores the volume to the last snapshot taken on the volume. | +
takeover | +Boolean | +Indicates the current group will takeover ownership of the Nimble volume and volume collection. This should be performed against a downstream replica. | +
reverseRepl | +Boolean | +Reverses the replication direction so that writes to the Nimble volume are replicated back to the group where it was replicated from. | +
forceImport | +Boolean | +Forces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead. | +
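To sketch how the takeover parameters could fit together, the hypothetical StorageClass below imports a downstream replica, takes over ownership and reverses the replication direction. The class and volume names are illustrative only:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: import-dr-replica
provisioner: hpe.com/nimble
parameters:
  importVol: "production-db-vol"
  takeover: "true"
  reverseRepl: "true"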
Note
+HPE Nimble Docker Volume workflows work with a 1:1 mapping between a volume and a volume collection.
+A StorageClass
is used to provision or clone an HPE Cloud Volumes-backed persistent volume. It can also be used to import an existing HPE Cloud Volumes volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.
A sample StorageClass is provided.
+Note
+These are optional parameters.
+These parameters are mutable between a parent volume and a clone created from a snapshot.
+Parameter | +String | +Description | +
---|---|---|
nameSuffix | +Text | +Suffix to append to Cloud Volumes. | +
destroyOnRm | +Boolean | +Indicates the backing Cloud volume (including snapshots) should be destroyed when the PVC is deleted. | +
limitIOPS | +Integer | +The IOPS limit of the volume. The IOPS limit should be in the range 300 to 50000. | +
perfPolicy | +Text | +The name of the performance policy to assign to the volume. Default example performance policies include "Other, Exchange, Oracle, SharePoint, SQL, Windows File Server". | +
protectionTemplate | +Text | +The name of the protection template to assign to the volume. Default examples of protection templates include "daily:3, daily:7, daily:14, hourly:6, hourly:12, hourly:24, twicedaily:4, twicedaily:8, twicedaily:14, weekly:2, weekly:4, weekly:8, monthly:3, monthly:6, monthly:12 or none". | +
volumeType | +Text | +Cloud Volume type. Supported types are PF and GPF. | +
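As a sketch only, a StorageClass combining these parameters could look like the example below. The provisioner name hpe.com/cv is an assumption here; use the provisioner name registered by your HPE Cloud Volumes deployment and adapt the values to your environment:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cv-general-purpose
provisioner: hpe.com/cv    # assumption; verify against your deployment
parameters:
  volumeType: "GPF"
  limitIOPS: "1000"
  perfPolicy: "Other"
  protectionTemplate: "daily:7"
  destroyOnRm: "true"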
These parameters are immutable for clones once a volume has been created.
+Parameter | +String | +Description | +
---|---|---|
fsOwner | +userId:groupId | +The user id and group id that should own the root directory of the filesystem. | +
fsMode | +Octal digits | +1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. | +
encryption | +Boolean | +Indicates that the volume should be encrypted. | +
Cloning supports two modes of cloning. Either use cloneOf
and reference a PVC in the current namespace or use importVolAsClone
and reference a Cloud volume name to clone and import to Kubernetes.
Parameter | +String | +Description | +
---|---|---|
cloneOf | +Text | +The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. |
+
importVolAsClone | +Text | +The name of the Cloud Volume volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. |
+
snapshot | +Text | +The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. | +
createSnapshot | +Boolean | +Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. |
+
snapshotPrefix | +Text | +A prefix to add to the beginning of the snapshot name. | +
replStore | +Text | +Replication store name. Should be used with importVolAsClone parameter to clone a replica volume | +
Importing volumes to Kubernetes requires that the source Cloud Volume is not attached to any nodes. All previous Access Control Records will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin.
+Parameter | +String | +Description | +
---|---|---|
importVol | +Text | +The name of the Cloud volume to import. | +
forceImport | +Boolean | +Forces the import of a volume that is provisioned by another K8s cluster but not attached to any nodes. | +
This section outlines a few troubleshooting steps for the HPE Volume Driver for Kubernetes FlexVolume Plugin. This product is supported by HPE. Please consult with your support organization (Nimble, Cloud Volumes etc.) prior to attempting any configuration changes.
+The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach ("MUAD") operations as workloads request storage resources. The binary relies on communicating with a socket on the host where the volume plugin responsible for the MUAD operations performs control-plane or data-plane operations against the backend system hosting the actual volumes.
+The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking.
+The name and the location of the binary varies based on Kubernetes distribution (the default 'exec' path) and what backend driver is being used. In a typical scenario, using Nimble, this is expected:
+/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble.json
By default, it contains only the path to the socket file for the volume plugin:
+
{
+ "dockerVolumePluginSocketPath": "/etc/hpe-storage/nimble.sock"
+}
+
+Valid options for the FlexVolume driver can be inspected by executing the binary on the host with the config
argument:
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble config
+Error processing option 'logFilePath' - key:logFilePath not found
+Error processing option 'logDebug' - key:logDebug not found
+Error processing option 'supportsCapabilities' - key:supportsCapabilities not found
+Error processing option 'stripK8sFromOptions' - key:stripK8sFromOptions not found
+Error processing option 'createVolumes' - key:createVolumes not found
+Error processing option 'listOfStorageResourceOptions' - key:listOfStorageResourceOptions not found
+Error processing option 'factorForConversion' - key:factorForConversion not found
+Error processing option 'enable1.6' - key:enable1.6 not found
+
+Driver=nimble Version=v2.5.1-50fbff2aa14a693a9a18adafb834da33b9e7cc89
+Current Config:
+ dockerVolumePluginSocketPath = /etc/hpe-storage/nimble.sock
+ stripK8sFromOptions = true
+ logFilePath = /var/log/dory.log
+ logDebug = false
+ createVolumes = false
+ enable1.6 = false
+ factorForConversion = 1073741824
+ listOfStorageResourceOptions = [size sizeInGiB]
+ supportsCapabilities = true
+
+An example tweak could be to enable debug logging and enable support for Kubernetes 1.6 (which we don't officially support). The config file would then end up like this:
+
{
+ "dockerVolumePluginSocketPath": "/etc/hpe-storage/nimble.sock",
+ "logDebug": true,
+ "enable1.6": true
+}
+
+Execute the binary again (nimble config
) to ensure the parameters and config file get parsed correctly. Since the config file is read on each FlexVolume operation, no restart is needed.
See Advanced for more parameters for the driver.json
file.
To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request:
+
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble mount no/op '{"name":"myvol1"}'
+
+If the FlexVolume driver can successfully communicate with the volume plugin socket:
+
{"status":"Failure","message":"configured to NOT create volumes"}
+
+In the case of any other output, check if the backend volume plugin is alive with curl
:
curl --unix-socket /etc/hpe-storage/nimble.sock -d '{}' http://localhost/VolumeDriver.Capabilities
+
+It should output:
+
{"capabilities":{"scope":"global"},"Err":""}
+
+The HPE Volume Driver for Kubernetes FlexVolume Plugin logs data to the standard output stream. If the logs need to be retained long term, use a standard logging solution. Some of the logs on the host are persisted and follow standard logrotate policies.
+FlexVolume driver logs:
+
kubectl logs -f daemonset.apps/hpe-flexvolume-driver -n kube-system
+
+The logs are persisted at /var/log/hpe-docker-plugin.log
and /var/log/dory.log.
Dynamic Provisioner logs:
+
kubectl logs -f deployment.apps/hpe-dynamic-provisioner -n kube-system
+
+The logs are persisted at /var/log/hpe-dynamic-provisioner.log
Log collector script hpe-logcollector.sh
can be used to collect diagnostic logs using kubectl.
Download the script as follows:
+
curl -O https://raw.githubusercontent.com/hpe-storage/flexvolume-driver/master/hpe-logcollector.sh
+chmod 555 hpe-logcollector.sh
+
+Usage:
+
./hpe-logcollector.sh -h
+Diagnostic Script to collect HPE Storage logs using kubectl
+
+Usage:
+ hpe-logcollector.sh [-h|--help][--node-name NODE_NAME][-n|--namespace NAMESPACE][-a|--all]
+Where
+-h|--help Print the Usage text
+--node-name NODE_NAME where NODE_NAME is kubernetes Node Name needed to collect the
+ hpe diagnostic logs of the Node
+-n|--namespace NAMESPACE where NAMESPACE is namespace of the pod deployment. default is kube-system
+-a|--all collect diagnostic logs of all the nodes.If
+ nothing is specified logs would be collected
+ from all the nodes
+
+This section describes some of the advanced configuration steps available to tweak behavior of the HPE Volume Driver for Kubernetes FlexVolume Plugin.
+During normal operations, defaults are set in either the ConfigMap
or in a StorageClass
itself. The picking order is:
Please see Diagnostics to locate the driver for your particular environment. Add this object to the configuration file, nimble.json
, for example:
{
+ "defaultOptions": [{"option1": "value1"}, {"option2": "value2"}]
+}
+
+Where option1
and option2
are valid backend volume plugin create options.
Note
+It's highly recommended to control defaults with StorageClass
API objects or the ConfigMap
.
Each driver supports setting certain "global" options in the ConfigMap
. Some options are common, some are driver specific.
Parameter | +String | +Description | +
---|---|---|
volumeDir | +Text | +Root directory on the host to mount the volumes. This parameter needs correlation with the podsmountdir path in the volumeMounts stanzas of the deployment. |
+
logDebug | +Boolean | +Turn on debug logging, set to false by default. | +
Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+The Open Source project Dory was designed in 2017 to transition Docker Volume plugins to be used with Kubernetes. Dory is the shim between the FlexVolume exec calls to the Docker Volume API.
+ +The main repository is not currently maintained and the most up-to-date version lives in the HPE Volume Driver for Kubernetes FlexVolume Plugin repository where Dory is packaged as a privileged DaemonSet to support HPE storage products. There may be other forks associated with other Docker Volume plugins out there.
+Why is the driver called Dory?
+Dory speaks whale!
+As the FlexVolume Plugin doesn't provide any dynamic provisioning, HPE also designed a provisioner, Doryd, that works with Docker Volume plugins to provide a complete solution. It runs as a Deployment and monitors PVC requests.
+According to the Kubernetes SIG storage community, the FlexVolume Plugin interface will continue to be supported.
+HPE encourages using the available CSI drivers for Kubernetes 1.13 and newer where available.
+ +Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+The HPE 3PAR and Primera Volume Plug-in for Docker leverages Ansible to deploy the 3PAR/Primera driver for Kubernetes in order to provide scalable and persistent storage for stateful applications.
+Important
+Using HPE 3PAR/Primera Storage with Kubernetes 1.15 and newer, please use the HPE CSI Driver for Kubernetes.
+Source code is available in the hpe-storage/python-hpedockerplugin GitHub repo.
+Refer to the SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker.
+The HPE 3PAR/Primera FlexVolume driver supports multiple backends that are based on a "container provider" architecture.
+Ensure that you have reviewed the System Requirements.
+Driver | +HPE 3PAR/Primera OS Version | +Release Notes | +
---|---|---|
v3.3.1 | +3PAR OS: 3.3.1 MU5+ Primera OS: 4.0+ |
+v3.3.1 | +
Note: Refer to SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker.
+The recommended way to deploy and manage the HPE 3PAR and Primera Volume Plug-in for Kubernetes is to use Ansible.
+Use the following steps to configure Ansible to perform the installation.
+Ensure that Ansible (v2.5 to v2.8) is installed. For more information, see Ansible Installation Guide.
+NOTE: Ansible only needs to be installed on the machine that will be performing the deployment. Ansible does not need to be installed on your Kubernetes cluster.
+
$ pip install ansible
+$ ansible --version
+ansible 2.7.12
+
+Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
+Confirm that you can connect using SSH to all the nodes in your Kubernetes cluster using the same username. If necessary, add your public SSH key to the authorized_keys
file on those systems.
$ cd ~
+$ git clone https://github.com/hpe-storage/python-hpedockerplugin
+
+Modify the hosts
file to define the Kubernetes/OpenShift Master and Worker nodes. Also define where the HPE etcd cluster will be deployed; this can be done within the cluster or on external servers.
$ vi python-hpedockerplugin/ansible_3par_docker_plugin/hosts
+
[masters]
+192.168.1.51
+
+[workers]
+192.168.1.52
+192.168.1.53
+
+[etcd]
+192.168.1.51
+192.168.1.52
+192.168.1.53
+
Create the properties/plugin_configuration_properties.yml based on your HPE 3PAR/Primera Storage array configuration.
+
$ vi python-hpedockerplugin/ansible_3par_docker_plugin/properties/plugin_configuration_properties.yml
+
+NOTE: Some of the properties are mandatory and must be specified in the properties file while others are optional.
+
INVENTORY:
+ DEFAULT:
+#Mandatory Parameters--------------------------------------------------------------------------------
+
+ # Specify the port to be used by HPE 3PAR plugin etcd cluster
+ host_etcd_port_number: 23790
+ # Plugin Driver - iSCSI
+ hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+ hpe3par_ip: <3par_array_IP>
+ hpe3par_username: <3par_user>
+ hpe3par_password: <3par_password>
+ #Specify the 3PAR port - 8080 default
+ hpe3par_port: 8080
+ hpe3par_cpg: <cpg_name>
+
+ # Plugin version - Required only in DEFAULT backend
+ volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+ # Dory installer version - Required for Openshift/Kubernetes setup
+ # Supported versions are dory_installer_v31, dory_installer_v32
+ dory_installer_version: dory_installer_v32
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+ logging: DEBUG
+ hpe3par_snapcpg: FC_r6
+ #hpe3par_iscsi_chap_enabled: True
+ use_multipath: True
+ #enforce_multipath: False
+ #vlan_tag: True
+
+
+Available Properties Parameters
+Property | +Mandatory | +Default Value | +Description | +
---|---|---|---|
hpedockerplugin_driver | +Yes | +No default value | +ISCSI/FC driver (hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver/hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver) | +
hpe3par_ip | +Yes | +No default value | +IP address of 3PAR array | +
hpe3par_username | +Yes | +No default value | +3PAR username | +
hpe3par_password | +Yes | +No default value | +3PAR password | +
hpe3par_port | +Yes | +8080 | +3PAR HTTP_PORT port | +
hpe3par_cpg | +Yes | +No default value | +Primary user CPG | +
volume_plugin | +Yes | +No default value | +Name of the docker volume image (only required with DEFAULT backend) | +
encryptor_key | +No | +No default value | +Encryption key string for 3PAR password | +
logging | +No | +INFO | +Log level | +
hpe3par_debug | +No | +No default value | +3PAR log level | +
suppress_requests_ssl_warning | +No | +True | +Suppress request SSL warnings | +
hpe3par_snapcpg | +No | +hpe3par_cpg | +Snapshot CPG | +
hpe3par_iscsi_chap_enabled | +No | +False | +ISCSI chap toggle | +
hpe3par_iscsi_ips | +No | +No default value | +Comma separated iscsi port IPs (only required if driver is ISCSI based) | +
use_multipath | +No | +False | +Multipath toggle | +
enforce_multipath | +No | +False | +Forcefully enforce multipath | +
ssh_hosts_key_file | +No | +/root/.ssh/known_hosts | +Path to hosts key file | +
quorum_witness_ip | +No | +No default value | +Quorum witness IP | +
mount_prefix | +No | +No default value | +Alternate mount path prefix | +
hpe3par_iscsi_ips | +No | +No default value | +Comma separated iscsi IPs. If not provided, all iscsi IPs will be read from the array and populated in hpe.conf | +
vlan_tag | +No | +False | +Populates the iscsi_ips which are vlan tagged, only applicable if hpe3par_iscsi_ips is not specified | +
replication_device | +No | +No default value | +Replication backend properties | +
dory_installer_version | +No | +dory_installer_v32 | +Required for Openshift/Kubernetes setup. Dory installer version, supported versions are dory_installer_v31, dory_installer_v32 | +
hpe3par_server_ip_pool | +Yes | +No default value | +This parameter is specific to fileshare. It can be specified as a mix of range of IPs and individual IPs delimited by comma. Each range or individual IP must be followed by the corresponding subnet mask delimited by semi-colon E.g.: IP-Range:Subnet-Mask,Individual-IP:SubnetMask | +
hpe3par_default_fpg_size | +No | +No default value | +This parameter is specific to fileshare. Default fpg size, It must be in the range 1TiB to 64TiB. If not specified here, it defaults to 16TiB | +
Hint
+Refer to Replication Support for details on enabling Replication support.
+
#Mandatory Parameters for Filepersona---------------------------------------------------------------
+ DEFAULT_FILE:
+ # Specify the port to be used by HPE 3PAR plugin etcd cluster
+ host_etcd_port_number: 23790
+ # Plugin Driver - File driver
+ hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_file.HPE3PARFileDriver
+ hpe3par_ip: 192.168.2.50
+ hpe3par_username: demo_user
+ hpe3par_password: demo_pass
+ hpe3par_cpg: demo_cpg
+ hpe3par_port: 8080
+ hpe3par_server_ip_pool: 192.168.98.3-192.168.98.10:255.255.192.0
+#Optional Parameters for Filepersona----------------------------------------------------------------
+ hpe3par_default_fpg_size: 16
+
+
INVENTORY:
+ DEFAULT:
+#Mandatory Parameters-------------------------------------------------------------------------------
+
+ # Specify the port to be used by HPE 3PAR plugin etcd cluster
+ host_etcd_port_number: 23790
+ # Plugin Driver - iSCSI
+ hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+ hpe3par_ip: 192.168.1.50
+ hpe3par_username: 3paradm
+ hpe3par_password: 3pardata
+ hpe3par_port: 8080
+ hpe3par_cpg: FC_r6
+
+ # Plugin version - Required only in DEFAULT backend
+ volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+ # Dory installer version - Required for Openshift/Kubernetes setup
+ # Supported versions are dory_installer_v31, dory_installer_v32
+ dory_installer_version: dory_installer_v32
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+ #ssh_hosts_key_file: '/root/.ssh/known_hosts'
+ logging: DEBUG
+ #hpe3par_debug: True
+ #suppress_requests_ssl_warning: True
+ #hpe3par_snapcpg: FC_r6
+ #hpe3par_iscsi_chap_enabled: True
+ #use_multipath: False
+ #enforce_multipath: False
+ #vlan_tag: True
+
+#Additional Backend (Optional)----------------------------------------------------------------------
+
+ 3PAR1:
+#Mandatory Parameters-------------------------------------------------------------------------------
+
+ # Specify the port to be used by HPE 3PAR plugin etcd cluster
+ host_etcd_port_number: 23790
+ # Plugin Driver - Fibre Channel
+ hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver
+ hpe3par_ip: 192.168.2.50
+ hpe3par_username: 3paradm
+ hpe3par_password: 3pardata
+ hpe3par_port: 8080
+ hpe3par_cpg: FC_r6
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+ #ssh_hosts_key_file: '/root/.ssh/known_hosts'
+ logging: DEBUG
+ #hpe3par_debug: True
+ #suppress_requests_ssl_warning: True
+ hpe3par_snapcpg: FC_r6
+ #use_multipath: False
+ #enforce_multipath: False
+
+
$ cd python-hpedockerplugin/ansible_3par_docker_plugin/
+$ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml
+
+
The installer should not show any failures and the PLAY RECAP should look like the example below.
+
+PLAY RECAP ***********************************************************************
+<Master1-IP> : ok=85 changed=33 unreachable=0 failed=0
+<Master2-IP> : ok=76 changed=29 unreachable=0 failed=0
+<Master3-IP> : ok=76 changed=29 unreachable=0 failed=0
+<Worker1-IP> : ok=70 changed=27 unreachable=0 failed=0
+<Worker2-IP> : ok=70 changed=27 unreachable=0 failed=0
+localhost : ok=9 changed=3 unreachable=0 failed=0
+
+
$ docker ps | grep plugin; ssh <Master2-IP> "docker ps | grep plugin";ssh <Master3-IP> "docker ps | grep plugin";ssh <Worker1-IP> "docker ps | grep plugin";ssh <Worker2-IP> "docker ps | grep plugin"
+51b9d4b1d591 hpestorage/legacyvolumeplugin:3.3.1 "/bin/sh -c ./plugin…" 12 minutes ago Up 12 minutes plugin_container
+a43f6d8f5080 hpestorage/legacyvolumeplugin:3.3.1 "/bin/sh -c ./plugin…" 12 minutes ago Up 12 minutes plugin_container
+a88af9f46a0d hpestorage/legacyvolumeplugin:3.3.1 "/bin/sh -c ./plugin…" 12 minutes ago Up 12 minutes plugin_container
+5b20f16ab3af hpestorage/legacyvolumeplugin:3.3.1 "/bin/sh -c ./plugin…" 12 minutes ago Up 12 minutes plugin_container
+b0813a22cbd8 hpestorage/legacyvolumeplugin:3.3.1 "/bin/sh -c ./plugin…" 12 minutes ago Up 12 minutes plugin_container
+
+
kubectl get pods -n kube-system | grep doryd
+NAME READY STATUS RESTARTS AGE
+kube-storage-controller-doryd-7dd487b446-xr6q2 1/1 Running 0 45s
+
+Get started using the FlexVolume driver by setting up StorageClass
and PVC
API objects. See Using for examples.
These instructions are provided as an example on how to use the HPE 3PAR/Primera Volume Plug-in with a HPE 3PAR/Primera Storage Array.
+The below YAML declarations are meant to be created with kubectl create
. Either copy the content to a file on the host where kubectl
is being executed, or copy & paste into the terminal, like this:
kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+
+Tip
+Some of the examples supported by the HPE 3PAR/Primera FlexVolume driver are available for HPE 3PAR/Primera Storage in the GitHub repo.
+To get started, create a StorageClass
API object referencing the hpe-secret
and defining additional (optional) StorageClass
parameters:
Sample storage classes can be found for HPE 3PAR/Primera Storage.
+Create a StorageClass
with volume parameters as required. Change the CPG per your requirements.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: sc-gold
+provisioner: hpe.com/hpe
+parameters:
+ provisioning: 'full'
+ cpg: 'SSD_r6'
+ fsOwner: '1001:1001'
+
+Create a PersistentVolumeClaim
. This makes sure a volume is created and provisioned on your behalf:
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: sc-gold-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 25Gi
+ storageClassName: sc-gold
+
+Check that a new PersistentVolume
is created based on your claim:
$ kubectl get pv
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+sc-gold-pvc-13336da3-7ca3-11e9-826c-00505692581f 25Gi RWO Delete Bound default/pvc-gold sc-gold 3s
+
+The above output means that the FlexVolume driver successfully provisioned a new volume and bound to the requesting PVC
to a new PV
. The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod
that refers to the above volume. When the Pod
is created, the volume will be attached, formatted and mounted to the specified container:
kind: Pod
+apiVersion: v1
+metadata:
+ name: pod-nginx
+spec:
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
+ name: "http-server"
+ volumeMounts:
+ - name: export
+ mountPath: "/usr/share/nginx/html"
+ volumes:
+ - name: export
+ persistentVolumeClaim:
+ claimName: sc-gold-pvc
+
+Check if the pod is running successfully:
+
$ kubectl get pod pod-nginx
+NAME READY STATUS RESTARTS AGE
+pod-nginx 1/1 Running 0 2m29s
+
+These StorageClass
examples help guide combinations of options when provisioning volumes.
This StorageClass
will create a snapshot of a "production" volume.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: sc-gold-snap-mongo
+provisioner: hpe.com/hpe
+parameters:
+ virtualCopyOf: "sc-mongo-10dc1195-779b-11e9-b787-0050569bb07c"
+
+This StorageClass
will create clones of a "production" volume.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: sc-gold-clone
+provisioner: hpe.com/hpe
+parameters:
+ cloneOf: "sc-gold-2a82c9e5-6213-11e9-8d53-0050569bb07c"
+
+This StorageClass
will add a standard backend volume to a 3PAR Replication Group. If the replicationGroup specified does not exist, the plugin will create one. See Replication Support for more details on configuring replication.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: sc-mongodb-replicated
+provisioner: hpe.com/hpe
+parameters:
+ provisioning: 'full'
+ replicationGroup: 'mongodb-app1'
+
+This StorageClass
will import an existing 3PAR/Primera volume to Kubernetes. The source volume needs to be offline for the import to succeed.
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: import-clone-legacy-prod
+provisioner: hpe.com/hpe
+parameters:
+ importVol: "production-db-vol"
+
+The HPE Dynamic Provisioner for Kubernetes (doryd) understands a set of annotation keys a user can set on a PVC
. If the corresponding keys exist in the list of the allowOverrides
key in the StorageClass
, the end-user can tweak certain aspects of the provisioning workflow. This opens up very advanced data services.
StorageClass object:
+
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: sc-gold
+provisioner: hpe.com/hpe
+parameters:
+ provisioning: 'full'
+ cpg: 'SSD_r6'
+ fsOwner: '1001:1001'
+ allowOverrides: provisioning,compression,cpg,fsOwner
+
+PersistentVolumeClaim object:
+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+ annotations:
+ hpe.com/provisioning: "thin"
+ hpe.com/cpg: "FC_r6"
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 25Gi
+ storageClassName: sc-gold
+
+This will create a PV
thinly provisioned using the FC_r6 CPG.
In order to upgrade the driver, simply modify the ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml
used for the initial deployment and modify hpestorage/legacyvolumeplugin
to the latest image from docker hub.
For example:
+
volume_plugin: hpestorage/legacyvolumeplugin:3.3
+
+ Change to:
+ volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+
+Re-run the installer.
+
$ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml
+
+Run the following to uninstall the FlexVolume driver from the cluster.
+
$ cd ~
+$ cd python-hpedockerplugin/ansible_3par_docker_plugin
+$ ansible-playbook -i hosts uninstall/uninstall_hpe_3par_volume_driver.yml
+
+This section highlights all the available StorageClass
parameters that are supported.
A StorageClass
is used to provision or clone an HPE 3PAR/Primera Storage-backed persistent volume. It can also be used to import an existing HPE 3PAR/Primera Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.
A sample StorageClass is provided.
+Note
+These are optional parameters.
+These parameters are mutable between a parent volume and a clone created from a snapshot.
+Parameter | +Type | +Options | +Example | +
---|---|---|---|
size | +Integer | +- | +size: "10" | +
provisioning | ++ | thin, full, dedupe | +provisioning: "thin" | +
flash-cache | +Text | +true, false | +flash-cache: "true" | +
compression | +boolean | +true, false | +compression: "true" | +
MountConflictDelay | +Integer | +- | +MountConflictDelay: "30" | +
qos-name | +Text | +vvset name | +qos-name: " |
+
replicationGroup | +Text | +3PAR RCG name | +replicationGroup: "Test-RCG" | +
fsOwner | +userId:groupId | +The user id and group id that should own the root directory of the filesystem. | ++ |
fsMode | +Octal digits | +1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. | ++ |
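For illustration, a StorageClass combining a few of these parameters might look like the following. The CPG and vvset names are placeholders for objects that must already exist on the array:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-silver
provisioner: hpe.com/hpe
parameters:
  provisioning: 'thin'
  compression: 'true'
  cpg: 'SSD_r6'
  MountConflictDelay: '30'
  qos-name: '<vvset_name>'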
Either use cloneOf
and reference a PVC in the current namespace or use virtualCopyOf
and reference a 3PAR/Primera volume name to snapshot/clone and import into Kubernetes.
Parameter | +Type | +Options | +Example | +
---|---|---|---|
cloneOf | +Text | +volume name | +cloneOf: "<volume_name>" | +
virtualCopyOf | +Text | +volume name | +virtualCopyOf: "<volume_name>" | +
expirationHours | +Integer | +option of virtualCopyOf | +expirationHours: "10" | +
retentionHours | +Integer | +option of virtualCopyOf | +retentionHours: "10" | +
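As an example, the hypothetical StorageClass below creates a virtual copy of an existing volume and lets the snapshot expire after 72 hours. The class and volume names are placeholders:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold-snap-expiring
provisioner: hpe.com/hpe
parameters:
  virtualCopyOf: "<source_volume_name>"
  expirationHours: "72"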
Importing volumes to Kubernetes requires the source 3PAR/Primera volume to be offline.
+Parameter | +Type | +Description | +Example | +
---|---|---|---|
importVol | +Text | +volume name | +importVol: "<volume_name>" | +
The HPE 3PAR/Primera FlexVolume driver supports array-based synchronous and asynchronous replication. In order to enable replication within the FlexVolume driver, the arrays need to be properly zoned, visible to the Kubernetes cluster, and replication configured. For Peer Persistence, a quorum witness will need to be configured.
+Once the replication is enabled at the array level, the FlexVolume driver will need to be configured.
+Important
+Replication support can be enabled during initial deployment through the plugin configuration file. In order to enable replication support post deployment, modify the plugin_configuration_properties.yml used for deployment, add the replication parameter section below, and re-run the Ansible installer.
+Edit the plugin_configuration_properties.yml file and edit the Optional Replication Section.
+
INVENTORY:
+ DEFAULT:
+#Mandatory Parameters-------------------------------------------------------------------------------
+
+ # Specify the port to be used by HPE 3PAR plugin etcd cluster
+ host_etcd_port_number: 23790
+ # Plugin Driver - iSCSI
+ hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+ hpe3par_ip: <local_3par_ip>
+ hpe3par_username: <local_3par_user>
+ hpe3par_password: <local_3par_password>
+ hpe3par_port: 8080
+ hpe3par_cpg: FC_r6
+
+ # Plugin version - Required only in DEFAULT backend
+ volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+ # Dory installer version - Required for Openshift/Kubernetes setup
+ dory_installer_version: dory_installer_v32
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+ logging: DEBUG
+ hpe3par_snapcpg: FC_r6
+ use_multipath: False
+ enforce_multipath: False
+
+#Optional Replication Parameters--------------------------------------------------------------------
+ replication_device:
+ backend_id: remote_3PAR
+ #Quorum Witness required for Peer Persistence only
+ #quorum_witness_ip: <quorum_witness_ip>
+ replication_mode: synchronous
+ cpg_map: "local_CPG:remote_CPG"
+ snap_cpg_map: "local_copy_CPG:remote_copy_CPG"
+ hpe3par_ip: <remote_3par_ip>
+ hpe3par_username: <remote_3par_user>
+ hpe3par_password: <remote_3par_password>
+ hpe3par_port: 8080
+ #vlan_tag: False
+
+Once the properties file is configured, you can proceed with the standard installation steps.
+This section outlines a few troubleshooting steps for the HPE 3PAR/Primera FlexVolume driver. This product is supported by HPE. Please consult with your support organization prior to attempting any configuration changes.
+The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach ("MUAD") operations as workloads request storage resources. The binary relies on communicating with a socket on the host where the volume plugin responsible for the MUAD operations performs control-plane or data-plane operations against the backend system hosting the actual volumes.
+The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking.
+The name and the location of the binary varies based on Kubernetes distribution (the default 'exec' path) and what backend driver is being used. In a typical scenario, using 3PAR/Primera, this is expected:
+/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe
/etc/hpedockerplugin/hpe.conf
To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request:
+
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe mount no/op '{"name":"myvol1"}'
+
+If the FlexVolume driver can successfully communicate with the volume plugin socket:
+
{"status":"Failure","message":"configured to NOT create volumes"}
+
+In the case of any other output, check if the backend volume plugin is alive:
+
$ docker volume create -d hpe -o help=backends
+
+It should output:
+
=================================
+NAME STATUS
+=================================
+DEFAULT OK
+
+To verify the etcd members on nodes.
+
$ /usr/bin/etcdctl --endpoints http://<Master1-IP>:23790 member list
+
+It should output:
+
b70ca254f54dd23: name=<Worker2-IP> peerURLs=http://<Worker2-IP>:23800 clientURLs=http://<Worker2-IP>:23790 isLeader=true
+236bf7d5cc7a32d4: name=<Worker1-IP> peerURLs=http://<Worker1-IP>:23800 clientURLs=http://<Worker1-IP>:23790 isLeader=false
+445e80419ae8729b: name=<Master1-IP> peerURLs=http://<Master1-IP>:23800 clientURLs=http://<Master1-IP>:23790 isLeader=false
+e340a5833e93861e: name=<Master3-IP> peerURLs=http://<Master3-IP>:23800 clientURLs=http://<Master3-IP>:23790 isLeader=false
+f5b5599d719d376e: name=<Master2-IP> peerURLs=http://<Master2-IP>:23800 clientURLs=http://<Master2-IP>:23790 isLeader=false
+
+The HPE 3PAR/Primera FlexVolume driver logs data to the standard output stream. If the logs need to be retained long term, use a standard logging solution. Some of the logs on the host are persisted and follow standard logrotate policies.
+HPE 3PAR/Primera FlexVolume logs: (per node)
+
$ docker logs -f plugin_container
+
+Dynamic Provisioner logs:
+
kubectl logs -f kube-storage-controller-doryd -n kube-system
+
+The logs are persisted at /var/log/hpe-dynamic-provisioner.log
Expired content
+The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.
+This is an umbrella documentation project for the HPE CSI Driver for Kubernetes and neighboring ecosystems for HPE primary storage including HPE Alletra Storage MP, Alletra 9000, Alletra 5000/6000, Nimble Storage, Primera and 3PAR storage systems. The documentation is tailored for IT Ops, developers and technology partners.
+ +Use the navigation on the left-hand side to explore the different topics. Feel free to contribute to this project but please read the contributing guidelines.
+Use the navigation to the left. Not sure what you're looking for? → Get started!
+ + +Did you know?
+SCOD is "docs" in reverse?
+Welcome to the "101" section of SCOD. The goal of this section is to create a learning resource for individuals who want to learn about emerging topics in a cloud native world where containers are the focal point. The content is slightly biased towards storage.
+We aim to provide a learning resource collection that is generic enough to comprehend nuances in the different solutions and paradigms. Hewlett Packard Enterprise products are likely to be referenced in some examples and resources. We can therefore not claim vendor neutrality or a Switzerland opinion. External resources are the primary learning assets used to frame certain topics.
+Let's start the learning journey.
+The term "cloud native" stems from a software development model where resources are consumed as services. Compute, network and storage consumed through APIs, CLIs and web administration interfaces. Consumption is often modeled around paying only for what is being used.
+The applications deployed into Cloud Native Computing environments are often divided into small chunks that are operated independently, referred to as microservices. On the uprising is a broader adoption of a concept called serverless where your application runs only when called and is billed in milliseconds.
+Many public cloud vendors provide cloud native applications as services on their respective clouds. An example would be to consume a SQL database as a service rather than deploying and managing it yourself.
+These are some of the key elements of Cloud Native Computing.
+Curated list of learning resources for Cloud Native Computing.
+How to get hands-on experience of Cloud Native Computing.
+Tools to interact with infrastructure and applications come in many shapes and forms. A common pattern is to learn by visually creating and deleting resources to understand an end-state. Once a pattern has been established, either APIs, 3rd party or a custom CLI is used to manage the life-cycle of the deployment in a declarative manner by manipulating RESTful APIs. Also known as Infrastructure-as-Code.
+These are some of the key elements of Cloud Native Computing Tooling.
+Curated list of learning resources for Cloud Native Computing Tooling.
+How to get hands-on experience of Cloud Native Computing Tooling.
Storage for cloud computing comes in many shapes and forms. Compute instances boot off block devices provided by the IaaS through the hypervisor. More devices may be attached for application data to keep the host OS and application separate. Most clouds allow these devices to be snapshotted, cloned and reattached to other instances. These block devices are normally offered with different backend media, such as flash or spinning disks. Depending on the use case and budget, parameters may be tuned to be just right.
+For unstructured workloads, API driven object storage is the dominant technology due to the dramatic difference in cost and simplicity vs cloud provided block storage. An object is uploaded through an API endpoint with HTTP and automatically distributed (highly configurable) to provide high durability. The URL of the object will remain static for the duration of its lifetime. The main prohibitor for object storage adoption is that existing applications relying on POSIX filesystems need to be rewritten.
+These are some of the key elements of Cloud Native Storage.
+Curated list of learning resources for Cloud Native Storage.
+How to get hands-on experience of Cloud Native Storage.
+A container is operating system-level virtualization and has been around for quite some time. By definition, the container share the kernel of the host and relies on certain abstractions to be useful. Docker the company made the technology approachable and incredibly more convenient than any predecessor. In the simplest of forms, a container image contains a virtual filesystem that contains only the dependencies the application needs. An example would be to include the Python interpreter if you wrote a program in Python.
+Containerized applications are primarily designed to run headless. In most cases these applications need to communicate with the outside world or allow inbound traffic depending on the application. Docker containers should be treated as transient, each instance starts in a known state and any data stored inside the virtual filesystem should be treated as ephemeral. This makes it extremely easy and convenient to upgrade and rollback a container.
+If data is required to persist between upgrades and rollbacks of the container, it needs to be stored outside of the container mapped from the host operating system.
+The wide adoption of containers is because they're lightweight, reproducible and run everywhere. Iterations of software delivery lifecycles may be cut down from weeks to seconds with the right processes and tools.
+Container images are layered per change made when the container is built. Each layer has a cryptographic hash and the layer itself can be shared between multiple containers readonly. When a new container is started from an image, the container runtime creates a COW (copy-on-write) filesystem where the particular container data is stored. This is in turn very effective as you only need one copy of a layer on the host. For example, if a bunch of applications are based off a Ubuntu base image, the base image only needs to be stored once on the host.
+These are some of the key elements of Containers.
+Curated list of learning resources for Containers.
+How to get hands-on experience of Containers.
+Most of the tooling around containers is centered around what particular container orchestrator or development environment is being utilized. Usage of the tools differ greatly depending on the role of the user. As an operator the toolkit includes both IaaS and managing the platform to perform upgrades, user management and peripheral services such as storage and ingress load balancers.
+While many popular platforms today are based on Kubernetes, the tooling has nuances. Upstream Kubernetes uses kubectl
, Red Hat OpenShift uses the OpenShift CLI, oc
. With other platforms such as Rancher, nearly all management can be done through a web UI.
These are some of the key elements of Container Tooling.
+docker
and kubectl
CLIs are the two most dominant for low level management.docker-compose
, kompose
and helm
.rke
for Rancher and gkectl
for GKE On-Prem.aws
and gcloud
.Curated list of learning resources for Container Tooling.
+How to get hands-on experience of Container Tooling.
+kubeconfig
file.kubectl get nodes
on your local machine.Pod
using the container image built in previous exercise.Due to the ephemeral nature of a container, storage is predominantly served from the host the container is running on and is dependent on which container runtime is being used where data is stored. In the case of Docker, the overlay filesystems are under /var/lib/docker
. If a certain path inside the container need to persist between upgrades, restarts on a different host or any other operation that will lose the locality of the data, the mount point needs to be replaced with a "bind" mount from the host.
There are also container runtime technologies that are designed to persist the entire container, effectively treating the container more like a long-lived Virtual Machine. Examples are Canonical LXD, WeaveWorks Footloose and HPE BlueData. This is particularly important for applications that rely on its projected node info to remain static throughout its entire lifecycle.
+We can then begin to categorize containers into three main categories based on their lifecycle vs persistence needs.
+Some modern Software-defined Storage solutions are offered to run alongside applications in a distributed fashion. Effectively enforcing multi-way replicas for reliability and eat into CPU and memory resources of the IaaS bill. This also introduces the dilemma of effectively locking the data into the container orchestrator and its compute nodes. Although it's convenient for developers to become self-entitled storage administrators.
+To stay in control of the data and remain mobile, storing data outside of the container orchestrator is preferable. Many container orchestrators provide plugins for external storage, some are built-in and some are supplied and supported by the storage vendor. Public clouds provide storage drivers for their IaaS storage services directly to the container orchestrator. This is widely popular pattern we're also seeing in BYO IaaS solutions such as VMware vSphere.
+These are some of the key elements of Container Storage.
+Curated list of learning resources for Container Storage.
+How to get hands-on experience of Container Storage.
+kubectl get pv -o yaml
and match the Persistent Volume against the IaaS block volumes.There are many interpretations of what DevOps "is". A bird's eye view is that there are people, processes and tools that come together to drive business outcomes through value streams. There are many core principles that could ultimately drive the outcome and no cookie cutter solution for any given organization. Breaking down problems into small pieces and creating safe systems to work in and eliminate toil are some of those principles.
+Agile development and lean manufacturing are both predecessors and role models for driving DevOps principles.
+These are some of the key elements of DevOps.
+Curated list of learning resources for DevOps.
+How to get hands-on experience of DevOps.
+The tools in DevOps are centered around the processes and value streams that support the business. Said tools also promote visibility, openness and collaboration. Inherently following security patterns, audit trails and safety. No one person should be able to misuse one tool to cause major disturbance in a value stream without quick remediation plans.
+Many times CI/CD (Continuous Integration, Continuous Delivery and/or Deployment) is considered synonymous with DevOps. That is both right and wrong. If the value stream inherently contains software, yes.
+These are some of the key elements of DevOps Tooling.
+Curated list of learning resources for DevOps Tooling.
+How to get hands-on experience of DevOps Tooling.
+The common denominator across these platforms is the observability and the ability to limit scope of controls through Role-based Access Control (RBAC). Ensuring the tasks are well-defined, automated, scoped and safe to operate.
+There aren't any particular storage paradigms (file/block/object) that are associated with DevOps. It's the implementation of the application and how it consumes storage that we vaguely may associate with DevOps. It's more of the practice that the right security controls are in place and whomever needs storage resource are fully self serviced. Human or machine.
+These are some of the key elements of DevOps Storage.
+Curated list of learning resources for DevOps Storage.
+How to get hands-on experience of DevOps Storage.
+If you have any suggestions or comments, head over to GitHub and file a PR or leave an issue.
+ +This tutorial was presented at KubeCon North America 2020 Virtual. Content is relevant up to Kubernetes 1.19.
+These are the Asciinema cast files used in the demo. If there's something in the demo you're particularly interested in, copy the text content from these embedded players.
+Source files for the Asciinema cast files and slide deck is available on GitHub.
+ +The recorded CSI workshop available in the Video Gallery is now available on-demand, as a self-paced and interactive workshop hosted by the HPE Developer Community.
+All you have to do is register here.
+A string of e-mails will setup your own sandbox to perform the exercises at your own pace. The environment will have a time restriction before resetting but you should have plenty of time to complete the workshop exercises.
+During the workshop, you'll discover the basics of the Container Storage Interface (CSI) on Kubernetes. Here is a glance at what is being covered:
+StorageClasses
PersistentVolumeClaim
to a workloadPersistentVolumeClaim
Pod
VolumeSnapshot
from a VolumeSnapshotClass
PersistentVolumeClaims
from an existing claim or a VolumeSnapshot
Pod
PersistentVolumeClaims
to leverage StorageClass
overridesReadWriteMany
access mode When completed, please fill out the survey and let us know how we did!
+Happy Hacking!
+ +The Storage Education team at HPE has put together an interactive learning path to introduce field engineers, architects and account executives to Docker and Kubernetes. The course material has an angle to help understand the role of storage in the world of containers. It's a great starting point if you're new to containers.
+Course 2-4 contains interactive labs in an immersive environment with downloadable lab guides that can be used outside of the lab environment.
+It's recommended to take the courses in order.
++ | Audience | +Course name | +Duration (estimated) | +
---|---|---|---|
1 | +AE and SA | +Containers and market opportunity | +20 minutes | +
2 | +AE and SA | +Introduction to containers | +30 minutes | +
3 | +Technical AE and SA | +Introduction to Docker | +45 minutes | +
4 | +Technical AE and SA | +Introduction to Kubernetes | +45 minutes | +
Important
+All courses require a HPE Passport account, either partner or employee.
+This is a free learning resource from HPE which walks you through various exercises to get you familiar with Kubernetes and provisioning Persistent storage using HPE Nimble Storage and HPE Primera storage systems. This guide is by no means a comprehensive overview of the capabilities of Kubernetes but rather a getting started guide for individuals who wants to learn how to use Kubernetes with persistent storage.
+
In Kubernetes, nodes within a cluster pool together their resources (memory and CPU) to distribute workloads. A cluster is comprised of control plane and worker nodes that allow you to run your containerized workloads.
+The Kubernetes control plane is responsible for maintaining the desired state of your cluster. When you interact with Kubernetes, such as by using the kubectl
command-line interface, you’re communicating with your cluster’s Kubernetes API services running on the control plane. Control plane refers to a collection of processes managing the cluster state.
Kubernetes runs your workload by placing containers into Pods
to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods
.
Programs running on Kubernetes are packaged as containers which can run on Linux or Windows. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
+A Pod
is the basic execution unit of a Kubernetes application–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod
encapsulates an application’s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.
Because programs running on your cluster aren’t guaranteed to run on a specific node, data can’t be saved to any arbitrary place in the file system. If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be.
+To store data permanently, Kubernetes uses a PersistentVolume
. Local, external storage via SAN arrays, or cloud drives can be attached to the cluster as a PersistentVolume
.
+Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Namespaces
. Namespaces
are intended for use in environments with many users spread across multiple teams, or projects. Namespaces
are a way to divide cluster resources between multiple users.
A Deployment
provides declarative updates for Pods
. You declare a desired state for your Pods
in your Deployment
and Kubernetes will manage it for you automatically.
A Kubernetes Service
object defines a policy for external clients to access an application within a cluster. By default, the container runtime uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for containers to communicate across nodes, there must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. Coordinating port allocations is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that Pods
can communicate with other Pods
, regardless of which host they land on. Kubernetes gives every Pod
its own cluster-private IP address, through a Kubernetes Service object, so you do not need to explicitly create links between Pods
or map container ports to host ports. This means that containers within a Pod
can all reach each other’s ports on localhost, and all Pods
in a cluster can see each other without NAT.
All of this information presented here is taken from the official documentation found on kubernetes.io/docs.
+The Kubernetes command-line tool, kubectl
, allows you to run commands against Kubernetes clusters. You can use kubectl
to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl
operations, see Overview of kubectl on kubernetes.io.
For more information on how to install and setup kubectl
on Linux, Windows or MacOS, see Install and Set Up kubectl on kubernetes.io.
Use the following syntax to run kubectl
commands from your terminal window:
kubectl [command] [TYPE] [NAME] [flags]
where command
, TYPE
, NAME
, and flags
are:
command
: Specifies the operation that you want to perform on one or more resources, for example create, get, describe, delete.
TYPE
: Specifies the resource type. Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms; for example, pod, pods and po all refer to the same resource type (see the example after this list).
NAME
: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods
.
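For instance, the singular, plural and abbreviated forms of the Pod resource type are interchangeable, so the following commands all produce the same output (pod1 is just a placeholder name):
kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1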
Get object example command:
+
kubectl get nodes
+kubectl get node <node_name>
+
+Describe object example command:
+
kubectl describe node <node_name>
+
+Create object example command
+
kubectl create -f <file_name or URL>
+
+The below YAML declarations are meant to be created with kubectl create
. Either copy the content to a file on the host where kubectl
is being executed, or copy & paste into the terminal, like this:
kubectl create -f- (press Enter)
+< paste the YAML >
+(CTRL-D for Linux) or (^D for Mac users)
+
+Kubernetes Cheat Sheet
+Find more available commands at Kubernetes Cheat Sheet on kubernetes.io.
+Let's run through some simple kubectl
commands to get familiar with your cluster.
First we need to open a terminal window; the following commands can be run from Windows, Linux or Mac. In this guide, we will be using the Windows Subsystem for Linux (WSL), which allows us to have a Linux terminal within Windows.
+To start a WSL terminal session, click the Ubuntu icon in the Windows taskbar.
+ +It will open a terminal window. We will be working within this terminal throughout this lab.
+ +In order to communicate with the Kubernetes cluster, kubectl
looks for a file named config in the $HOME/.kube
directory. You can specify other kubeconfig
files by setting the KUBECONFIG
environment variable or by setting the --kubeconfig
flag.
You will need to request the kubeconfig
file from your cluster administrator and copy the file to your local $HOME/.kube/
directory. You may need to create this directory.
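As a sketch, assuming the file you received from the administrator is named kubeconfig and sits in your current directory, copying it into place looks like this:
mkdir -p $HOME/.kube
cp kubeconfig $HOME/.kube/config
# alternatively, point kubectl at the file without copying it:
export KUBECONFIG=$PWD/kubeconfig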
Once you have the kubeconfig
file, you can view the config file:
kubectl config view
+
+Check that kubectl
and the config file are properly configured by getting the cluster state.
kubectl cluster-info
+
+If you see a URL response, kubectl
is correctly configured to access your cluster.
The output is similar to this:
+
$ kubectl cluster-info
+Kubernetes control plane is running at https://192.168.1.50:6443
+KubeDNS is running at https://192.168.1.50:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+
+To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+
+Now let's look at the nodes within our cluster.
+
kubectl get nodes
+
+You should see output similar to below. As you can see, each node has a role, either control-plane or worker (shown as <none>).
+
$ kubectl get nodes
+NAME STATUS ROLES AGE VERSION
+kube-group1 Ready control-plane,master 2d18h v1.21.5
+...
+
+You can list pods.
+
kubectl get pods
+
+Quiz
+Did you see any Pods
listed when you ran kubectl get pods
? Why?
If you don't see any Pods
listed, it is because there are no Pods
deployed within the "default" Namespace
. Now run, kubectl get pods --all-namespaces
. Does it look any different?
Pay attention to the first column, NAMESPACE
. In our case, we are working in the "default" Namespace
. Depending on the type of application and your user access level, applications can be deployed within one or more Namespaces
.
If you don't see the object (deployment, pod, services, etc) you are looking for, double-check the Namespace
it was deployed under and use the -n <namespace>
flag to view objects in other Namespaces
.
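For example, the system components you saw with --all-namespaces can be listed on their own by naming their Namespace explicitly:
kubectl get pods -n kube-system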
Once complete, type "Clear" to clear your terminal window.
+A Pod
is a collection of containers sharing a network and mount namespace and is the basic unit of deployment in Kubernetes. All containers in a Pod
are scheduled on the same node. In our first demo we will deploy a stateless application that has no persistent storage attached. Without persistent storage, any modifications done to the application will be lost if that application is stopped or deleted.
Here is a sample NGINX webserver deployment.
+
apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ run: nginx
+ name: first-nginx-pod
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ run: nginx-first-pod
+ template:
+ metadata:
+ labels:
+ run: nginx-first-pod
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+
+Open a WSL terminal session, if you don't have one open already.
+ +At the prompt, we will start by deploying the NGINX example above, by running:
+
kubectl create -f https://scod.hpedev.io/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml
+
+We can see the Deployment
was successfully created and the NGINX Pod
is running.
Note
+The Pod
names will be unique to your deployment.
$ kubectl get deployments.apps
+NAME READY UP-TO-DATE AVAILABLE AGE
+first-nginx-pod 1/1 1 1 38s
+
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+first-nginx-pod-8d7bb985-rrdv8 1/1 Running 0 10s
+
+Important
+In a Deployment
, a Pod
name is generated using the Deployment
name and then a randomized hash (i.e. first-nginx-pod-8d7bb985-kql7t
) to ensure that each Pod
has a unique name. During this lab exercise, make sure to reference the correct object names that are generated in each exercise.
We can inspect the Pod
further using the kubectl describe command.
Note
+You can use tab completion to help with Kubernetes commands and objects. Start typing the first few letters of the command or Kubernetes object (i.e Pod
) name and hit TAB and it should autofill the name.
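Tab completion may already be set up in the lab environment. If it isn't, a sketch for a bash shell (such as the WSL session used here) looks like this; see kubectl completion --help for other shells:
source <(kubectl completion bash)
# to make it permanent for future sessions:
echo 'source <(kubectl completion bash)' >> ~/.bashrc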
kubectl describe pod <pod_name>
+
+The output should be similar to this. Note, the Pod
name will be unique to your deployment.
Name: first-nginx-pod-8d7bb985-rrdv8
+Namespace: default
+Priority: 0
+Node: kube-group1/10.90.200.11
+Start Time: Mon, 01 Nov 2021 13:37:59 -0500
+Labels: pod-template-hash=8d7bb985
+ run=nginx-first-pod
+Annotations: cni.projectcalico.org/podIP: 192.168.162.9/32
+ cni.projectcalico.org/podIPs: 192.168.162.9/32
+Status: Running
+IP: 192.168.162.9
+IPs:
+ IP: 192.168.162.9
+Controlled By: ReplicaSet/first-nginx-pod-8d7bb985
+Containers:
+ nginx:
+ Container ID: docker://3610d71c054e6b8fdfffbf436511fda048731a456b9460ae768ae7db6e831398
+ Image: nginx
+ Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
+ Port: <none>
+ Host Port: <none>
+ State: Running
+ Started: Mon, 01 Nov 2021 13:38:06 -0500
+ Ready: True
+ Restart Count: 0
+ Environment: <none>
+ Mounts:
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7sbw (ro)
+Conditions:
+ Type Status
+ Initialized True
+ Ready True
+ ContainersReady True
+ PodScheduled True
+Volumes:
+ kube-api-access-w7sbw:
+ Type: Projected (a volume that contains injected data from multiple sources)
+ TokenExpirationSeconds: 3607
+ ConfigMapName: kube-root-ca.crt
+ ConfigMapOptional: <nil>
+ DownwardAPI: true
+QoS Class: BestEffort
+Node-Selectors: <none>
+Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
+ node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Scheduled 5m14s default-scheduler Successfully assigned default/first-nginx-pod-8d7bb985-rrdv8 to kube-group1
+ Normal Pulling 5m13s kubelet Pulling image "nginx"
+ Normal Pulled 5m7s kubelet Successfully pulled image "nginx" in 5.95086952s
+ Normal Created 5m7s kubelet Created container nginx
+ Normal Started 5m7s kubelet Started container nginx
+
+Looking under the "Events" section is a great place to start when checking for issues or errors during Pod
creation.
At this stage, the NGINX application is only accessible from within the cluster. Use kubectl port-forward
to expose the Pod
temporarily outside of the cluster to your workstation.
kubectl port-forward <pod_name> 80:80
+
+The output should be similar to this:
+
kubectl port-forward first-nginx-pod-8d7bb985-rrdv8 80:80
+Forwarding from 127.0.0.1:80 -> 80
+Forwarding from [::1]:80 -> 80
+
+Note
+If you have something already running locally on port 80, modify the port-forward
to an unused port (i.e. 5000:80). Port-forward
is meant for temporarily exposing an application outside of a Kubernetes cluster. For a more permanent solution, look into Ingress Controllers.
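For example, forwarding to local port 5000 instead would look like the following; if you do so, substitute http://127.0.0.1:5000 for http://127.0.0.1 in the steps below:
kubectl port-forward <pod_name> 5000:80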
Finally, open a browser and go to http://127.0.0.1 and you should see the following.
+ +You have successfully deployed your first Kubernetes pod.
+With the Pod
running, you can log in and explore the Pod
.
To do this, open a second terminal, by clicking on the WSL terminal icon again. The first terminal should have kubectl port-forward
still running.
Run:
+
kubectl exec -it <pod_name> -- /bin/bash
+
+You can explore the Pod
and run various commands. Some commands might not be available within the Pod
. Why would that be?
root@first-nginx-pod-8d7bb985-rrdv8:/# df -h
+Filesystem Size Used Avail Use% Mounted on
+overlay 46G 8.0G 38G 18% /
+tmpfs 64M 0 64M 0% /dev
+tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
+/dev/mapper/centos-root 46G 8.0G 38G 18% /etc/hosts
+shm 64M 0 64M 0% /dev/shm
+tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount
+tmpfs 1.9G 0 1.9G 0% /proc/acpi
+tmpfs 1.9G 0 1.9G 0% /proc/scsi
+tmpfs 1.9G 0 1.9G 0% /sys/firmware
+
+
+While inside the container, you can also modify the webpage.
+
echo "<h1>Hello from the HPE Storage Hands on Labs</h1>" > /usr/share/nginx/html/index.html
+
+Now switch back over to the browser and refresh the page (http://127.0.0.1), you should see the updated changes to the webpage.
+Once ready, switch back over to your second terminal, type exit to log out of the NGINX container and close that terminal. Back in your original terminal, use Ctrl+C to exit the port-forwarding.
+Since this is a stateless application, we will now demonstrate what happens if the NGINX Pod
is lost.
To do this, simply delete the Pod
.
kubectl delete pod <pod_name>
+
+Now run kubectl get pods
to see that a new NGINX Pod
has been created.
Let's use kubectl port-forward
again to look at the NGINX application.
kubectl port-forward <new_pod_name> 80:80
+
+Back in your browser, refresh the page (http://127.0.0.1) and you should see that the webpage has reverted to its default state.
+ +Back in the terminal, use Ctrl+C to exit the port-forwarding and once ready, type clear to refresh your terminal.
+The NGINX application has reverted to its default state because we didn't store the modifications we made in a location that would persist beyond the life of the container. There are many applications where persistence isn't critical (e.g. Google uses stateless containers for your browser web searches) as they perform computations that are either stored in an external database or passed to subsequent processes.
+As mission-critical workloads move into Kubernetes, the need for stateful containers is increasingly important. The following exercises will go through how to provision persistent storage to applications using the HPE CSI Driver for Kubernetes backed by HPE Primera or Nimble Storage.
+To get started with the deployment of the HPE CSI Driver for Kubernetes, the CSI driver is deployed using industry standard means, either a Helm chart or an Operator. For this tutorial, we will be using Helm to deploy the HPE CSI Driver for Kubernetes.
+The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub. There, you will find the configuration and installation instructions for the chart.
+Note
+Helm is the package manager for Kubernetes. Software is delivered in a format called a "chart". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG
file.
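If Helm isn't already available in your environment, one common install method from the Helm documentation is the installer script, shown here as a sketch (always review scripts before running them):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version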
Open a WSL terminal session, if you don't have one open already.
+ +To install the chart with the name my-hpe-csi-driver
, add the HPE CSI Driver for Kubernetes Helm repo.
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
+helm repo update
+
+Install the latest chart.
+
kubectl create ns hpe-storage
+helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage
+
+Wait a few minutes as the deployment finishes.
+Verify that everything is up and running correctly by listing out the Pods
.
kubectl get pods -n hpe-storage
+
+The output is similar to this:
+Note
+The Pod
names will be unique to your deployment.
$ kubectl get pods -n hpe-storage
+NAME READY STATUS RESTARTS AGE
+pod/hpe-csi-controller-6f9b8c6f7b-n7zcr 9/9 Running 0 7m41s
+pod/hpe-csi-node-npp59 2/2 Running 0 7m41s
+pod/nimble-csp-5f6cc8c744-rxgfk 1/1 Running 0 7m41s
+pod/primera3par-csp-7f78f498d5-4vq9r 1/1 Running 0 7m41s
+
+If all of the components are in the Running state, then the HPE CSI Driver for Kubernetes and the corresponding Container Storage Providers (CSP) for HPE Alletra, Primera and Nimble Storage have been successfully deployed.
+Important
+With the HPE CSI Driver deployed, the rest of this guide is designed to demonstrate the usage of the CSI driver with HPE Primera or Nimble Storage. You will need to choose which storage system (HPE Primera or Nimble Storage) to use for the rest of the exercises. While the HPE CSI Driver supports connectivity to multiple backends, configuring multiple backends is outside of the scope of this lab guide.
+Once the HPE CSI Driver has been deployed, a Secret
needs to be created in order for the CSI driver to communicate with the HPE Primera or Nimble Storage system. This Secret
, which contains the storage system IP and credentials, is used by the CSI driver sidecars within the StorageClass
to authenticate to a specific backend for various CSI operations. For more information, see adding an HPE storage backend.
Here is an example Secret
.
apiVersion: v1
+kind: Secret
+metadata:
+ name: custom-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: primera3par-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.2
+ username: <user>
+ password: <password>
+
+Download and modify, using the text editor of your choice, the Secret
file with the backend IP per your environment.
wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/nimble-secret.yaml
+
wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/primera-secret.yaml
+
Save the file and create the Secret
within the cluster.
kubectl create -f nimble-secret.yaml
+
kubectl create -f primera-secret.yaml
+
The Secret
should now be available in the "hpe-storage" Namespace
:
kubectl -n hpe-storage get secret/custom-secret
+NAME TYPE DATA AGE
+custom-secret Opaque 5 1m
+
+If you made a mistake when creating the Secret
, simply delete the object (kubectl -n hpe-storage delete secret/custom-secret
) and repeat the steps above.
Now we will create a StorageClass
that will be used in the following exercises. A StorageClass
(SC) specifies which storage provisioner to use (in our case the HPE CSI Driver) and the volume parameters (such as Protection Templates, Performance Policies, CPG, etc.) for the volumes that we want to create. These parameters can be used to differentiate between storage levels and usages.
This concept is sometimes called “profiles” in other storage systems. A cluster can have multiple StorageClasses
allowing users to create storage claims tailored for their specific application requirements.
We will start by creating a StorageClass
called hpe-standard. We will use the custom-secret created in the previous step and specify the hpe-storage namespace
where the CSI driver was deployed.
Here are example StorageClasses
for HPE Primera and Nimble Storage systems and some of the available volume parameters that can be defined. See the respective CSP for more elaborate examples.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-standard
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: custom-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: custom-secret
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: custom-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-expand-secret-name: custom-secret
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ performancePolicy: "SQL Server"
+ description: "Volume from HPE CSI Driver"
+ accessProtocol: iscsi
+ limitIops: "76800"
+ allowOverrides: description,limitIops,performancePolicy
+allowVolumeExpansion: true
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-standard
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: custom-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: custom-secret
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: custom-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-expand-secret-name: custom-secret
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ cpg: SSD_r6
+ provisioningType: tpvv
+ accessProtocol: iscsi
+ allowOverrides: cpg,provisioningType
+allowVolumeExpansion: true
+
Create the StorageClass
within the cluster.
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/nimble-storageclass.yaml
+
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/primera-storageclass.yaml
+
We can verify the StorageClass
is now available.
kubectl get sc
+NAME PROVISIONER AGE
+hpe-standard (default) csi.hpe.com 2m
+
+Note
+You can create multiple StorageClasses
to match the storage requirements of your applications. We set hpe-standard StorageClass
as default using the annotation storageclass.kubernetes.io/is-default-class: "true"
. There can only be one default StorageClass
per cluster, for any additional StorageClasses
set this to false. To learn more about configuring a default StorageClass
, see Default StorageClass on kubernetes.io.
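If you ever need to change which StorageClass is the default, the annotation can be updated in place with kubectl patch, as described on kubernetes.io. A sketch using the hpe-standard StorageClass from this guide:
kubectl patch storageclass hpe-standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'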
With the HPE CSI Driver for Kubernetes deployed and a StorageClass
available, we can now provision persistent volumes.
A PersistentVolumeClaim
(PVC) is a request for storage by a user. Claims can request storage of a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany). The accessMode
will be dependent on the type of storage system and the application requirements. Block storage, like HPE Primera and Nimble Storage, provisions volumes using ReadWriteOnce
access mode where the volume can only be mounted to a single node within the cluster at a time. Any applications running on that node can access that volume. Applications deployed across multiple nodes within a cluster that require shared access (ReadWriteMany
) to the same PersistentVolume
will need to use NFS or a distributed storage system such as MapR, Gluster or Ceph.
A PersistentVolume
(PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes
.
With a StorageClass
available, we can request an amount of storage for our application using a PersistentVolumeClaim
. Here is a sample PVC
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 50Gi
+
+Note
+We don't have a StorageClass
(SC) explicitly defined within this PVC
therefore it will use the default StorageClass
. You can use spec.storageClassName
to override the default SC
with another one available to the cluster.
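For illustration, an explicit override in the claim would look like the sketch below; the StorageClass name here is only a placeholder:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-other-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: <another_storageclass_name>   # overrides the default StorageClass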
Create the PersistentVolumeClaim
.
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-pvc.yaml
+
+We can see the my-pvc PersistentVolumeClaim
was created.
kubectl get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+my-pvc Bound pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 50Gi RWO hpe-standard 72m
+
+Note
+The Persistent Volume
name is randomly generated by Kubernetes. For consistent naming of your stateful applications, check out the StatefulSet deployment model. These names can be used to track the volume back to the storage system. It is important to note that HPE Primera has a 30-character limit on volume names, therefore the name will be truncated. For example: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
will be truncated to pvc-70d5caf8-7558-40e6-a8b7-77d
on an HPE Primera system.
We can inspect the PVC
further for additional information including event logs for troubleshooting.
kubectl describe pvc my-pvc
+
+Check the Events section to see if there were any issues during creation.
+The output is similar to this:
+
$ kubectl describe pvc my-pvc
+Name: my-pvc
+Namespace: default
+StorageClass: hpe-standard
+Status: Bound
+Volume: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
+Labels: <none>
+Annotations: pv.kubernetes.io/bind-completed: yes
+ pv.kubernetes.io/bound-by-controller: yes
+ volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
+Finalizers: [kubernetes.io/pvc-protection]
+Capacity: 50Gi
+Access Modes: RWO
+VolumeMode: Filesystem
+Mounted By: <none>
+Events: <none>
+
+We can also inspect the PersistentVolume
(PV) in a similar manner. Note, the volume name will be unique to your deployment.
kubectl describe pv <volume_name>
+
+The output is similar to this:
+
$ kubectl describe pv pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
+Name: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
+Labels: <none>
+Annotations: pv.kubernetes.io/provisioned-by: csi.hpe.com
+Finalizers: [kubernetes.io/pv-protection]
+StorageClass: hpe-standard
+Status: Bound
+Claim: default/my-pvc
+Reclaim Policy: Delete
+Access Modes: RWO
+VolumeMode: Filesystem
+Capacity: 50Gi
+Node Affinity: <none>
+Message:
+Source:
+ Type: CSI (a Container Storage Interface (CSI) volume source)
+ Driver: csi.hpe.com
+ VolumeHandle: 063aba3d50ec99d866000000000000000000000001
+ ReadOnly: false
+ VolumeAttributes: accessProtocol=iscsi
+ allowOverrides=description,limitIops,performancePolicy
+ description=Volume from HPE CSI Driver
+ fsType=xfs
+ limitIops=76800
+ performancePolicy=SQL Server
+ storage.kubernetes.io/csiProvisionerIdentity=1583271972595-8081-csi.hpe.com
+ volumeAccessMode=mount
+Events: <none>
+
+With the describe
command, you can see the volume parameters used to create this volume. In this case, Nimble Storage parameters performancePolicy
, limitIops
, etc.
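Because allowOverrides in the hpe-standard StorageClass lists these parameters, they can also be adjusted on a per-claim basis. The sketch below assumes the csi.hpe.com/ annotation prefix used by the HPE CSI Driver for overrides; check the CSP documentation on SCOD for the authoritative syntax before relying on it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-override
  annotations:
    csi.hpe.com/description: "Volume with a per-claim description"   # assumed override annotation
    csi.hpe.com/limitIops: "16000"                                   # assumed override annotation
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi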
Important
+If the PVC is stuck in Pending state, double check the Secret
and Namespace
are correct within the StorageClass
(sc) and that the volume parameters are valid. If necessary delete the object (sc or pvc) (kubectl delete <object_type> <object_name>
) and repeat the steps above.
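A few commands that are helpful when a claim stays in Pending; the object names below assume the examples used in this guide:
kubectl describe pvc my-pvc
kubectl describe sc hpe-standard
kubectl -n hpe-storage get secret
kubectl get events --sort-by=.metadata.creationTimestamp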
Let's recap what we have learned.
+Created a StorageClass for our volumes.
+Created a PVC that created a volume from the StorageClass.
+Learned how to inspect the StorageClass, PVC and PV.
+Learned how to delete a StorageClass, PVC or PV.
At this point, we have validated the deployment of the HPE CSI Driver and are ready to deploy an application with persistent storage.
+To begin, we will create two PersistentVolumes
for the WordPress application using the default hpe-standard StorageClass
we created previously. If you don't have the hpe-standard StorageClass
available, please refer to the StorageClass section for instructions on creating a StorageClass
.
Create a PersistentVolumeClaim
for the MariaDB database that will used by WordPress.
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml
+
+Next let's make another volume for the WordPress application.
+
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-wordpress-pvc.yaml
+
+Now verify the PersistentVolumes
were created successfully. The output should be similar to the following. Note, the volume names will be unique to your deployment.
kubectl get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+data-my-wordpress-mariadb-0 Bound pvc-1abdb7d7-374e-45b3-8fa1-534131ec7ec6 50Gi RWO hpe-standard 1m
+my-wordpress Bound pvc-ff6dc8fd-2b14-4726-b608-be8b27485603 20Gi RWO hpe-standard 1m
+
+The above output means that the HPE CSI Driver has successfully provisioned two volumes based upon the default hpe-standard StorageClass
. At this stage, the volumes are not attached (exported) to any nodes yet. They will only be attached (exported) to a node once a scheduled workload requests the PersistentVolumeClaims
.
We will use Helm again to deploy WordPress using the PersistentVolumeClaims
we just created. When WordPress is deployed, the volumes will be attached, formatted and mounted.
The first step is to add the WordPress chart to Helm. The output should be similar to below.
+
helm repo add bitnami https://charts.bitnami.com/bitnami
+helm repo update
+helm search repo bitnami/wordpress
+NAME CHART VERSION APP VERSION DESCRIPTION
+bitnami/wordpress 11.0.13 5.7.2 Web publishing platform for building blogs and ...
+
+Next, deploy WordPress by setting the deployment parameter persistence.existingClaim=<existing_PVC>
to the PVC
my-wordpress created in the previous step.
helm install my-wordpress bitnami/wordpress --version 9.2.1 --set service.type=ClusterIP,wordpressUsername=admin,wordpressPassword=adminpassword,mariadb.mariadbRootPassword=secretpassword,persistence.existingClaim=my-wordpress,allowEmptyPassword=false
+
+Check to verify that WordPress and MariaDB were deployed and are in the Running state. This may take a few minutes.
+Note
+The Pod
names will be unique to your deployment.
kubectl get pods
+NAME READY STATUS RESTARTS AGE
+my-wordpress-69b7976c85-9mfjv 1/1 Running 0 2m
+my-wordpress-mariadb-0 1/1 Running 0 2m
+
+Finally take a look at the WordPress site. Again, we can use kubectl port-forward
to access the WordPress application and verify everything is working correctly.
kubectl port-forward svc/my-wordpress 80:80
+
+Note
+If you have something already running locally on port 80, modify the port-forward to an unused port (i.e. 5000:80).
+Open a browser on your workstation to http://127.0.0.1 and you should see, "Hello World!".
+Access the admin console at: http://127.0.0.1/admin using the "admin/adminpassword" we specified when deploying the Helm Chart.
+ +Create a new blog post so you have data stored in the WordPress application.
+Happy Blogging!
+Once ready, hit "Ctrl+C" in your terminal to stop the port-forward
.
Verify the WordPress application is using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims
.
kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}'
+
+With the WordPress application using persistent storage for the database and the application data, in the event of a crash of the WordPress application, the PVC
will be remounted to the new Pod
.
Delete the WordPress Pod
.
kubectl delete pod <my-wordpress_pod_name>
+
+For example.
+
$ kubectl delete pod my-wordpress-69b7976c85-9mfjv
+pod "my-wordpress-69b7976c85-9mfjv" deleted
+
+Now run kubectl get pods
and you should see the WordPress Pod
recreating itself with a new name. This may take a few minutes.
Output should be similar to the following as the WordPress container is recreating.
+
$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+my-wordpress-mariadb-0 1/1 Running 1 10m
+my-wordpress-7856df6756-m2nw8 0/1 ContainerCreating 0 33s
+
+Once the WordPress Pod
is in Ready
state, we can verify that the WordPress application is still using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims
.
kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}'
+
+And finally, run kubectl port-forward
again to see the changes made to the WordPress application survived deleting the application Pod
.
kubectl port-forward svc/my-wordpress 80:80
+
+Open a browser on your workstation to http://127.0.0.1 and you should see your WordPress site running.
+This completes the tutorial of using the HPE CSI Driver with HPE storage to create Persistent Volumes within Kubernetes. This is just the beginning of the capabilities of the HPE Storage integrations within Kubernetes. We recommend exploring SCOD further and the specific HPE Storage CSP (Nimble, Primera, and 3PAR) to learn more.
+It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass
and Secret
API objects to represent an environment with multiple systems.
To view the current Secrets
in the hpe-storage Namespace
(assuming default names):
kubectl -n hpe-storage get secret
+NAME TYPE DATA AGE
+custom-secret Opaque 5 10m
+
+This Secret
is used by the CSI sidecars in the StorageClass
to authenticate to a specific backend for CSI operations. In order to add a new Secret
or manage access to multiple backends, additional Secrets
will need to be created per backend.
In the previous steps, if you connected to Nimble Storage, create a new Secret
for the Primera array or if you connected to Primera array above then create a Secret
for the Nimble Storage.
Secret Requirements
+Secret
name must be unique.
Using your text editor of choice, create a new Secret
, specify the name, Namespace
, backend username, backend password and the backend IP address to be used by the CSP and save it as gold-secret.yaml
.
apiVersion: v1
+kind: Secret
+metadata:
+ name: gold-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: nimble-csp-svc
+ servicePort: "8080"
+ backend: 192.168.1.2
+ username: admin
+ password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+ name: gold-secret
+ namespace: hpe-storage
+stringData:
+ serviceName: primera3par-csp-svc
+ servicePort: "8080"
+ backend: 10.10.0.2
+ username: 3paradm
+ password: 3pardata
+
Create the Secret
using kubectl
:
kubectl create -f gold-secret.yaml
+
+You should now see the Secret
in the "hpe-storage" Namespace
:
kubectl -n hpe-storage get secret
+NAME TYPE DATA AGE
+gold-secret Opaque 5 1m
+custom-secret Opaque 5 15m
+
+To use the new gold-secret
, create a new StorageClass
using the Secret
and the necessary StorageClass
parameters. Please see the requirements section of the respective CSP.
We will start by creating a StorageClass
called hpe-gold. We will use the gold-secret
created in the previous step and specify the hpe-storage Namespace
where the CSI driver was deployed.
Note
+Please note that at most one StorageClass
can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim
without storageClassName
explicitly specified cannot be created.
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-gold
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: gold-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: gold-secret
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: gold-secret
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: gold-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-expand-secret-name: gold-secret
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ performancePolicy: "SQL Server"
+ description: "Volume from HPE CSI Driver"
+ accessProtocol: iscsi
+ limitIops: "76800"
+ allowOverrides: description,limitIops,performancePolicy
+allowVolumeExpansion: true
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-gold
+provisioner: csi.hpe.com
+parameters:
+ csi.storage.k8s.io/fstype: xfs
+ csi.storage.k8s.io/provisioner-secret-name: gold-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: gold-secret
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-stage-secret-name: gold-secret
+ csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: gold-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-expand-secret-name: gold-secret
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ cpg: SSD_r6
+ provisioningType: tpvv
+ accessProtocol: iscsi
+ allowOverrides: cpg,provisioningType
+allowVolumeExpansion: true
+
We can verify the StorageClass is now available.
+
kubectl get sc
+NAME PROVISIONER AGE
+hpe-standard (default) csi.hpe.com 15m
+hpe-gold csi.hpe.com 1m
+
+Note
+Don't forget to call out the StorageClass
explicitly when creating PVCs
from non-default StorageClasses
.
With a StorageClass
available, we can request an amount of storage for our application using a PersistentVolumeClaim
. Using your text editor of choice, create a new PVC
and save it as gold-pvc.yaml
.
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: gold-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 50Gi
+ storageClassName: hpe-gold
+
+Create the PersistentVolumeClaim
.
kubectl create -f gold-pvc.yaml
+
+We can see the gold-pvc PersistentVolumeClaim
was created.
kubectl get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+my-pvc Bound pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 50Gi RWO hpe-standard 72m
+gold-pvc Bound pvc-7a74d656-0b14-42a2-9437-e374a5d3bd68 50Gi RWO hpe-gold 1m
+
+You can see that the new PVC
is using the new StorageClass
which is backed by the additional storage backend, giving you added flexibility for your containerized workloads and allowing you to match the persistent storage requirements of each application.
As others will be using this lab at a later time, we can clean up the objects that were deployed during this lab exercise.
+Note
+These steps may take a few minutes to complete. Please be patient and don't cancel out the process.
+Remove WordPress & NGINX deployments.
+
helm uninstall my-wordpress && kubectl delete all --all
+
+Delete the PersistentVolumeClaims
and related objects.
kubectl delete pvc --all && kubectl delete sc --all
+
+Remove the HPE CSI Driver for Kubernetes.
+
helm uninstall my-hpe-csi-driver -n hpe-storage
+
+It takes a couple of minutes to clean up the objects from the CSI driver. You can check the status:
+
watch kubectl get all -n hpe-storage
+
+Once everything is removed, press Ctrl+C to exit and finally you can remove the Namespace
.
kubectl delete ns hpe-storage
+
+
+ Welcome to the Video Gallery. This is a collection of current YouTube assets that pertain to supported HPE primary storage container technologies.
+How to manage the components that surround driver deployment.
+This tutorial talks about managing multiple Secrets
and StorageClasses
to distinguish different backends.
Each CSP has its own features and perks, learn about the different platforms right here.
+This tutorial showcases a few of the HPE Primera specific features with the HPE CSI Driver.
+ + + +Learn how to configure HPE Primera Peer Persistence using the HPE CSI Driver.
+ + + +This tutorial showcases a few of the HPE Nimble Storage specific features with the HPE CSI Driver.
+ + + +This lightboard video discusses the advantages of using HPE Alletra 5000/6000 or Nimble Storage to handle multitenancy for storage resources between Kubernetes clusters.
+ + + +The provisioning topic covers provisioning of storage resources on container orchestrators, such as volumes, snapshots and clones.
+Learn the fundamentals of storage provisioning on Kubernetes.
+ + + +An interactive CSI workshop from HPE Discover Virtual Experience. It explains key provisioning concepts, including CSI snapshots and clones, ephemeral inline volumes, raw block volumes and how to use the NFS server provisioner.
+ + + +Learn how to use CSI snapshots and clones with the HPE CSI Driver.
+ + + +Explore how to take advantage of the HPE CSI Driver's exclusive features VolumeGroups
and SnapshotGroups
.
Learn how to use volume mutations to adapt stateful workloads with the HPE CSI Driver.
+ + + +Joint solutions with our revered ecosystem partners.
+This tutorial explains how to deploy the necessary components for Kasten K10 and how to perform snapshots and restores using the HPE CSI Driver.
+ + + +This tutorial goes through the steps of installing the HPE CSI Operator on Red Hat OpenShift.
+ + + +This tutorial shows how to use HPE storage with VMware Tanzu as well as how to configure the vSphere CSI Driver for Kubernetes clusters running on VMware leveraging HPE storage.
+ + + +Tutorials and demos showcasing monitoring and troubleshooting.
+Learn how to stand up a Prometheus and Grafana environment on Kubernetes and start using the HPE Storage Array Exporter for Prometheus and the HPE CSI Info Metrics Provider for Prometheus to provide Monitoring and Alerting.
+ + + +This lightboard video discusses how to lift and transform applications running on traditional infrastructure over to Kubernetes using the HPE CSI Driver. Learn the details on what makes this possible in this HPE Developer blog post.
+ + + +A curated playlist of content related to HPE primary storage and containers is available on YouTube.
+ +We welcome and encourage community contributions to SCOD.
+The best way to directly collaborate with the project contributors is through GitHub: https://github.com/hpe-storage/scod
+Before you start writing, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your contribution, and help you find out if someone else is working on the same thing.
+Note that all submissions from all contributors get reviewed. After a pull request is made, other contributors will offer feedback. If the patch passes review, a maintainer will accept it with a comment. When a pull request fails review, the author is expected to update the pull request to address the issue until it passes review and the pull request merges successfully.
+At least one review from a maintainer is required for all patches.
+All contributions must include acceptance of the DCO:
+++Developer Certificate of Origin Version 1.1
+Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 660 York Street, Suite 102, San Francisco, CA 94110 USA
+Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
+Developer's Certificate of Origin 1.1
+By making a contribution to this project, I certify that:
+(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
+(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
+(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
+(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
+
To accept the DCO, simply add this line to each commit message with your
+name and email address (git commit -s
will do this for you):
Signed-off-by: Jane Example <jane@example.com>
+
+For legal reasons, no anonymous or pseudonymous contributions are accepted.
+We encourage and support contributions from the community. No fix is too small. We strive to process all pull requests as soon as possible and with constructive feedback. If your pull request is not accepted at first, please try again after addressing the feedback you received.
+To make a pull request you will need a GitHub account. For help, see GitHub's documentation on forking and pull requests.
+ +
Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+ Attributions for third party components.
+ +
HPE CSI Info Metrics Provider for Prometheus
+Copyright 2020-2024 Hewlett Packard Enterprise Development LP
+
+This product contains the following third party components:
+
+Google Cloud Go
+cloud.google.com/go
+Licensed under the Apache-2.0 license
+
+mtl
+dmitri.shuralyov.com/gpu/mtl
+Licensed under the BSD-3-Clause license
+
+go-autorest
+github.com/Azure/go-autorest
+Licensed under the Apache-2.0 license
+
+Tom's Obvious Minimal Language
+github.com/BurntSushi/toml
+Licensed under the MIT license
+
+X Go Binding
+github.com/BurntSushi/xgb
+Licensed under the BSD-3-Clause license
+
+Gzip Handler
+github.com/NYTimes/gziphandler
+Licensed under the Apache-2.0 license
+
+Purell
+github.com/PuerkitoBio/purell
+Licensed under the BSD-3-Clause license
+
+urlesc
+github.com/PuerkitoBio/urlesc
+Licensed under the BSD-3-Clause license
+
+text/template
+github.com/alecthomas/template
+Licensed under the BSD-3-Clause license
+
+Units
+github.com/alecthomas/units
+Licensed under the MIT license
+
+govalidator
+github.com/asaskevich/govalidator
+Licensed under the MIT license
+
+Perks for Go
+github.com/beorn7/perks
+Licensed under the MIT license
+
+OpenCensus Proto
+github.com/census-instrumentation/opencensus-proto
+Licensed under the Apache-2.0 license
+
+xxhash
+github.com/cespare/xxhash/v2
+Licensed under the MIT license
+
+Logex
+github.com/chzyer/logex
+Licensed under the MIT license
+
+ReadLine
+github.com/chzyer/readline
+Licensed under the MIT license
+
+test
+github.com/chzyer/test
+Licensed under the MIT license
+
+misspell
+github.com/client9/misspell
+Licensed under the MIT license
+
+pty
+github.com/creack/pty
+Licensed under the MIT license
+
+go-spew
+github.com/davecgh/go-spew
+Licensed under the ISC license
+
+docopt-go
+github.com/docopt/docopt-go
+Licensed under the MIT license
+
+goproxy
+github.com/elazarl/goproxy
+Licensed under the BSD-3-Clause license
+
+go-restful
+github.com/emicklei/go-restful
+Licensed under the MIT license
+
+control-plane
+github.com/envoyproxy/go-control-plane
+Licensed under the Apache-2.0 license
+
+protoc-gen-validate (PGV)
+github.com/envoyproxy/protoc-gen-validate
+Licensed under the Apache-2.0 license
+
+JSON-Patch
+github.com/evanphx/json-patch
+Licensed under the BSD-3-Clause license
+
+jwt-go
+github.com/form3tech-oss/jwt-go
+Licensed under the MIT license
+
+File system notifications for Go
+github.com/fsnotify/fsnotify
+Licensed under the BSD-3-Clause license
+
+GLFW for Go
+github.com/go-gl/glfw
+Licensed under the BSD-3-Clause license
+
+Go kit
+github.com/go-kit/kit
+Licensed under the MIT license
+
+package log
+github.com/go-kit/log
+Licensed under the MIT license
+
+logfmt
+github.com/go-logfmt/logfmt
+Licensed under the MIT license
+
+logr, A minimal logging API for Go
+github.com/go-logr/logr
+Licensed under the Apache-2.0 license
+
+gojsonpointer
+github.com/go-openapi/jsonpointer
+Licensed under the Apache-2.0 license
+
+gojsonreference
+github.com/go-openapi/jsonreference
+Licensed under the Apache-2.0 license
+
+OAI object model
+github.com/go-openapi/spec
+Licensed under the Apache-2.0 license
+
+Swag
+github.com/go-openapi/swag
+Licensed under the Apache-2.0 license
+
+stack
+github.com/go-stack/stack
+Licensed under the MIT license
+
+Protocol Buffers for Go with Gadgets
+github.com/gogo/protobuf
+Licensed under the BSD-3-Clause license
+
+glog
+github.com/golang/glog
+Licensed under the Apache-2.0 license
+
+groupcache
+github.com/golang/groupcache
+Licensed under the Apache-2.0 license
+
+gomock
+github.com/golang/mock
+Licensed under the Apache-2.0 license
+
+Go support for Protocol Buffers
+github.com/golang/protobuf
+Licensed under the BSD-3-Clause license
+
+BTree implementation for Go
+github.com/google/btree
+Licensed under the Apache-2.0 license
+
+Package for equality of Go values
+github.com/google/go-cmp
+Licensed under the BSD-3-Clause license
+
+gofuzz
+github.com/google/gofuzz
+Licensed under the Apache-2.0 license
+
+Martian Proxy
+github.com/google/martian
+Licensed under the Apache-2.0 license
+
+pprof
+github.com/google/pprof
+Licensed under the Apache-2.0 license
+
+renameio
+github.com/google/renameio
+Licensed under the Apache-2.0 license
+
+uuid
+github.com/google/uuid
+Licensed under the BSD-3-Clause license
+
+Google API Extensions for Go
+github.com/googleapis/gax-go/v2
+Licensed under the BSD-3-Clause license
+
+gnostic
+github.com/googleapis/gnostic
+Licensed under the Apache-2.0 license
+
+Gorilla WebSocket
+github.com/gorilla/websocket
+Licensed under the BSD-2-Clause license
+
+httpcache
+github.com/gregjones/httpcache
+Licensed under the MIT license
+
+golang-lru
+github.com/hashicorp/golang-lru
+Licensed under the MPL-2.0 license
+
+Go package for tail-ing files
+github.com/hpcloud/tail
+Licensed under the MIT license
+
+demangle
+github.com/ianlancetaylor/demangle
+Licensed under the BSD-3-Clause license
+
+Mergo
+github.com/imdario/mergo
+Licensed under the BSD-3-Clause license
+
+Backoff
+github.com/jpillora/backoff
+Licensed under the MIT license
+
+json-iterator
+github.com/json-iterator/go
+Licensed under the MIT license
+
+go-junit-report
+github.com/jstemmer/go-junit-report
+Licensed under the MIT license
+
+errcheck
+github.com/kisielk/errcheck
+Licensed under the MIT license
+
+gotool
+github.com/kisielk/gotool
+Licensed under the MIT license
+
+Windows Terminal Sequences
+github.com/konsorten/go-windows-terminal-sequences
+Licensed under the MIT license
+
+logfmt
+github.com/kr/logfmt
+Licensed under the MIT license
+
+pretty
+github.com/kr/pretty
+Licensed under the MIT license
+
+pty
+github.com/kr/pty
+Licensed under the MIT license
+
+text
+github.com/kr/text
+Licensed under the MIT license
+
+easyjson
+github.com/mailru/easyjson
+Licensed under the MIT license
+
+golang protobuf extensions
+github.com/matttproud/golang_protobuf_extensions
+Licensed under the Apache-2.0 license with the notice:
+Copyright 2012 Matt T. Proud (matt.proud@gmail.com)
+
+mapstructure
+github.com/mitchellh/mapstructure
+Licensed under the MIT license
+
+SpdyStream
+github.com/moby/spdystream
+Licensed under the Apache-2.0 license with the notice:
+SpdyStream
+Copyright 2014-2021 Docker Inc.
+
+This product includes software developed at
+Docker Inc. (https://www.docker.com/).
+
+concurrent
+github.com/modern-go/concurrent
+Licensed under the Apache-2.0 license
+
+reflect2
+github.com/modern-go/reflect2
+Licensed under the Apache-2.0 license
+
+goautoneg
+github.com/munnerz/goautoneg
+Licensed under the BSD-3-Clause license
+
+Go tracing and monitoring (Prometheus) for net.Conn
+github.com/mwitkow/go-conntrack
+Licensed under the Apache-2.0 license
+
+Data Flow Rate Control
+github.com/mxk/go-flowrate
+Licensed under the BSD-3-Clause license
+
+pretty
+github.com/niemeyer/pretty
+Licensed under the MIT license
+
+Ginkgo
+github.com/onsi/ginkgo
+Licensed under the MIT license
+
+Gomega
+github.com/onsi/gomega
+Licensed under the MIT license
+
+diskv
+github.com/peterbourgon/diskv
+Licensed under the MIT license
+
+errors
+github.com/pkg/errors
+Licensed under the BSD-2-Clause license
+
+go-difflib
+github.com/pmezard/go-difflib
+Licensed under the BSD-3-Clause license
+
+Prometheus Go client library
+github.com/prometheus/client_golang
+Licensed under the Apache-2.0 license with the following notice:
+Prometheus instrumentation library for Go applications
+Copyright 2012-2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+
+The following components are included in this product:
+
+perks - a fork of https://github.com/bmizerany/perks
+https://github.com/beorn7/perks
+Copyright 2013-2015 Blake Mizerany, Björn Rabenstein
+See https://github.com/beorn7/perks/blob/master/README.md for license details.
+
+Go support for Protocol Buffers - Google's data interchange format
+http://github.com/golang/protobuf/
+Copyright 2010 The Go Authors
+See source code for license details.
+
+Support for streaming Protocol Buffer messages for the Go language (golang).
+https://github.com/matttproud/golang_protobuf_extensions
+Copyright 2013 Matt T. Proud
+Licensed under the Apache License, Version 2.0
+
+Prometheus Go client model
+github.com/prometheus/client_model
+Licensed under the Apache-2.0 license with the following notice:
+Data model artifacts for Prometheus.
+Copyright 2012-2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+Common
+github.com/prometheus/common
+Licensed under the Apache-2.0 license with the following notice:
+Common libraries shared by Prometheus Go components.
+Copyright 2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+procfs
+github.com/prometheus/procfs
+Licensed under the Apache-2.0 license with the following notice:
+procfs provides functions to retrieve system, kernel and process
+metrics from the pseudo-filesystem proc.
+
+Copyright 2014-2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+go-internal
+github.com/rogpeppe/go-internal
+Licensed under the BSD-3-Clause license
+
+Logrus
+github.com/sirupsen/logrus
+Licensed under the MIT license
+
+AFERO
+github.com/spf13/afero
+Licensed under the Apache-2.0 license
+
+pflag
+github.com/spf13/pflag
+Licensed under the BSD-3-Clause license
+
+Objx
+github.com/stretchr/objx
+Licensed under the MIT license
+
+Testify
+github.com/stretchr/testify
+Licensed under the MIT license
+
+goldmark
+github.com/yuin/goldmark
+Licensed under the MIT license
+
+OpenCensus Libraries for Go
+go.opencensus.io
+Licensed under the Apache-2.0 license
+
+Go Cryptography
+golang.org/x/crypto
+Licensed under the BSD-3-Clause license
+
+exp
+golang.org/x/exp
+Licensed under the BSD-3-Clause license
+
+Go Images
+golang.org/x/image
+Licensed under the BSD-3-Clause license
+
+lint
+golang.org/x/lint
+Licensed under the BSD-3-Clause license
+
+Go support for Mobile devices
+golang.org/x/mobile
+Licensed under the BSD-3-Clause license
+
+mod
+golang.org/x/mod
+Licensed under the BSD-3-Clause license
+
+Go Networking
+golang.org/x/net
+Licensed under the BSD-3-Clause license
+
+OAuth2 for Go
+golang.org/x/oauth2
+Licensed under the BSD-3-Clause license
+
+Go Sync
+golang.org/x/sync
+Licensed under the BSD-3-Clause license
+
+sys
+golang.org/x/sys
+Licensed under the BSD-3-Clause license
+
+Go terminal/console support
+golang.org/x/term
+Licensed under the BSD-3-Clause license
+
+Go Text
+golang.org/x/text
+Licensed under the BSD-3-Clause license
+
+Go Time
+golang.org/x/time
+Licensed under the BSD-3-Clause license
+
+Go Tools
+golang.org/x/tools
+Licensed under the BSD-3-Clause license
+
+xerrors
+golang.org/x/xerrors
+Licensed under the BSD-3-Clause license
+
+Google APIs Client Library for Go
+google.golang.org/api
+Licensed under the BSD-3-Clause license
+
+Go App Engine packages
+google.golang.org/appengine
+Licensed under the Apache-2.0 license
+
+Go generated proto packages
+google.golang.org/genproto
+Licensed under the Apache-2.0 license
+
+gRPC-Go
+google.golang.org/grpc
+Licensed under the Apache-2.0 license with the following notice:
+Copyright 2014 gRPC authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Go support for Protocol Buffers
+google.golang.org/protobuf
+Licensed under the BSD-3-Clause license
+
+Kingpin - A Go (golang) command line and flag parser
+gopkg.in/alecthomas/kingpin.v2
+Licensed under the MIT license
+
+check
+gopkg.in/check.v1
+Licensed under the BSD-3-Clause license
+
+errgo
+gopkg.in/errgo.v2
+Licensed under the BSD-3-Clause license
+
+File system notifications for Go
+gopkg.in/fsnotify.v1
+Licensed under the BSD-3-Clause license
+
+inf
+gopkg.in/inf.v0
+Licensed under the BSD-3-Clause license
+
+lumberjack
+gopkg.in/natefinch/lumberjack.v2
+Licensed under the MIT license
+
+tomb
+gopkg.in/tomb.v1
+Licensed under the BSD-3-Clause license
+
+gopkg.in/yaml.v2
+Licensed under the Apache-2.0 license with the following notice:
+Copyright 2011-2016 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+YAML support for the Go language
+gopkg.in/yaml.v3
+Licensed under the Apache-2.0 license with the following notice:
+Copyright 2011-2016 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+go-tools
+honnef.co/go/tools
+Licensed under the MIT license
+
+api
+k8s.io/api
+Licensed under the Apache-2.0 license
+
+apimachinery
+k8s.io/apimachinery
+Licensed under the Apache-2.0 license
+
+client-go
+k8s.io/client-go
+Licensed under the Apache-2.0 license
+
+gengo
+k8s.io/gengo
+Licensed under the Apache-2.0 license
+
+klog
+k8s.io/klog/v2
+Licensed under the Apache-2.0 license
+
+kube-openapi
+k8s.io/kube-openapi
+Licensed under the Apache-2.0 license
+
+utils
+k8s.io/utils
+Licensed under the Apache-2.0 license
+
+binaryregexp
+rsc.io/binaryregexp
+Licensed under the BSD-3-Clause license
+
+quote
+rsc.io/quote/v3
+Licensed under the BSD-3-Clause license
+
+sampler
+rsc.io/sampler
+Licensed under the BSD-3-Clause license
+
+Structured Merge and Diff
+sigs.k8s.io/structured-merge-diff/v4
+Licensed under the Apache-2.0 license
+
+YAML marshaling and unmarshaling support for Go
+sigs.k8s.io/yaml
+Licensed under the MIT license
+
+
+Licenses:
+MIT License
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+Apache License
+Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+"License" shall mean the terms and conditions for use, reproduction, and
+distribution as defined by Sections 1 through 9 of this document.
+
+"Licensor" shall mean the copyright owner or entity authorized by the copyright
+owner that is granting the License.
+
+"Legal Entity" shall mean the union of the acting entity and all other entities
+that control, are controlled by, or are under common control with that entity.
+For the purposes of this definition, "control" means (i) the power, direct or
+indirect, to cause the direction or management of such entity, whether by
+contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
+outstanding shares, or (iii) beneficial ownership of such entity.
+
+"You" (or "Your") shall mean an individual or Legal Entity exercising
+permissions granted by this License.
+
+"Source" form shall mean the preferred form for making modifications, including
+but not limited to software source code, documentation source, and
+configuration files.
+
+"Object" form shall mean any form resulting from mechanical transformation or
+translation of a Source form, including but not limited to compiled object
+code, generated documentation, and conversions to other media types.
+
+"Work" shall mean the work of authorship, whether in Source or Object form,
+made available under the License, as indicated by a copyright notice that is
+included in or attached to the work (an example is provided in the Appendix
+below).
+
+"Derivative Works" shall mean any work, whether in Source or Object form, that
+is based on (or derived from) the Work and for which the editorial revisions,
+annotations, elaborations, or other modifications represent, as a whole, an
+original work of authorship. For the purposes of this License, Derivative Works
+shall not include works that remain separable from, or merely link (or bind by
+name) to the interfaces of, the Work and Derivative Works thereof.
+
+"Contribution" shall mean any work of authorship, including the original
+version of the Work and any modifications or additions to that Work or
+Derivative Works thereof, that is intentionally submitted to Licensor for
+inclusion in the Work by the copyright owner or by an individual or Legal
+Entity authorized to submit on behalf of the copyright owner. For the purposes
+of this definition, "submitted" means any form of electronic, verbal, or
+written communication sent to the Licensor or its representatives, including
+but not limited to communication on electronic mailing lists, source code
+control systems, and issue tracking systems that are managed by, or on behalf
+of, the Licensor for the purpose of discussing and improving the Work, but
+excluding communication that is conspicuously marked or otherwise designated in
+writing by the copyright owner as "Not a Contribution."
+
+"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
+of whom a Contribution has been received by Licensor and subsequently
+incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of this
+License, each Contributor hereby grants to You a perpetual, worldwide,
+non-exclusive, no-charge, royalty-free, irrevocable copyright license to
+reproduce, prepare Derivative Works of, publicly display, publicly perform,
+sublicense, and distribute the Work and such Derivative Works in Source or
+Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of this
+License, each Contributor hereby grants to You a perpetual, worldwide,
+non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this
+section) patent license to make, have made, use, offer to sell, sell, import,
+and otherwise transfer the Work, where such license applies only to those
+patent claims licensable by such Contributor that are necessarily infringed by
+their Contribution(s) alone or by combination of their Contribution(s) with the
+Work to which such Contribution(s) was submitted. If You institute patent
+litigation against any entity (including a cross-claim or counterclaim in a
+lawsuit) alleging that the Work or a Contribution incorporated within the Work
+constitutes direct or contributory patent infringement, then any patent
+licenses granted to You under this License for that Work shall terminate as of
+the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the Work or
+Derivative Works thereof in any medium, with or without modifications, and in
+Source or Object form, provided that You meet the following conditions:
+
+You must give any other recipients of the Work or Derivative Works a copy of
+this License; and
+You must cause any modified files to carry prominent notices stating that You
+changed the files; and
+You must retain, in the Source form of any Derivative Works that You
+distribute, all copyright, patent, trademark, and attribution notices from the
+Source form of the Work, excluding those notices that do not pertain to any
+part of the Derivative Works; and
+If the Work includes a "NOTICE" text file as part of its distribution, then any
+Derivative Works that You distribute must include a readable copy of the
+attribution notices contained within such NOTICE file, excluding those notices
+that do not pertain to any part of the Derivative Works, in at least one of the
+following places: within a NOTICE text file distributed as part of the
+Derivative Works; within the Source form or documentation, if provided along
+with the Derivative Works; or, within a display generated by the Derivative
+Works, if and wherever such third-party notices normally appear. The contents
+of the NOTICE file are for informational purposes only and do not modify the
+License. You may add Your own attribution notices within Derivative Works that
+You distribute, alongside or as an addendum to the NOTICE text from the Work,
+provided that such additional attribution notices cannot be construed as
+modifying the License.
+
+You may add Your own copyright statement to Your modifications and may provide
+additional or different license terms and conditions for use, reproduction, or
+distribution of Your modifications, or for any such Derivative Works as a
+whole, provided Your use, reproduction, and distribution of the Work otherwise
+complies with the conditions stated in this License.
+5. Submission of Contributions. Unless You explicitly state otherwise, any
+Contribution intentionally submitted for inclusion in the Work by You to the
+Licensor shall be under the terms and conditions of this License, without any
+additional terms or conditions. Notwithstanding the above, nothing herein shall
+supersede or modify the terms of any separate license agreement you may have
+executed with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade names,
+trademarks, service marks, or product names of the Licensor, except as required
+for reasonable and customary use in describing the origin of the Work and
+reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or agreed to in
+writing, Licensor provides the Work (and each Contributor provides its
+Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied, including, without limitation, any warranties
+or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+PARTICULAR PURPOSE. You are solely responsible for determining the
+appropriateness of using or redistributing the Work and assume any risks
+associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory, whether in
+tort (including negligence), contract, or otherwise, unless required by
+applicable law (such as deliberate and grossly negligent acts) or agreed to in
+writing, shall any Contributor be liable to You for damages, including any
+direct, indirect, special, incidental, or consequential damages of any
+character arising as a result of this License or out of the use or inability to
+use the Work (including but not limited to damages for loss of goodwill, work
+stoppage, computer failure or malfunction, or any and all other commercial
+damages or losses), even if such Contributor has been advised of the
+possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing the Work or
+Derivative Works thereof, You may choose to offer, and charge a fee for,
+acceptance of support, warranty, indemnity, or other liability obligations
+and/or rights consistent with this License. However, in accepting such
+obligations, You may act only on Your own behalf and on Your sole
+responsibility, not on behalf of any other Contributor, and only if You agree
+to indemnify, defend, and hold each Contributor harmless for any liability
+incurred by, or claims asserted against, such Contributor by reason of your
+accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+BSD-3-Clause License
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation
+and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its contributors
+may be used to endorse or promote products derived from this software without
+specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+BSD-2-Clause License
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation
+and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ISC License
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
+REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
+FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
+INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
+OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
+PERFORMANCE OF THIS SOFTWARE.
+
+Mozilla Public License, version 2.0
+1. Definitions
+
+1.1. "Contributor"
+
+ means each individual or legal entity that creates, contributes to the
+ creation of, or owns Covered Software.
+
+1.2. "Contributor Version"
+
+ means the combination of the Contributions of others (if any) used by a
+ Contributor and that particular Contributor's Contribution.
+
+1.3. "Contribution"
+
+ means Covered Software of a particular Contributor.
+
+1.4. "Covered Software"
+
+ means Source Code Form to which the initial Contributor has attached the
+ notice in Exhibit A, the Executable Form of such Source Code Form, and
+ Modifications of such Source Code Form, in each case including portions
+ thereof.
+
+1.5. "Incompatible With Secondary Licenses"
+ means
+
+ a. that the initial Contributor has attached the notice described in
+ Exhibit B to the Covered Software; or
+
+ b. that the Covered Software was made available under the terms of
+ version 1.1 or earlier of the License, but not also under the terms of
+ a Secondary License.
+
+1.6. "Executable Form"
+
+ means any form of the work other than Source Code Form.
+
+1.7. "Larger Work"
+
+ means a work that combines Covered Software with other material, in a
+ separate file or files, that is not Covered Software.
+
+1.8. "License"
+
+ means this document.
+
+1.9. "Licensable"
+
+ means having the right to grant, to the maximum extent possible, whether
+ at the time of the initial grant or subsequently, any and all of the
+ rights conveyed by this License.
+
+1.10. "Modifications"
+
+ means any of the following:
+
+ a. any file in Source Code Form that results from an addition to,
+ deletion from, or modification of the contents of Covered Software; or
+
+ b. any new file in Source Code Form that contains any Covered Software.
+
+1.11. "Patent Claims" of a Contributor
+
+ means any patent claim(s), including without limitation, method,
+ process, and apparatus claims, in any patent Licensable by such
+ Contributor that would be infringed, but for the grant of the License,
+ by the making, using, selling, offering for sale, having made, import,
+ or transfer of either its Contributions or its Contributor Version.
+
+1.12. "Secondary License"
+
+ means either the GNU General Public License, Version 2.0, the GNU Lesser
+ General Public License, Version 2.1, the GNU Affero General Public
+ License, Version 3.0, or any later versions of those licenses.
+
+1.13. "Source Code Form"
+
+ means the form of the work preferred for making modifications.
+
+1.14. "You" (or "Your")
+
+ means an individual or a legal entity exercising rights under this
+ License. For legal entities, "You" includes any entity that controls, is
+ controlled by, or is under common control with You. For purposes of this
+ definition, "control" means (a) the power, direct or indirect, to cause
+ the direction or management of such entity, whether by contract or
+ otherwise, or (b) ownership of more than fifty percent (50%) of the
+ outstanding shares or beneficial ownership of such entity.
+
+
+2. License Grants and Conditions
+
+2.1. Grants
+
+ Each Contributor hereby grants You a world-wide, royalty-free,
+ non-exclusive license:
+
+ a. under intellectual property rights (other than patent or trademark)
+ Licensable by such Contributor to use, reproduce, make available,
+ modify, display, perform, distribute, and otherwise exploit its
+ Contributions, either on an unmodified basis, with Modifications, or
+ as part of a Larger Work; and
+
+ b. under Patent Claims of such Contributor to make, use, sell, offer for
+ sale, have made, import, and otherwise transfer either its
+ Contributions or its Contributor Version.
+
+2.2. Effective Date
+
+ The licenses granted in Section 2.1 with respect to any Contribution
+ become effective for each Contribution on the date the Contributor first
+ distributes such Contribution.
+
+2.3. Limitations on Grant Scope
+
+ The licenses granted in this Section 2 are the only rights granted under
+ this License. No additional rights or licenses will be implied from the
+ distribution or licensing of Covered Software under this License.
+ Notwithstanding Section 2.1(b) above, no patent license is granted by a
+ Contributor:
+
+ a. for any code that a Contributor has removed from Covered Software; or
+
+ b. for infringements caused by: (i) Your and any other third party's
+ modifications of Covered Software, or (ii) the combination of its
+ Contributions with other software (except as part of its Contributor
+ Version); or
+
+ c. under Patent Claims infringed by Covered Software in the absence of
+ its Contributions.
+
+ This License does not grant any rights in the trademarks, service marks,
+ or logos of any Contributor (except as may be necessary to comply with
+ the notice requirements in Section 3.4).
+
+2.4. Subsequent Licenses
+
+ No Contributor makes additional grants as a result of Your choice to
+ distribute the Covered Software under a subsequent version of this
+ License (see Section 10.2) or under the terms of a Secondary License (if
+ permitted under the terms of Section 3.3).
+
+2.5. Representation
+
+ Each Contributor represents that the Contributor believes its
+ Contributions are its original creation(s) or it has sufficient rights to
+ grant the rights to its Contributions conveyed by this License.
+
+2.6. Fair Use
+
+ This License is not intended to limit any rights You have under
+ applicable copyright doctrines of fair use, fair dealing, or other
+ equivalents.
+
+2.7. Conditions
+
+ Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
+ Section 2.1.
+
+
+3. Responsibilities
+
+3.1. Distribution of Source Form
+
+ All distribution of Covered Software in Source Code Form, including any
+ Modifications that You create or to which You contribute, must be under
+ the terms of this License. You must inform recipients that the Source
+ Code Form of the Covered Software is governed by the terms of this
+ License, and how they can obtain a copy of this License. You may not
+ attempt to alter or restrict the recipients' rights in the Source Code
+ Form.
+
+3.2. Distribution of Executable Form
+
+ If You distribute Covered Software in Executable Form then:
+
+ a. such Covered Software must also be made available in Source Code Form,
+ as described in Section 3.1, and You must inform recipients of the
+ Executable Form how they can obtain a copy of such Source Code Form by
+ reasonable means in a timely manner, at a charge no more than the cost
+ of distribution to the recipient; and
+
+ b. You may distribute such Executable Form under the terms of this
+ License, or sublicense it under different terms, provided that the
+ license for the Executable Form does not attempt to limit or alter the
+ recipients' rights in the Source Code Form under this License.
+
+3.3. Distribution of a Larger Work
+
+ You may create and distribute a Larger Work under terms of Your choice,
+ provided that You also comply with the requirements of this License for
+ the Covered Software. If the Larger Work is a combination of Covered
+ Software with a work governed by one or more Secondary Licenses, and the
+ Covered Software is not Incompatible With Secondary Licenses, this
+ License permits You to additionally distribute such Covered Software
+ under the terms of such Secondary License(s), so that the recipient of
+ the Larger Work may, at their option, further distribute the Covered
+ Software under the terms of either this License or such Secondary
+ License(s).
+
+3.4. Notices
+
+ You may not remove or alter the substance of any license notices
+ (including copyright notices, patent notices, disclaimers of warranty, or
+ limitations of liability) contained within the Source Code Form of the
+ Covered Software, except that You may alter any license notices to the
+ extent required to remedy known factual inaccuracies.
+
+3.5. Application of Additional Terms
+
+ You may choose to offer, and to charge a fee for, warranty, support,
+ indemnity or liability obligations to one or more recipients of Covered
+ Software. However, You may do so only on Your own behalf, and not on
+ behalf of any Contributor. You must make it absolutely clear that any
+ such warranty, support, indemnity, or liability obligation is offered by
+ You alone, and You hereby agree to indemnify every Contributor for any
+ liability incurred by such Contributor as a result of warranty, support,
+ indemnity or liability terms You offer. You may include additional
+ disclaimers of warranty and limitations of liability specific to any
+ jurisdiction.
+
+4. Inability to Comply Due to Statute or Regulation
+
+ If it is impossible for You to comply with any of the terms of this License
+ with respect to some or all of the Covered Software due to statute,
+ judicial order, or regulation then You must: (a) comply with the terms of
+ this License to the maximum extent possible; and (b) describe the
+ limitations and the code they affect. Such description must be placed in a
+ text file included with all distributions of the Covered Software under
+ this License. Except to the extent prohibited by statute or regulation,
+ such description must be sufficiently detailed for a recipient of ordinary
+ skill to be able to understand it.
+
+5. Termination
+
+5.1. The rights granted under this License will terminate automatically if You
+ fail to comply with any of its terms. However, if You become compliant,
+ then the rights granted under this License from a particular Contributor
+ are reinstated (a) provisionally, unless and until such Contributor
+ explicitly and finally terminates Your grants, and (b) on an ongoing
+ basis, if such Contributor fails to notify You of the non-compliance by
+ some reasonable means prior to 60 days after You have come back into
+ compliance. Moreover, Your grants from a particular Contributor are
+ reinstated on an ongoing basis if such Contributor notifies You of the
+ non-compliance by some reasonable means, this is the first time You have
+ received notice of non-compliance with this License from such
+ Contributor, and You become compliant prior to 30 days after Your receipt
+ of the notice.
+
+5.2. If You initiate litigation against any entity by asserting a patent
+ infringement claim (excluding declaratory judgment actions,
+ counter-claims, and cross-claims) alleging that a Contributor Version
+ directly or indirectly infringes any patent, then the rights granted to
+ You by any and all Contributors for the Covered Software under Section
+ 2.1 of this License shall terminate.
+
+5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
+ license agreements (excluding distributors and resellers) which have been
+ validly granted by You or Your distributors under this License prior to
+ termination shall survive termination.
+
+6. Disclaimer of Warranty
+
+ Covered Software is provided under this License on an "as is" basis,
+ without warranty of any kind, either expressed, implied, or statutory,
+ including, without limitation, warranties that the Covered Software is free
+ of defects, merchantable, fit for a particular purpose or non-infringing.
+ The entire risk as to the quality and performance of the Covered Software
+ is with You. Should any Covered Software prove defective in any respect,
+ You (not any Contributor) assume the cost of any necessary servicing,
+ repair, or correction. This disclaimer of warranty constitutes an essential
+ part of this License. No use of any Covered Software is authorized under
+ this License except under this disclaimer.
+
+7. Limitation of Liability
+
+ Under no circumstances and under no legal theory, whether tort (including
+ negligence), contract, or otherwise, shall any Contributor, or anyone who
+ distributes Covered Software as permitted above, be liable to You for any
+ direct, indirect, special, incidental, or consequential damages of any
+ character including, without limitation, damages for lost profits, loss of
+ goodwill, work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses, even if such party shall have been
+ informed of the possibility of such damages. This limitation of liability
+ shall not apply to liability for death or personal injury resulting from
+ such party's negligence to the extent applicable law prohibits such
+ limitation. Some jurisdictions do not allow the exclusion or limitation of
+ incidental or consequential damages, so this exclusion and limitation may
+ not apply to You.
+
+8. Litigation
+
+ Any litigation relating to this License may be brought only in the courts
+ of a jurisdiction where the defendant maintains its principal place of
+ business and such litigation shall be governed by laws of that
+ jurisdiction, without reference to its conflict-of-law provisions. Nothing
+ in this Section shall prevent a party's ability to bring cross-claims or
+ counter-claims.
+
+9. Miscellaneous
+
+ This License represents the complete agreement concerning the subject
+ matter hereof. If any provision of this License is held to be
+ unenforceable, such provision shall be reformed only to the extent
+ necessary to make it enforceable. Any law or regulation which provides that
+ the language of a contract shall be construed against the drafter shall not
+ be used to construe this License against a Contributor.
+
+
+10. Versions of the License
+
+10.1. New Versions
+
+ Mozilla Foundation is the license steward. Except as provided in Section
+ 10.3, no one other than the license steward has the right to modify or
+ publish new versions of this License. Each version will be given a
+ distinguishing version number.
+
+10.2. Effect of New Versions
+
+ You may distribute the Covered Software under the terms of the version
+ of the License under which You originally received the Covered Software,
+ or under the terms of any subsequent version published by the license
+ steward.
+
+10.3. Modified Versions
+
+ If you create software not governed by this License, and you want to
+ create a new license for such software, you may create and use a
+ modified version of this License if you rename the license and remove
+ any references to the name of the license steward (except to note that
+ such modified license differs from this License).
+
+10.4. Distributing Source Code Form that is Incompatible With Secondary
+ Licenses If You choose to distribute Source Code Form that is
+ Incompatible With Secondary Licenses under the terms of this version of
+ the License, the notice described in Exhibit B of this License must be
+ attached.
+
+Exhibit A - Source Code Form License Notice
+
+ This Source Code Form is subject to the
+ terms of the Mozilla Public License, v.
+ 2.0. If a copy of the MPL was not
+ distributed with this file, You can
+ obtain one at
+ http://mozilla.org/MPL/2.0/.
+
+If it is not possible or desirable to put the notice in a particular file,
+then You may include the notice in a location (such as a LICENSE file in a
+relevant directory) where a recipient would be likely to look for such a
+notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - "Incompatible With Secondary Licenses" Notice
+
+ This Source Code Form is "Incompatible
+ With Secondary Licenses", as defined by
+ the Mozilla Public License, v. 2.0.
+
+
+ Software components documented on SCOD are generally covered with valid support contract on the HPE product being used. Terms and conditions may be found in the support contract. Please reach out to your official HPE representative or HPE partner for any uncertainties.
+The HPE CSI Info Metrics Provider for Prometheus is supported by HPE when used with HPE storage arrays on valid support contracts. Send email to support@nimblestorage.com to get started with any issue that requires assistance. Engage your HPE representative for other means to contact HPE Storage support directly.
+Each Container Storage Provider (CSP) uses its own official support routes to resolve any issues with the HPE CSI Driver for Kubernetes and the respective CSP.
+This software is supported by HPE when used with HPE Nimble Storage arrays on valid support contracts. Please send an email to support@nimblestorage.com to get started with any issue you might need assistance with. Engage with your HPE representative for other means on how to get in touch with Nimble support directly.
+The HPE Alletra 5000/6000 and Nimble Storage organization has made a commitment to our customers to exert reasonable effort in supporting any industry-standard configuration. We do not limit our customers to only what is explicitly listed on SPOCK or the Validated Configuration Matrix (VCM), which lists tested or verified configurations (what HPE Alletra 5000/6000 and Nimble Storage organization commonly refers to as "Qualified" Configurations). Essentially, this means that we will exert reasonable effort to support any industry-standard configuration up to the point where we find, or become aware of, an issue that requires some other course of action*.
+Example cases where support may not be possible include:
+* = In the event where other vendors need to be consulted, the HPE Nimble Support team will not disengage from the Support Action. HPE Nimble Support will continue to partner with the customer and other vendors to search for the correct answers to the issue.
+Limited to the HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage Container Storage Provider (CSP) only. Best effort support is available for the CSP for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage with All-inclusive Single or Multi-System software and an active HPE Pointnext support agreement. Since HPE Pointnext support for the CSP is best effort only, other support levels such as Warranty, Foundation Care, Proactive Care, Proactive Care Advanced and Datacenter Care do not apply. Best effort response times are based on local standard business days and working hours. If your location is outside the customary service zone, response time may be longer.
+HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Hardware Contract Type | +Phone Number | +
---|---|
Warranty and Foundation Care | +800-633-3600 | +
Proactive Care (PC) | +866-211-5211 | +
Datacenter Care (DC) | +888-751-2149 | +
Amazon Elastic Kubernetes Service (EKS) Anywhere allows customers to deploy Amazon EKS-D (Amazon Elastic Kubernetes Service Distro) on their private or non-AWS clouds. AWS users familiar with the ecosystem gain the ability to cross clouds and manage their Kubernetes estate in a single pane of glass.
+This documentation outlines the limitations and considerations when using the HPE CSI Driver for Kubernetes deployed on EKS-D.
+These limitations may be lifted or added to in future releases of either Amazon EKS Anywhere or the HPE CSI Driver.
+The default Linux distribution AWS favors is Bottlerocket OS, a container-optimized distribution. Due to its slim set of host libraries and binaries, Bottlerocket OS does not include the utilities necessary to support SAN storage. This limitation can be tracked in this GitHub issue.
+Note
+Any other OS that is supported by EKS-A and listed in the Compatibility and Support table is supported by the HPE CSI Driver.
+Only iSCSI is supported as the HPE CSI Driver does not support NPIV which is required for virtual Fibre Channel host bus adapters (HBA). More information on this limitation is elaborated on in the VMware section on SCOD.
+Because VSphereMachineConfig
VM templates only allow a single vNIC, no multipath redundancy is available to the host. Ensure network fault tolerance is available to the VM according to VMware best practices. Also keep in mind that the backend storage system needs to have a data interface in the same subnet, as the HPE CSI Driver will not try to discover targets over routed networks.
Tip
+The vSphere CSI Driver and HPE CSI Driver may co-exist in the same cluster but make sure there's only one default StorageClass
configured before creating PersistentVolumeClaims
. Please see the official Kubernetes documentation on how to change the default StorageClass
.
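+The commands below are a minimal sketch of how to inspect which StorageClass is currently the default and how to move the default to the HPE CSI Driver; the StorageClass names are examples only.
+kubectl get storageclass
+# Clear the default flag from the current default (example name) and set it on the HPE StorageClass
+kubectl patch storageclass vsphere-default -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
+kubectl patch storageclass hpe-standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'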
EKS-D is a CNCF compliant Kubernetes distribution and no special steps are required to deploy the HPE CSI Driver for Kubernetes. It's crucial to ensure that the compute nodes run a supported OS and the version of Kubernetes is supported by the HPE CSI Driver. Check the Compatibility and Support table for more information.
+Proceed to installation documentation:
+"Canonical Kubernetes is pure upstream and works on any cloud, from bare metal to public and edge. Deploy single node and multi-node clusters with Charmed Kubernetes and MicroK8s to support container orchestration, from testing to production. Both distributions bring the latest innovations from the Kubernetes community within a week of upstream release, allowing for time to learn, experiment and upskill."1
+
+HPE supports Ubuntu LTS releases along with recent upstream versions of Kubernetes for the HPE CSI Driver. As long as the CSI driver is installed on a supported host OS with a CNCF certified Kubernetes distribution, the solution is supported.
Both Charmed Kubernetes on private cloud and MicroK8s for edge have been field tested with the HPE CSI Driver for Kubernetes by HPE.
+Charmed Kubernetes is deployed with the Juju orchestration engine. Juju is capable of deploying and managing the full life-cycle of CNCF certified Kubernetes on various infrastructure providers, both private and public. Charmed Kubernetes utilizes Ubuntu LTS for the node OS.
+It's most relevant for HPE CSI Driver users when deployed on Canonical MAAS and VMware vSphere.
+Note
+Canonical MAAS has not been formally tested at this time to provide guidance but the solution is supported by HPE.
+No special considerations need to be taken when installing the HPE CSI Driver on Charmed Kubernetes. It's recommended to use the Helm chart.
+When the chart is installed, Add an HPE Storage Backend.
+MicroK8s is an opinionated lightweight fully certified CNCF Kubernetes distribution. It's easy to install and manage.
+Important
+Older versions of MicroK8s did not allow the CSI driver to run privileged Pods
and some tweaking may be needed in the controller-manager of MicroK8s. Please use a recent version of MicroK8s and Ubuntu LTS to avoid problems.
As MicroK8s is installed with confinement using snap
, the "kubeletRootDir" needs to be configured when installing the Helm chart or Operator. Advanced install with YAML is strongly discouraged.
Install the Helm chart:
+
microk8s helm install --create-namespace \
+ --set kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet \
+ -n hpe-storage my-hpe-csi-driver hpe-storage/hpe-csi-driver
+
+Go ahead and Add an HPE Storage Backend.
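+If the chart repository has not been added previously, or to verify that the driver Pods started correctly after the install above, the following commands may help. The repository URL is an assumption based on where the HPE CSI Driver chart is normally published; double-check it against the Helm chart documentation.
+microk8s helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
+microk8s helm repo update
+# The CSI controller and node Pods should reach the Running state
+microk8s kubectl get pods -n hpe-storage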
+Hint
+When installing the chart on other Linux distributions than Ubuntu LTS, the "kubeletRootDir" will most likely differ.
+HPE and Canonical have partnered to create integration guides with Charmed Kubernetes for the different storage backends.
+These integration guides are also available on ubuntu.com/engage.
+ +Hewlett Packard Enterprise and Cohesity offer an integrated approach to solve customer problems commonly found with containerized workloads. HPE Alletra—leveraging the HPE CSI Driver for Kubernetes—together with Cohesity's comprehensive data protection capabilities, empower organizations to overcome challenges associated with containerized environments.
+This guide will demonstrate the steps to integrate Cohesity into a Kubernetes cluster and how to configure a protection policy to back up an application Namespace
, a Kubernetes resource type. It proceeds to show that a backup can be restored to a new Namespace
, useful for providing a test/development environment without affecting the original application Namespace
.
External HPE Resources:
+Cohesity solutions are available through HPE Complete.
+
The HPE CSI Driver has been validated on Cohesity DataProtect v7.0u1. +Check that the HPE CSI Driver and Cohesity software versions are compatible with the Kubernetes version being used.
+This environment assumes the HPE CSI Driver for Kubernetes is deployed in the Kubernetes cluster, an Alletra storage backend has been configured, and a default StorageClass
has been defined.
Review Cohesity's "Plan and Prepare" documentation to accomplish the following:
+ServiceAccount
with cluster-admin
permissions.ServiceAccount
Note
+Cohesity only supports the backup of user-created application Namespaces
and does not support the backup of infrastructure Namespaces
such as kube-system
, etc.
Review Cohesity's "Register and Manage Kubernetes Cluster" documentation to integrate Cohesity into your Kubernetes cluster. Below is an example screenshot of the Register Kubernetes Source dialog:
After the integration wizard is submitted, see the Post-Registration task documentation to verify Velero and datamover pod availability.
+Note
+The latest versions of Kubernetes, although present in the Cohesity support matrix, may still require an override from Cohesity support.
+A Namespace
containing a WordPress application will be protected in this example. It contains a variety of Kubernetes resources and objects including:
PersistentVolumeClaim
, ConfigMap
, and Secret
Service
and ServiceAccount
Deployment
, ReplicaSet
and StatefulSet
Review the Protect Kubernetes Namespaces documentation from Cohesity. Create a new protection policy or use an available default policy. Additionally, see the Manage the Kubernetes Backup Configuration documentation to add/remove Namespaces
to a protection group, adjust Auto Protect settings, modify the Protection Policy, and trigger an on-demand run.
See the screenshot below for an example backup Run details view.
Review the Cohesity documentation for Recover Kubernetes Cluster. Cohesity notes, at time of writing, that granular-level recovery of Namespace
resource types is not supported. Consider the following when defining a recovery operation:
Namespace
. If a protection group is chosen, multiple Namespace
resources could be affected on recovery.Namespaces
will not be managed with Helm.StorageClass
as the backup’s source cluster.Note
+Protection groups and individual Namespace
resources appear in the same list. Available Namespaces
are denoted with the Kubernetes ship wheel icon.
For this example, a WordPress Namespace
backup will be restored to the source Kubernetes cluster but under a new Namespace
with a "debug-" prefix (see below). This application can run alongside and separately from the parent application.
After the recovery process is complete we can review and compare the associated objects between the two Namespaces
. In particular, names are similar but discrete PersistentVolumes
, IPs and Services
exist for each Namespace
.
$ diff <(kubectl get all,pvc -n wordpress-orig) <(kubectl get all,pvc -n debug-wordpress-orig)
+2,3c2,3
+- pod/wordpress-577cc47468-mbg2n 1/1 Running 0 171m
+- pod/wordpress-mariadb-0 1/1 Running 0 171m
+---
++ pod/wordpress-577cc47468-mbg2n 1/1 Running 0 57m
++ pod/wordpress-mariadb-0 1/1 Running 0 57m
+6,7c6,7
+- service/wordpress LoadBalancer 10.98.47.101 <pending> 80:30657/TCP,443:30290/TCP 171m
+- service/wordpress-mariadb ClusterIP 10.104.190.60 <none> 3306/TCP 171m
+---
++ service/wordpress LoadBalancer 10.109.247.83 <pending> 80:31425/TCP,443:31002/TCP 57m
++ service/wordpress-mariadb ClusterIP 10.101.77.139 <none> 3306/TCP 57m
+10c10
+- deployment.apps/wordpress 1/1 1 1 171m
+---
++ deployment.apps/wordpress 1/1 1 1 57m
+13c13
+- replicaset.apps/wordpress-577cc47468 1 1 1 171m
+---
++ replicaset.apps/wordpress-577cc47468 1 1 1 57m
+16c16
+- statefulset.apps/wordpress-mariadb 1/1 171m
+---
++ statefulset.apps/wordpress-mariadb 1/1 57m
+19,20c19,20
+- persistentvolumeclaim/data-wordpress-mariadb-0 Bound pvc-4b3222c3-f71f-427f-847b-d6d0c5e019a4 8Gi RWO a9060-std 171m
+- persistentvolumeclaim/wordpress Bound pvc-72158104-06ae-4547-9f80-d551abd7cda5 10Gi RWO a9060-std 171m
+---
++ persistentvolumeclaim/data-wordpress-mariadb-0 Bound pvc-306164a8-3334-48ac-bdee-273ac9a97403 8Gi RWO a9060-std 59m
++ persistentvolumeclaim/wordpress Bound pvc-17a55296-d0fb-44c2-968b-09c6ffc4abc9 10Gi RWO a9060-std 59m
+
+Note
+Above links are external to docs.cohesity.com and require a MyCohesity account.
+The Commvault intelligent data management platform provides Kubernetes-native protection, application mobility, and disaster recovery for containerized applications. Combined with Commvault Command Center™, Commvault provides enterprise IT operations and DevOps teams an easy-to-use, self-service dashboard for managing the protection of Kubernetes.
+HPE and Commvault collaborate continuously to deliver assets relevant to our joint customers.
+Learn more about HPE and Commvault's partnership here: https://www.commvault.com/supported-technologies/hpe.
+The HPE CSI Driver has been validated on Commvault Complete Backup and Recovery 2022E. +Check that the HPE CSI Driver and Commvault software versions are compatible with the Kubernetes version being used.
+This guide assumes you have administrative access to Commvault Command Center and administrator access to a Kubernetes cluster with kubectl
. Refer to the Creating a Service Account for Kubernetes Authentication documentation to define a serviceaccount
and clusterrolebinding
with cluster-admin
permissions.
The cluster needs to be running Kubernetes 1.22 or later and have the CSI snapshot CustomResourceDefinitions
(CRDs) and the CSI external snapshotter deployed. Follow the guides available on SCOD to:
Note
+The rest of this guide assumes the default VolumeSnapshotClass
and VolumeSnapshots
are functional within the cluster with a compatible Kubernetes snapshot API level between the CSI driver and Commvault.
To configure data protection for Kubernetes, follow the official Commvault documentation and ensure the version matches the software version in your environment. +As a summary, complete the following:
+To perform snapshot and restore operations through Commvault using the HPE CSI Driver for Kubernetes, please refer to the Commvault documentation.
+ +Note
+Above links are external to documentation.commvault.com.
+Tip
+The HPE CSI Driver for Kubernetes will work on any CNCF certified Kubernetes distribution. Verify compute node OS and Kubernetes version in the Compatibility and Support table.
+Kasten K10 by Veeam is a data management platform designed to run natively on Kubernetes to protect applications. K10 integrates seamlessly with the HPE CSI Driver for Kubernetes thanks to the native support for CSI VolumeSnapshots
and VolumeSnapshotClasses
.
HPE and Veeam have a long-standing alliance. Read about the extended partnership with Kasten in this blog post.
+Tip
+All the steps below are captured in a tutorial available on YouTube and in the SCOD Video Gallery.
+The cluster needs to be running Kubernetes 1.17 or later and have the CSI snapshot CustomResourceDefinitions
(CRDs) and the CSI snapshot-controller deployed. Follow the guides available on SCOD to:
Note
+The rest of this guide assumes a default VolumeSnapshotClass
and VolumeSnapshots
are functional on the cluster.
In order to allow K10 to perform snapshots and restores using the VolumeSnapshotClass
, it needs an annotation.
Assuming we have a default VolumeSnapshotClass
named "hpe-snapshot":
kubectl annotate volumesnapshotclass hpe-snapshot k10.kasten.io/is-snapshot-class=true
+
+Kasten K10 installs in its own namespace using a Helm chart. It also assumes there's a performant default StorageClass
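+To confirm the annotation was applied to the VolumeSnapshotClass, it can be read back with a jsonpath query; "hpe-snapshot" is the example class name used above.
+kubectl get volumesnapshotclass hpe-snapshot -o jsonpath="{.metadata.annotations['k10\.kasten\.io/is-snapshot-class']}"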
on the cluster to serve the various PersistentVolumeClaims
needed for the controllers.
Note
+Above links are external to docs.kasten.io.
+Kasten K10 provides the user with a graphical interface and dashboard to schedule and perform data management operations. There's also an API that can be manipulated with kubectl
using CRDs.
To perform snapshot and restore operations through Kasten K10 using the HPE CSI Driver for Kubernetes, please refer to the Kasten K10 documentation.
+ +Note
+Above links are external to docs.kasten.io.
+Mirantis Kubernetes Engine (MKE) is the successor of the Universal Control Plane part of Docker Enterprise Edition (Docker EE). The HPE CSI Driver for Kubernetes allows users to provision persistent storage for Kubernetes workloads running on MKE. See the note below on Docker Swarm for workloads deployed outside of Kubernetes.
+Mirantis and HPE perform testing and qualification as needed for either release of MKE or the HPE CSI Driver. If there are any deviations in the installation procedures, those will be documented here.
+MKE Version | +HPE CSI Driver | +Status | +Installation Notes | +
---|---|---|---|
3.7 | +2.4.0 | +Supported | +Helm chart notes | +
3.6 | +2.2.0 | +Supported | +Helm chart notes | +
3.4, 3.5 | +- | +Untested | +- | +
3.3 | +2.0.0 | +Deprecated | +Advanced Install notes for MKE 3.3 | +
Seealso
+Make sure you understand the limitations and the lack of Docker Swarm support.
+From MKE 3.6 onwards, it's recommended to use the HPE CSI Driver for Kubernetes Helm chart. There are no known caveats or workarounds at this time.
+Important
+Always ensure that the underlying Kubernetes version and worker node host OS of the MKE release conform to the latest compatibility and support table.
+At the time of the MKE 3.3 release, neither the HPE CSI Driver Helm chart nor the operator would install correctly.
+The MKE managers and workers need to run a supported host OS as outlined for the particular version of the HPE CSI Driver found in the release tables. Also verify that the HPE CSI Driver supports the version of Kubernetes used by MKE (see below).
+MKE admins need to familiarize themselves with the advanced install method of the CSI driver. Before the installation begins, make sure an account with administrative privileges is being used to deploy the driver. Also determine the actual Kubernetes version MKE is using.
+
kubectl version --short
+Client Version: v1.19.4
+Server Version: v1.18.10-mirantis-1
+
+In this particular example, Kubernetes 1.18 is being used. Follow the steps for 1.18 highlighted within the advanced install section of the deployment documentation.
+ConfigMap
.Next, add a supported HPE backend and create a StorageClass
.
Learn more about using the CSI objects in the comprehensive overview. Also make sure to familiarize yourself with the particular features and capabilities of the backend being used.
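As a minimal sketch, a backend Secret and a basic StorageClass could look like the example below. The backend address, credentials and object names are placeholders; the serviceName shown targets the Alletra 5000/6000 and Nimble CSP, and other backends use their respective CSP service. Adjust all values for the environment in use.
apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: nimble-csp-svc
  servicePort: "8080"
  backend: 192.168.1.2
  username: admin
  password: admin
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  accessProtocol: iscsi
allowVolumeExpansion: true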
+Provisioning Docker Volumes for Docker Swarm workloads from an HPE primary storage backend is deprecated.
++HPE and Red Hat have a long standing partnership to provide jointly supported software, platform and services with the absolute best customer experience in the industry.
+Red Hat OpenShift uses open source Kubernetes and various other components to deliver a PaaS experience that benefits both developers and operations. This packaged experience differs slightly on how you would deploy and use the HPE volume drivers and this page serves as the authoritative source for all things HPE primary storage and Red Hat OpenShift.
+Software deployed on OpenShift 4 follows the Operator pattern. CSI drivers are no exception.
+Software delivered through the HPE and Red Hat partnership follows a rigorous certification process, and only what's listed as "Certified" in the table below is qualified.
+Status | +Red Hat OpenShift | +HPE CSI Operator | +Container Storage Providers | +
---|---|---|---|
Certified | +4.16 EUS2 | +2.5.1 | +All | +
Certified | +4.15 | +2.4.1, 2.4.2, 2.5.1 | +All | +
Certified | +4.14 EUS2 | +2.4.0, 2.4.1, 2.4.2, 2.5.1 | +All | +
Certified | +4.13 | +2.4.0, 2.4.1, 2.4.2 | +All | +
Certified | +4.12 EUS2 | +2.3.0, 2.4.0, 2.4.1, 2.4.2 | +All | +
EOL1 | +4.11 | +2.3.0 | +All | +
EOL1 | +4.10 EUS2 | +2.2.1, 2.3.0 | +All | +
1 = End of life support per Red Hat OpenShift Life Cycle Policy.
+2 = Red Hat OpenShift Extended Update Support.
Check the table above periodically for future releases.
+Pointers
+PVCs
using "RWX" with volumeMode: Block
. See below for more details.By default, OpenShift prevents containers from running as root. Containers are run using an arbitrarily assigned user ID. Due to these security restrictions, containers that run on Docker and Kubernetes might not run successfully on Red Hat OpenShift without modification.
+Users deploying applications that require persistent storage (i.e. through the HPE CSI Driver) will need the appropriate permissions and Security Context Constraints (SCC) to be able to request and manage storage through OpenShift. Modifying container security to work with OpenShift is outside the scope of this document.
+For more information on OpenShift security, see Managing security context constraints.
+Note
+If you run into issues writing to persistent volumes provisioned by the HPE CSI Driver under a restricted SCC, add the fsMode: "0770"
parameter to the StorageClass
with RWO claims or fsMode: "0777"
for RWX claims.
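For example, a partial StorageClass parameters stanza (sketch) could include:
...
parameters:
  fsMode: "0770"
...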
Since the CSI Operator only provides "Basic Install" capabilities, the following limitations apply:
+ConfigMap
"hpe-linux-config" that controls host configuration is immutablePersistentVolumeClaims
as part of the installation. See #295 on GitHub.Namespace
other than "hpe-nfs" requires a separate SCC applied to the Namespace
. See #nfs_server_provisioner_considerations.The HPE CSI Operator for Kubernetes needs to be installed through the interfaces provided by Red Hat. Do not follow the instructions found on OperatorHub.io.
+Tip
+There's a tutorial available on YouTube accessible through the Video Gallery on how to install and use the HPE CSI Operator on Red Hat OpenShift.
+In situations where the operator needs to be upgraded, follow the prerequisite steps in the Helm chart on Artifact Hub.
+ +Automatic Updates
+Do not under any circumstance enable "Automatic Updates" for the HPE CSI Operator for Kubernetes
+Once the steps have been followed for the particular version transition:
Delete the HPECSIDriver instance.
Delete the CRD: oc delete crd/hpecsidrivers.storage.hpe.com
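As a sketch, assuming the instance was created with the default sample name in the "hpe-storage" Namespace, deleting the instance could look like:
oc delete -n hpe-storage hpecsidriver/hpecsidriver-sample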
Good to know
+Deleting the HPECSIDriver
instance and uninstalling the CSI Operator does not affect any running workloads, PersistentVolumeClaims
, StorageClasses
or other API resources created by the CSI Operator. In-flight operations and new requests will be retried once the new HPECSIDriver
has been instantiated.
The HPE CSI Driver needs to run in privileged mode and needs access to host ports, host network and should be able to mount hostPath volumes. Hence, before deploying HPE CSI Operator on OpenShift, please create the following SecurityContextConstraints
 (SCC) to allow the CSI driver to run with these privileges.
oc new-project hpe-storage --display-name="HPE CSI Driver for Kubernetes"
+
+Important
+The rest of this implementation guide assumes the default "hpe-storage" Namespace
. If a different Namespace
 is desired, update the ServiceAccount
Namespace
in the SCC below.
Deploy or download the SCC:
+
oc apply -f https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml
+securitycontextconstraints.security.openshift.io/hpe-csi-controller-scc created
+securitycontextconstraints.security.openshift.io/hpe-csi-node-scc created
+securitycontextconstraints.security.openshift.io/hpe-csi-csp-scc created
+securitycontextconstraints.security.openshift.io/hpe-csi-nfs-scc created
+
+Once the SCC has been applied to the project, log in to the OpenShift web console as kube:admin
and navigate to Operators -> OperatorHub.
+Search for 'HPE CSI' in the search field and select the non-marketplace version.
++Click 'Install'.
+Note
+Latest supported HPE CSI Operator on OpenShift 4.14 is 2.4.2
++Select the Namespace where the SCC was applied, select 'Manual' Update Approval, click 'Install'.
++Click 'Approve' to finalize installation of the Operator
++The HPE CSI Operator is now installed, select 'View Operator'.
++Click 'Create Instance'.
++Normally, no customizations are needed, scroll all the way down and click 'Create'.
+By navigating to the Developer view, it should now be possible to inspect the CSI driver and Operator topology.
+ +The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass
.
See Caveats below for information on creating StorageClasses
in Red Hat OpenShift.
This provides an example Operator deployment using oc
. If you want to use the web console, proceed to the previous section.
It's assumed the SCC has been applied to the project and that you have kube:admin
privileges. As an example, we'll deploy to the hpe-storage
project as described in previous steps.
First, an OperatorGroup
needs to be created.
apiVersion: operators.coreos.com/v1
+kind: OperatorGroup
+metadata:
+ name: hpe-csi-driver-for-kubernetes
+ namespace: hpe-storage
+spec:
+ targetNamespaces:
+ - hpe-storage
+
+Next, create a Subscription
to the Operator.
apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+ name: hpe-csi-operator
+ namespace: hpe-storage
+spec:
+ channel: stable
+ installPlanApproval: Manual
+ name: hpe-csi-operator
+ source: certified-operators
+ sourceNamespace: openshift-marketplace
+
+Next, approve the installation.
+
oc -n hpe-storage patch $(oc get installplans -n hpe-storage -o name) -p '{"spec":{"approved":true}}' --type merge
+
+The Operator will now be installed on the OpenShift cluster. Before instantiating a CSI driver, watch the roll-out of the Operator.
+
oc rollout status deploy/hpe-csi-driver-operator -n hpe-storage
+Waiting for deployment "hpe-csi-driver-operator" rollout to finish: 0 of 1 updated replicas are available...
+deployment "hpe-csi-driver-operator" successfully rolled out
+
+The next step is to create a HPECSIDriver
object.
# oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+ name: hpecsidriver-sample
+spec:
+ # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+ csp:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+ disable:
+ alletra6000: false
+ alletra9000: false
+ alletraStorageMP: false
+ nimble: false
+ primera: false
+ disableHostDeletion: false
+ disableNodeConfiguration: false
+ disableNodeConformance: false
+ disableNodeGetVolumeStats: false
+ disableNodeMonitor: false
+ imagePullPolicy: IfNotPresent
+ images:
+ csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
+ csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0
+ csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7
+ csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0
+ csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
+ csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
+ csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
+ csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
+ csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6
+ csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6
+ csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6
+ nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5
+ nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0
+ primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0
+ iscsi:
+ chapSecretName: ""
+ kubeletRootDir: /var/lib/kubelet
+ logLevel: info
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+
# oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+ name: hpecsidriver-sample
+spec:
+ # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ csp:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ disable:
+ alletra6000: false
+ alletra9000: false
+ alletraStorageMP: false
+ nimble: false
+ primera: false
+ disableNodeConfiguration: false
+ disableNodeConformance: false
+ disableNodeGetVolumeStats: false
+ imagePullPolicy: IfNotPresent
+ iscsi:
+ chapPassword: ""
+ chapUser: ""
+ kubeletRootDir: /var/lib/kubelet/
+ logLevel: info
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ registry: quay.io
+
+
+
# oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+ name: hpecsidriver-sample
+spec:
+ # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ csp:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ disable:
+ alletra6000: false
+ alletra9000: false
+ alletraStorageMP: false
+ nimble: false
+ primera: false
+ disableNodeConfiguration: false
+ disableNodeConformance: false
+ disableNodeGetVolumeStats: false
+ imagePullPolicy: IfNotPresent
+ iscsi:
+ chapPassword: ""
+ chapUser: ""
+ kubeletRootDir: /var/lib/kubelet/
+ logLevel: info
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ tolerations: []
+ registry: quay.io
+
+
+
The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass
.
At this point the CSI driver is managed like any other Operator on Kubernetes and the life-cycle management capabilities may be explored further in the official Red Hat OpenShift documentation.
+When uninstalling an operator managed by OLM, a Cluster Admin must decide whether or not to remove the CustomResourceDefinitions
(CRD), APIServices
, and resources related to these types owned by the operator. By design, when OLM uninstalls an operator it does not remove any of the operator’s owned CRDs
, APIServices
, or CRs
in order to prevent data loss.
Important
+Do not modify or remove these CRDs
or APIServices
if you are upgrading or reinstalling the HPE CSI driver in order to prevent data loss.
The following are CRDs
installed by the HPE CSI driver.
hpecsidrivers.storage.hpe.com
+hpenodeinfos.storage.hpe.com
+hpereplicationdeviceinfos.storage.hpe.com
+hpesnapshotgroupinfos.storage.hpe.com
+hpevolumegroupinfos.storage.hpe.com
+hpevolumeinfos.storage.hpe.com
+snapshotgroupclasses.storage.hpe.com
+snapshotgroupcontents.storage.hpe.com
+snapshotgroups.storage.hpe.com
+volumegroupclasses.storage.hpe.com
+volumegroupcontents.storage.hpe.com
+volumegroups.storage.hpe.com
+
+The following are APIServices
installed by the HPE CSI driver.
v1.storage.hpe.com
+v2.storage.hpe.com
+
+Please refer to the OLM Lifecycle Manager documentation on how to safely Uninstall your operator.
+When deploying NFS servers on OpenShift there are currently two things to keep in mind for a successful deployment. Also, make sure you understand the Limitations and Considerations for the NFS Server Provisioner in general.
+If NFS servers are deployed in a different Namespace
than the default "hpe-nfs" by using the "nfsNamespace" StorageClass
parameter, the "hpe-csi-nfs-scc" SCC needs to be updated to include the Namespace
ServiceAccount
.
This example adds "my-namespace" NFS server ServiceAccount
to the SCC:
oc patch scc hpe-csi-nfs-scc --type=json -p='[{"op": "add", "path": "/users/-", "value": "system:serviceaccount:my-namespace:hpe-csi-nfs-sa" }]'
+
+Object references in OpenShift are not compatible with the NFS Server Provisioner. If a user deploys an Operator of any kind that creates an NFS server-backed PVC
, the operation will fail. Instead, pre-provision the PVC
manually for the Operator instance to use.
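A pre-provisioned claim could look like the sketch below. The claim name, Namespace and the NFS-enabled StorageClass name "hpe-nfs" are examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-operator-data
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-nfs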
On certain versions of OpenShift the NFS clients may experience stale NFS file handles like the one below when the NFS server is being restarted.
+
Error: failed to resolve symlink "/var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount": lstat /var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount: stale NFS file handle
+
+If this problem occurs, use the ext4 filesystem on the backing volumes. The fsType
is set in the StorageClass
. Example:
...
+parameters:
+ csi.storage.k8s.io/fstype: ext4
+...
+
+If OpenShift Virtualization is being used and Live Migration is desired for virtual machine PVCs
cloned from the "openshift-virtualization-os-images" Namespace
, the StorageProfile
needs to be updated to "ReadWriteMany".
Info
+These steps are not necessary on recent OpenShift EUS (v4.12.11 onwards) releases as the default StorageProfile
for "csi.hpe.com" has been corrected upstream.
If the default StorageClass
is named "hpe-standard", issue the following command:
oc edit -n openshift-cnv storageprofile hpe-standard
+
+Replace the spec: {}
with the following:
spec:
+ claimPropertySets:
+ - accessModes:
+ - ReadWriteMany
+ volumeMode: Block
+
+Ensure there are no errors. Recreate the OS images:
+
oc delete pvc -n openshift-virtualization-os-images --all
+
+Inspect the PVCs
and ensure they are re-created with "RWX":
oc get pvc -n openshift-virtualization-os-images -w
+
+Hint
+The "accessMode" transformation for block volumes from RWO PVC to RWX clone has been resolved in HPE CSI Driver v2.5.0. Regardless, using source RWX PVs will simplify the workflows for users.
+With HPE CSI Operator for Kubernetes v2.4.2 and older there's an issue that prevents live migration of VMs that have PVCs attached that have been cloned from an OS image residing on Alletra Storage MP backends, including 3PAR, Primera and Alletra 9000.
Identify the PVC
 that has been cloned from an OS image. The VM name is "centos7-silver-bedbug-14" in this case.
oc get vm/centos7-silver-bedbug-14 -o jsonpath='{.spec.template.spec.volumes}' | jq
+
+In this instance, the dataVolume
 has the same name as the VM. Grab the PV
name from the PVC
name.
MY_PV_NAME=$(oc get pvc/centos7-silver-bedbug-14 -o jsonpath='{.spec.volumeName}')
+
+Next, patch the hpevolumeinfo
CRD
.
oc patch hpevolumeinfo/${MY_PV_NAME} --type=merge --patch '{"spec": {"record": {"MultiInitiator": "true"}}}'
+
+The VM is now ready to be migrated.
+Hint
+If there are multiple dataVolumes
, each one needs to be patched.
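A sketch of how that could be scripted for the example VM above, assuming all relevant volumes are dataVolumes backed by PVCs of the same name:
# Iterate over the VM's dataVolumes, resolve each PVC to its PV and patch the corresponding hpevolumeinfo
for dv in $(oc get vm/centos7-silver-bedbug-14 -o jsonpath='{range .spec.template.spec.volumes[*]}{.dataVolume.name}{"\n"}{end}'); do
  pv=$(oc get pvc/${dv} -o jsonpath='{.spec.volumeName}')
  oc patch hpevolumeinfo/${pv} --type=merge --patch '{"spec": {"record": {"MultiInitiator": "true"}}}'
done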
In the event an older version of the Operator needs to be installed, the bundle can be installed directly using the Operator SDK. Make sure a recent version of the operator-sdk
binary is available and that no HPE CSI Driver is currently installed on the cluster.
Install a specific version up to and including v2.4.2:
+
operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle:v2.4.2
+
+Install a specific version, v2.5.0 or later:
+
operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle-ocp:v2.5.0
+
+Important
+Once the Operator is installed, a HPECSIDriver
instance needs to be created. Follow the steps using the web console or the CLI to create an instance.
When the unsupported install isn't needed any longer, run:
+
operator-sdk cleanup -n hpe-storage hpe-csi-operator
+
+In the event Red Hat releases a new version of OpenShift between HPE CSI Driver releases or if interest arises to run the HPE CSI Driver on an uncertified version of OpenShift, it's possible to install the CSI driver using the Helm chart instead.
+It's not recommended to install the Helm chart unless it's listed as "Field Tested" in the support matrix above.
+Tip
+The Helm chart install is currently also the only method to use beta releases of the HPE CSI Driver.
+Apply the SCC
in the Namespace
(Project) you wish to install the driver.Unsupported
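A sketch of the Helm steps follows, assuming the chart repository published by HPE and an existing "hpe-storage" project; chart values may need to be adjusted for the OpenShift release in question.
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
helm repo update
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage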
+Understand that this method is not supported by Red Hat and not recommended for production workloads or clusters.
+"Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Linux, KVM, Kubernetes, KubeVirt, and Longhorn. Designed for users looking for a flexible and affordable solution to run cloud-native and virtual machine (VM) workloads in your datacenter and at the edge, Harvester provides a single pane of glass for virtualization and cloud-native workload management."1
+HPE supports the underlying host OS, SLE Micro, using the HPE CSI Driver for Kubernetes and the Rancher Kubernetes Engine 2 (RKE2), which is a CNCF certified Kubernetes distribution. Harvester embeds KubeVirt and uses standard CSI storage constructs to manage storage resources for virtual machines.
+Many of the features provided by Harvester stem from the capabilities of KubeVirt. The HPE CSI Driver for Kubernetes provides "ReadWriteMany" block storage, which allows seamless migration of VMs between hosts with disks attached. The NFS Server Provisioner may be used by disparate VMs that need "ReadWriteMany" to share data.
+These limitations are framed around the integration of the HPE CSI Driver for Kubernetes and Harvester. Other limitations may apply.
+Since Harvester is a hyper-converged infrastructure platform in its own right, the storage components are already embedded in the platform using Longhorn. Longhorn is designed to run from local server storage and today it's not practical to replace Longhorn with CSI capable storage from HPE. The Harvester servers may use boot from SAN and other means in terms of external storage to provide capacity to Longhorn but Longhorn would still be used to create VM images and machines.
+Storage provided by platforms supported by the HPE CSI Driver for Kubernetes is complementary and non-boot disks may be easily provisioned and attached to VM workloads.
+Info
+The VM boot limitation is solely implemented by Harvester in front of KubeVirt. Any other KubeVirt platform would allow booting from storage resources provided by HPE CSI Driver for Kubernetes.
+As per best practice HPE recommends using dedicated iSCSI networks for data traffic between the Harvester nodes and the storage platform.
+Ancillary network configuration of Harvester nodes is managed as a post-install step. Creating network configuration files for Harvester nodes is beyond the scope of this document. Follow the guides provided by Harvester.
+ +In a typical setup the IP addresses are assigned by DHCP on the NIC directly without any bridges, VLANs or bonds. The updates that needs to be done to /oem/90_custom.yaml
on each compute node to reflect this configuration are described below.
Insert the block after the management interface configuration and replace the interface names ens224
and ens256
with the actual interface names on your compute nodes. List the available interfaces on the compute node prompt with ip link
.
...
+ - path: /etc/sysconfig/network/ifcfg-ens224
+ permissions: 384
+ owner: 0
+ group: 0
+ content: |
+ STARTMODE='onboot'
+ BOOTPROTO='dhcp'
+ DHCLIENT_SET_DEFAULT_ROUTE='no'
+ encoding: ""
+ ownerstring: ""
+ - path: /etc/sysconfig/network/ifcfg-ens256
+ permissions: 384
+ owner: 0
+ group: 0
+ content: |
+ STARTMODE='onboot'
+ BOOTPROTO='dhcp'
+ DHCLIENT_SET_DEFAULT_ROUTE='no'
+ encoding: ""
+ ownerstring: ""
+ ...
+
+Reboot the node and verify that IP addresses have been assigned to the NICs by running ip addr show dev <interface name>
on the compute node prompt.
+The HPE CSI Driver for Kubernetes is installed on Harvester by using the standard procedures for installing the CSI driver with Helm. Helm requires access to the Harvester cluster through the Kubernetes API. You can download the Harvester cluster KubeConfig file by visiting the dashboard on your cluster and clicking "support" in the lower left corner of the UI.
+ +Note
+It does not matter if Harvester is managed by Rancher or running standalone. If the cluster is managed by Rancher, then go to the Virtualization Management dashboard and select "Download KubeConfig" in the dotted context menu of the cluster.
+SUSE Rancher provides a platform to deploy Kubernetes-as-a-service everywhere. HPE partners with SUSE Rancher to provide effortless management of the CSI driver on managed Kubernetes clusters. This allows our joint customers and channel partners to enable hybrid cloud stateful workloads on Kubernetes.
+Rancher is capable of managing Kubernetes across a broad spectrum of managed and BYO clusters. It's important to understand that the HPE CSI Driver for Kubernetes does not support the same amount of combinations Rancher does. Consult the support matrix on the CSI driver overview page for the supported combinations of the HPE CSI Driver, Kubernetes and supported node operating systems.
+Rancher uses Helm to deploy and manage partner software. The concept of a Helm repository in Rancher is organized under "Apps" in the Rancher UI. The HPE CSI Driver for Kubernetes is a partner solution present in the official Partner repository.
+Rancher release | +Install methods | +Recommended CSI driver | +
---|---|---|
2.7 | +Cluster Manager App Chart | +latest | +
2.8 | +Cluster Manager App Chart | +latest | +
Tip
+Learn more about Helm Charts and Apps in the Rancher documentation
+The HPE CSI Driver is part of the official Partner repository in Rancher. The CSI driver is deployed on managed Kubernetes clusters like any ordinary "App" in Rancher.
+Note
+In Rancher 2.5 an "Apps & Marketplace" component was introduced in the new "Cluster Explorer" interface. This is the new interface moving forward. Upcoming releases of the HPE CSI Driver for Kubernetes will only support installation via "Apps & Marketplace".
+Navigate to "Apps" and select "Charts", search for "HPE".
++Rancher Cluster Explorer
+For Rancher workloads to make use of persistent storage from HPE, a supported backend needs to be configured with a Secret
along with a StorageClass
. These procedures are generic regardless of Kubernetes distribution and install method being used.
Introduced in Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 is the ability to deploy the HPE Storage Array Exporter for Prometheus and HPE CSI Info Metrics Provider for Prometheus directly from the same Rancher Apps interface. These Helm charts have been enhanced to include support for Rancher Monitoring.
+Tip
+Make sure to tick "Enable ServiceMonitor" in the "ServiceMonitor settings" when configuring the ancillary Prometheus apps to work with Rancher Monitoring.
+VMware Tanzu Kubernetes Grid Integrated Engine (TKGI) is supported by the HPE CSI Driver for Kubernetes.
+VMware and HPE have a long standing partnership across each of the product portfolios. Allowing TKGI users to access persistent storage with the HPE CSI Driver accelerates stateful workload performance, scalability and efficiency.
+Learn more about the partnership and enablement on the VMware Marketplace.
+It's important to verify that the host OS and Kubernetes version is supported by the HPE CSI Driver.
+It's highly recommended to use the Helm chart to install the CSI driver, as a different "kubeletRootDir" than the default is required for the driver to start and work properly.
+Example workflow.
+
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
+kubectl create ns hpe-storage
+helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage \
+ --set kubeletRootDir=/var/vcap/data/kubelet
+
+Seealso
+Learn more about the supported parameters of the Helm chart on ArtifactHub.
+For TKGI workloads to make use of persistent storage from HPE, a supported backend needs to be configured along with a StorageClass
. These procedures are generic regardless of Kubernetes distribution being used.
+VMware vSphere Container Storage Plug-in, also known as the upstream vSphere CSI Driver, exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. The term Cloud Native Storage (CNS) is the vCenter abstraction point and is made up of two parts: a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the CNS UI within vCenter.
+CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE GreenLake for Block Storage, Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use.
+Volume parameters available to the vSphere Container Storage Plug-in will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide (includes HPE Alletra Storage MP, Alletra 9000 and 3PAR) or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide (includes HPE Alletra 5000/6000 and dHCI) for list of available features.
+For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP.
+Feature | +HPE CSI Driver | +vSphere Container Storage Plug-in | +
---|---|---|
vCenter Cloud Native Storage (CNS) UI Support | +No | +GA | +
Dynamic Block PV Provisioning (ReadWriteOnce access mode) | +GA | +GA (vVOL) | +
Dynamic File Provisioning (ReadWriteMany access mode) | +GA | +GA (vSan Only) | +
Volume Snapshots (CSI) | +GA | +GA (vSphere 7.0u3) | +
Volume Cloning from VolumeSnapshot (CSI) | +GA | +GA | +
Volume Cloning from PVC (CSI) | +GA | +GA | +
Volume Expansion (CSI) | +GA | +GA (vSphere 7.0u2) | +
RWO Raw Block Volume (CSI) | +GA | +GA | +
RWX/ROX Raw Block Volume (CSI) | +GA | +No | +
Generic Ephemeral Volumes (CSI) | +GA | +GA | +
Inline Ephemeral Volumes (CSI) | +GA | +No | +
Topology (CSI) | +No | +GA | +
Volume Health (CSI) | +No | +GA (vSan only) | +
CSI Controller multiple replica support | +No | +GA | +
Windows support | +No | +GA | +
Volume Encryption | +GA | +GA (via VMcrypt) | +
Volume Mutator1 | +GA | +No | +
Volume Groups1 | +GA | +No | +
Snapshot Groups1 | +GA | +No | +
Peer Persistence Replication3 | +GA | +No4 | +
+ 1 = Feature comparison based upon HPE CSI Driver for Kubernetes 2.4.0 and the vSphere Container Storage Plug-in 3.1.2
+ 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers.
+ 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR storage systems.
+ 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up to the vSphere Container Storage Plug-in. Peer Persistence works with the vSphere Container Storage Plug-in when using VMFS datastores.
+
Please refer to Compatibility Matrices for vSphere Container Storage Plug-in for the most up-to-date information.
+The HPE CSI Driver for Kubernetes is only supported on specific versions of worker node operating systems and Kubernetes versions, these requirements applies to any worker VM running on vSphere.
+Some Kubernetes distributions, when running on vSphere may only support the vSphere Container Storage Plug-in, such an example is VMware Tanzu. Ensure the Kubernetes distribution being used support 3rd party CSI drivers (such as the HPE CSI Driver) and fulfill the requirements in Features and Capabilities before deciding which CSI driver to use.
+HPE does not test or qualify the vSphere Container Storage Plug-in for any particular storage backend besides point solutions1. As long as the storage platform is supported by vSphere, VMware will support the vSphere Container Storage Plug-in.
+VMware vSphere with Tanzu and HPE Alletra dHCI1
+HPE provides a turnkey solution for Kubernetes using VMware Tanzu and HPE Alletra dHCI. Learn more.
+When considering to use block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help best determine which CSI driver can be deployed within your Kubernetes clusters.
+Important
+Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not available, HPE recommends the use of the VMware vSphere Container Storage Plug-in to deliver block-based persistent storage from HPE GreenLake for Block Storage, Alletra, Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol.
+The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV).
+Protocol | +HPE CSI Driver for Kubernetes | +vSphere Container Storage Plug-in | +
---|---|---|
FC | +Not supported | +Supported* | +
NVMe-oF | +Not supported | +Supported* | +
iSCSI | +Supported | +Supported* | +
* = Limited to the SPBM implementation of the underlying storage array.
+Learn how to deploy the vSphere Container Storage Plug-in:
+1 = The HPE authored deployment guide for vSphere Container Storage Plug-in 2.4 has been preserved here.
+Tip
+Most non-vanilla Kubernetes distributions when deployed on vSphere manage and support the vSphere Container Storage Plug-in directly. That includes Red Hat OpenShift, SUSE Rancher, Charmed Kubernetes (Canonical), Google Anthos and Amazon EKS Anywhere.
+VMware provides enterprise grade support for the vSphere Container Storage Plug-in. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team.
+For support information on the HPE CSI Driver for Kubernetes, visit Support. For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center.
+ +This deployment guide is deprecated. Learn more here.
+Cloud Native Storage (CNS) for vSphere exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. CNS is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the new CNS UI within vCenter.
+CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use.
+Tip
+Check out the tutorial available on YouTube in the Video Gallery on how to configure and use HPE storage with Cloud Native Storage for vSphere.
+Watch the video in its entirety or skip to configuring Tanzu with HPE storage or configuring the vSphere CSI Driver with HPE storage.
+Volume parameters available to the vSphere CSI Driver will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide for list of available features.
For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP.
Feature | +HPE CSI Driver | +vSphere CSI Driver | +
---|---|---|
vCenter Cloud Native Storage (CNS) UI Support | +No | +GA | +
Dynamic Block PV Provisioning (ReadWriteOnce access mode) | +GA | +GA (vVOL) | +
Dynamic File Provisioning (ReadWriteMany access mode) | +GA | +GA (vSan Only) | +
Volume Snapshots (CSI) | +GA | +Alpha (2.4.0) | +
Volume Cloning from VolumeSnapshot (CSI) | +GA | +No | +
Volume Cloning from PVC (CSI) | +GA | +No | +
Volume Expansion (CSI) | +GA | +GA (offline only) | +
Raw Block Volume (CSI) | +GA | +Alpha | +
Generic Ephemeral Volumes (CSI) | +GA | +GA | +
Inline Ephemeral Volumes (CSI) | +GA | +No | +
Topology (CSI) | +No | +GA | +
Volume Health (CSI) | +No | +GA (vSan only) | +
CSI Controller multiple replica support | +No | +GA | +
Volume Encryption | +GA | +GA (via VMcrypt) | +
Volume Mutator1 | +GA | +No | +
Volume Groups1 | +GA | +No | +
Snapshot Groups1 | +GA | +No | +
Peer Persistence Replication3 | +GA | +No4 | +
+ 1 = Feature comparison based upon HPE CSI Driver for Kubernetes v2.1.1 and the vSphere CSI Driver v2.4.1
+ 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers.
+ 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra 9000 and Primera storage systems.
+ 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up to the vSphere CSI Driver. Peer Persistence works with the vSphere CSI Driver when using VMFS datastores.
+
Please refer to vSphere CSI Driver - Supported Features Matrix for the most up-to-date information.
+When considering to use block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help best determine which CSI driver can be deployed within your Kubernetes clusters.
+Important
+Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not available, HPE recommends the use of the VMware vSphere CSI driver to deliver block-based persistent storage from HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol.
+
+The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV).
Protocol | +HPE CSI Driver for Kubernetes | +vSphere CSI driver | +
---|---|---|
FC | +Not supported | +Supported* | +
iSCSI | +Supported | +Supported* | +
*
= Limited to the SPBM implementation of the underlying storage array
This guide will cover the configuration and deployment of the vSphere CSI driver. Cloud Native Storage for vSphere uses the VASA provider and Storage Policy Based Management (SPBM) to create First Class Disks on supported arrays.
+CNS supports VMware vSphere 6.7 U3 and higher.
+Refer to the following guides to configure the VASA provider and create a vVol Datastore.
+Storage Array | +Guide | +
---|---|
HPE Alletra 9000 | +HPE Alletra 9000: VMware ESXi Implementation Guide | +
HPE Primera | +VMware vVols with HPE Primera Storage | +
HPE Nimble Storage | +Working with VMware Virtual Volumes | +
HPE Nimble Storage dHCI & HPE Alletra 5000/6000 | +HPE Nimble Storage dHCI and VMware vSphere New Servers Deployment Guide | +
HPE 3PAR | +Implementing VMware Virtual Volumes on HPE 3PAR StoreServ | +
Once the vVol Datastore is created, create a VM Storage Policy. From the vSphere Web Client, click Menu and select Policies and Profiles.
+ +Click on VM Storage Policies, and then click Create.
+ +Next provide a name for the policy. Click NEXT.
+ +Under Datastore specific rules, select either:
+Click NEXT.
+ +Next click ADD RULE. Choose from the various options available to your array.
+ +Below is an example of a VM Storage Policy for Primera. This may vary depending on your requirements and options available within your array. Once complete, click NEXT.
+ +Under Storage compatibility, verify the correct vVol datastore is shown as compatible to the options chosen in the previous screen. Click NEXT.
+ +Verify everything looks correct and click FINISH. Repeat this process for any additional Storage Policies you may need.
+ +Now that we have configured a Storage Policy, we can proceed with the deployment of the vSphere CSI driver.
+This is adapted from the following tutorial, please read over to understand all of the vSphere, firewall and guest OS requirements.
+ +Note
+The following is a simplified single-site configuration to demonstrate how to deploy the vSphere CPI and CSI drivers. Make sure to adapt the configuration to match your environment and needs.
+Check if ProviderID
is already configured on your cluster.
kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'
+
+If this command returns empty, then proceed with configuring the vSphere Cloud Provider.
+If the ProviderID
is set, then you can proceed directly to installing the vSphere CSI Driver.
$ kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'
+vsphere://4238c1a1-e72f-74bf-db48-0d9f4da3e9c9
+vsphere://4238ede5-50e1-29b6-1337-be8746a5016c
+vsphere://4238c6dc-3806-ce36-fd14-5eefe830b227
+
+Create a vsphere.conf
file.
Note
+The vsphere.conf
is a hardcoded filename used by the vSphere Cloud Provider. Do not change it otherwise the Cloud Provider will not deploy correctly.
Set the vCenter server FQDN or IP and vSphere datacenter object name to match your environment.
+Copy and paste the following.
+
# Global properties in this section will be used for all specified vCenters unless overridden in vCenter section.
+global:
+ port: 443
+ # Set insecureFlag to true if the vCenter uses a self-signed cert
+ insecureFlag: true
+ # Where to find the Secret used for authentication to vCenter
+ secretName: cpi-global-secret
+ secretNamespace: kube-system
+
+# vcenter section
+vcenter:
+ tenant-k8s:
+ server: <vCenter FQDN or IP>
+ datacenters:
+ - <vCenter Datacenter name>
+
+Create the ConfigMap
from the vsphere.conf
file.
kubectl create configmap cloud-config --from-file=vsphere.conf -n kube-system
+
+The below YAML declarations are meant to be created with kubectl create
. Either copy the content to a file on the host where kubectl
is being executed, or copy & paste into the terminal, like this:
kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+
+Next create the CPI Secret
.
apiVersion: v1
+kind: Secret
+metadata:
+ name: cpi-global-secret
+ namespace: kube-system
+stringData:
+ <vCenter FQDN or IP>.username: "Administrator@vsphere.local"
+ <vCenter FQDN or IP>.password: "VMware1!"
+
+Note
+The username and password within the Secret
are case-sensitive.
Inspect the Secret
to verify it was created successfully.
kubectl describe secret cpi-global-secret -n kube-system
+
+The output is similar to this:
+
Name: cpi-global-secret
+Namespace: kube-system
+Labels: <none>
+Annotations: <none>
+
+Type: Opaque
+
+Data
+====
+vcenter.example.com.password: 8 bytes
+vcenter.example.com.username: 27 bytes
+
+Before installing vSphere Cloud Controller Manager, make sure all nodes are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
. When the kubelet
is started with “external” cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud provider initializes this node, the kubelet
removes this taint.
To find your node names, run the following command.
+
kubectl get nodes
+
+NAME STATUS ROLES AGE VERSION
+cp1 Ready control-plane,master 46m v1.20.1
+node1 Ready <none> 44m v1.20.1
+node2 Ready <none> 44m v1.20.1
+
+To create the taint, run the following command for each node in your cluster.
+
kubectl taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
+
+Verify the taint has been applied to each node.
+
kubectl describe nodes | egrep "Taints:|Name:"
+
+The output is similar to this:
+
Name: cp1
+Taints: node-role.kubernetes.io/master:NoSchedule
+Name: node1
+Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
+Name: node2
+Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
+
+There are 3 manifests that must be deployed to install the vSphere Cloud Provider Interface (CPI). The following example applies the RBAC roles and the RBAC bindings to your Kubernetes cluster. It also deploys the Cloud Controller Manager in a DaemonSet.
+
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
+
+Verify vsphere-cloud-controller-manager
is running.
kubectl rollout status ds/vsphere-cloud-controller-manager -n kube-system
+daemon set "vsphere-cloud-controller-manager" successfully rolled out
+
+Note
+If you happen to make an error with the vsphere.conf, simply delete the CPI components and the ConfigMap
, make any necessary edits to the vsphere.conf file, and reapply the steps above.
Now that the CPI is installed, we can proceed with deploying the vSphere CSI driver.
+The following has been adapted from the vSphere CSI driver installation guide. Refer to the official documentation for additional information on how to deploy the vSphere CSI driver.
+ +Since we are connecting to block storage provided from an HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR array, we will create a configuration file for block volumes.
+Create a csi-vsphere.conf file.
+Copy and paste the following:
+
[Global]
+cluster-id = "csi-vsphere-cluster"
+
+[VirtualCenter "<IP or FQDN>"]
+insecure-flag = "true"
+user = "Administrator@vsphere.local"
+password = "VMware1!"
+port = "443"
+datacenters = "<vCenter datacenter>"
+
+Create a Kubernetes Secret
that will contain the configuration details to connect to your vSphere environment.
kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf -n kube-system
+
+Verify that the Secret
was created successfully.
kubectl get secret vsphere-config-secret -n kube-system
+NAME TYPE DATA AGE
+vsphere-config-secret Opaque 1 43s
+
+For security purposes, it is advised to remove the csi-vsphere.conf file.
+Deployment
and vSphere CSI node DaemonSet
¶Check the official vSphere CSI Driver Github repo for the latest version.
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-controller-deployment.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-node-ds.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/rbac/vsphere-csi-controller-rbac.yaml
+
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-controller-deployment.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-node-ds.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/rbac/vsphere-csi-controller-rbac.yaml
+
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-controller-deployment.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-node-ds.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/rbac/vsphere-csi-controller-rbac.yaml
+
Verify that the vSphere CSI driver has been successfully deployed using kubectl rollout status
.
kubectl rollout status deployment/vsphere-csi-controller -n kube-system
+deployment "vsphere-csi-controller" successfully rolled out
+
+kubectl rollout status ds/vsphere-csi-node -n kube-system
+daemon set "vsphere-csi-node" successfully rolled out
+
+Verify that the vSphere CSI driver CustomResourceDefinition
has been registered with Kubernetes.
kubectl describe csidriver/csi.vsphere.vmware.com
+Name: csi.vsphere.vmware.com
+Namespace:
+Labels: <none>
+Annotations: <none>
+API Version: storage.k8s.io/v1
+Kind: CSIDriver
+Metadata:
+ Creation Timestamp: 2020-11-21T06:27:23Z
+ Managed Fields:
+ API Version: storage.k8s.io/v1beta1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ f:attachRequired:
+ f:podInfoOnMount:
+ f:volumeLifecycleModes:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2020-11-21T06:27:23Z
+ Resource Version: 217131
+ Self Link: /apis/storage.k8s.io/v1/csidrivers/csi.vsphere.vmware.com
+ UID: bcda2b5c-3c38-4256-9b91-5ed248395113
+Spec:
+ Attach Required: true
+ Pod Info On Mount: false
+ Volume Lifecycle Modes:
+ Persistent
+Events: <none>
+
+Also verify that the vSphere CSINodes CustomResourceDefinition
has been created.
kubectl get csinodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[].name}{"\n"}{end}'
+cp1 csi.vsphere.vmware.com
+node1 csi.vsphere.vmware.com
+node2 csi.vsphere.vmware.com
+
+If there are no errors, the vSphere CSI driver has been successfully deployed.
+With the vSphere CSI driver deployed, let's create a StorageClass
that can be used by the CSI driver.
Important
+The following steps will be using the example VM Storage Policy created at the beginning of this guide. If you do not have a Storage Policy available, refer to Configuring a VM Storage Policy before proceeding to the next steps.
+
kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: primera-default-sc
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.vsphere.vmware.com
+parameters:
+ storagepolicyname: "primera-default-profile"
+
+With the vSphere CSI driver deployed and a StorageClass
 available, let's run through some tests to verify it is working correctly.
In this example, we will be deploying a stateful MongoDB application with 3 replicas. The persistent volumes deployed by the vSphere CSI driver will be created using the VM Storage Policy and placed on a compatible vVol datastore.
+This is an example MongoDB chart using a StatefulSet. The default volume size is 8Gi; if you want to change that, use --set persistence.size=50Gi
.
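If the Bitnami chart repository hasn't been added already, a typical first step would be:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update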
helm install mongodb \
+ --set architecture=replicaset \
+ --set replicaSetName=mongod \
+ --set replicaCount=3 \
+ --set auth.rootPassword=secretpassword \
+ --set auth.username=my-user \
+ --set auth.password=my-password \
+ --set auth.database=my-database \
+ bitnami/mongodb
+
+Verify that the MongoDB application has been deployed. Wait for pods to start running and PVCs to be created for each replica.
+
kubectl rollout status sts/mongodb
+
+Inspect the Pods
and PersistentVolumeClaims
.
kubectl get pods,pvc
+NAME READY STATUS RESTARTS AGE
+mongod-0 1/1 Running 0 90s
+mongod-1 1/1 Running 0 71s
+mongod-2 1/1 Running 0 44s
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+datadir-mongodb-0 Bound pvc-fd3994fb-a5fb-460b-ab17-608a71cdc337 50Gi RWO primera-default-sc 13m
+datadir-mongodb-1 Bound pvc-a3755dbe-210d-4c7b-8ac1-bb0607a2c537 50Gi RWO primera-default-sc 13m
+datadir-mongodb-2 Bound pvc-22bab0f4-8240-48c1-91b1-3495d038533e 50Gi RWO primera-default-sc 13m
+
+To interact with the Mongo replica set, you can connect to the StatefulSet.
+
kubectl exec -it sts/mongod bash
+
+root@mongod-0:/# df -h /bitnami/mongodb
+Filesystem Size Used Avail Use% Mounted on
+/dev/sdb 49G 374M 47G 1% /bitnami/mongodb
+
+We can see that the vSphere CSI driver has successfully provisioned and mounted the persistent volume to /bitnami/mongodb.
+Verify that the volumes are now visible within the Cloud Native Storage interface by logging into the vSphere Web Client.
+Click on Datacenter, then the Monitor tab. Expand Cloud Native Storage and highlight Container Volumes.
+From here, we can see the persistent volumes that were created as part of our MongoDB deployment. These should match the kubectl get pvc
output from earlier. You can also monitor their storage policy compliance status.
This concludes the validations and verifies that all components of vSphere CNS (vSphere CPI and vSphere CSI drivers) are deployed and working correctly.
+VMware provides enterprise grade support for the vSphere CSI driver. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team.
+For support information on the HPE CSI Driver for Kubernetes, visit Support. For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center.
+ +' + escapeHtml(summary) +'
' + noResultsText + '
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..da6cda2a --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"index.html","text":"HPE Storage Container Orchestrator Documentation \u00b6 This is an umbrella documentation project for the HPE CSI Driver for Kubernetes and neighboring ecosystems for HPE primary storage including HPE Alletra Storage MP, Alletra 9000, Alletra 5000/6000, Nimble Storage, Primera and 3PAR storage systems. The documentation is tailored for IT Ops, developers and technology partners. Use the navigation on the left-hand side to explore the different topics. Feel free to contribute to this project but please read the contributing guidelines . Use the navigation to the left. Not sure what you're looking for? \u2192 Get started ! Did you know? SCOD is \"docs\" in reverse?","title":"Home"},{"location":"index.html#hpe_storage_container_orchestrator_documentation","text":"This is an umbrella documentation project for the HPE CSI Driver for Kubernetes and neighboring ecosystems for HPE primary storage including HPE Alletra Storage MP, Alletra 9000, Alletra 5000/6000, Nimble Storage, Primera and 3PAR storage systems. The documentation is tailored for IT Ops, developers and technology partners. Use the navigation on the left-hand side to explore the different topics. Feel free to contribute to this project but please read the contributing guidelines . Use the navigation to the left. Not sure what you're looking for? \u2192 Get started ! Did you know? 
SCOD is \"docs\" in reverse?","title":"HPE Storage Container Orchestrator Documentation"},{"location":"container_storage_provider/index.html","text":"Container Storage Providers \u00b6 HPE Alletra 5000/6000 and Nimble Storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR","title":"Container Storage Providers"},{"location":"container_storage_provider/index.html#container_storage_providers","text":"HPE Alletra 5000/6000 and Nimble Storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR","title":"Container Storage Providers"},{"location":"container_storage_provider/hpe_alletra_6000/index.html","text":"Introduction \u00b6 The HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider (\"CSP\") for Kubernetes is the reference implementation for the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important array setup requirements. Important For a successful deployment, it's important to understand the array platform requirements found within the CSI driver (compute node OS and Kubernetes versions) and the CSP. Introduction Platform Requirements Setting Up the Array Single Tenant Deployment Multitenant Deployment Tenant Limitations Limitations StorageClass Parameters Common Parameters for Provisioning and Cloning Provisioning Parameters Cloning Parameters Import Parameters Pod Inline Volume Parameters (Local Ephemeral Volumes) VolumeGroupClass Parameters VolumeSnapshotClass Parameters Static Provisioning Persistent Volume Persistent Volume Claim Seealso There's a brief introduction on how to use HPE Nimble Storage with the HPE CSI Driver in the Video Gallery. It also applies broadly to HPE Alletra 5000/6000. Platform Requirements \u00b6 Always check the corresponding CSI driver version in compatibility and support for the required array Operating System (\"OS\") version for a particular release of the driver. If a certain feature is gated against a certain version of the array OS it will be called out where applicable. Tip The documentation reflected here always corresponds to the latest supported version and may contain references to future features and capabilities. Setting Up the Array \u00b6 How to deploy an HPE storage array is beyond the scope of this document. Please refer to HPE InfoSight for further reading. Important The HPE Nimble Storage Linux Toolkit (NLT) is not compatible with the HPE CSI Driver for Kubernetes. Do not install NLT on Kubernetes compute nodes. It may be installed on Kubernetes control plane nodes if they use iSCSI or FC storage from the array. Single Tenant Deployment \u00b6 The CSP requires access to a user with either poweruser or the administrator role. It's recommended to use the poweruser role for least privilege practices. Tip It's highly recommended to deploy a multitenant setup. Multitenant Deployment \u00b6 In array OS 6.0.0 and newer it's possible to create separate tenants using the tenantadmin CLI to assign folders to a tenant. This creates a secure and logical separation of storage resources between Kubernetes clusters. No special configuration is needed on the Kubernetes cluster when using a tenant account or a regular user account. 
It's important to understand from a provisioning perspective that if the tenant account being used has been assigned multiple folders, the CSP will pick the folder with the most space available. If this is not desirable and a 1:1 StorageClass to Folder mapping is needed, the \"folder\" parameter needs to be called out in the StorageClass . For reference, as of array OS 6.0.0, this is the tenantadmin command synopsis. $ tenantadmin --help Usage: tenantadmin [options] Manage Tenants. Available options are: --help Program help. --list List Tenants. --info name Tenant info. --add tenant_name Add a tenant. --folders folders List of folder paths (comma separated pool_name:fqn) the tenant will be able to access (mandatory). --remove name Remove a tenant. --add_folder tenant_name Add a folder path for tenant access. --name folder_name Name of the folder path (pool_name:fqn) to be added (mandatory). --remove_folder tenant_name Remove a folder path from tenant access. --name folder_name Name of the folder path (pool_name:fqn) to be removed (mandatory). --passwd Change tenant's login password. --tenant name Change a specific tenant's login password (mandatory). Caution The tenantadmin command may only be run by local array OS administrators. LDAP or Active Directory accounts, regardless of role, are not supported. Visit the array admin guide on HPE InfoSight to learn more about how to use the tenantadmin CLI. Tenant Limitations \u00b6 Some features may be limited and restricted in a multitenant deployment, such as arbitrarily importing volumes from folders on the array the tenant isn't a user of. Here are a few less obvious limitations. CHAP is configured globally for the CSI driver. The CSI driver is contracted to create the CHAP user if it doesn't exist. It's important that the CHAP user does not exist beforehand when used with a tenant, as tenants may not share CHAP users among themselves or with the admin account. Both ports 443 and 5392 need to be exposed to the Kubernetes cluster in multitenant deployments. Seealso An in-depth tutorial on how to use multitenancy and the tenantadmin CLI is available on HPE Developer: Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble Storage . There's also a high-level overview of multitenancy available as a lightboard presentation on YouTube . Limitations \u00b6 Consult the compatibility and support table for supported array OS versions. CSI and CSP-specific limitations are listed below. Striped volumes on grouped arrays are not supported by the CSI driver. The CSP is not capable of provisioning or importing volumes protected by Peer Persistence. When using an FC-only array and provisioning RWX block volumes, the \"multi_initiator\" attribute won't get set properly on the volume. The workaround is to run group --edit --iscsi_enabled yes on the Array OS CLI. StorageClass Parameters \u00b6 A StorageClass is used to provision or clone a persistent volume. It can also be used to import an existing volume or clone a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. Common parameters for provisioning and cloning Provisioning Parameters Cloning Parameters Import Parameters Pod Inline Volume Parameters (Local Ephemeral Volumes) VolumeGroupClass Parameters VolumeSnapshotClass Parameters Backward compatibility with the HPE Nimble Storage FlexVolume driver is being honored to a certain degree. StorageClass API objects need to be rewritten and parameters need to be updated regardless.
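As a hedged sketch (not a definitive example), a StorageClass that pins provisioning to a tenant folder using a few of the parameters documented below could look like the following. The Secret name hpe-backend, the namespace hpe-storage and the folder k8s-cluster-1 are assumptions for this environment, and the per-operation secret references are abbreviated.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-scod-tenant
provisioner: csi.hpe.com
parameters:
  # Assumed Secret holding the tenant credentials; repeat for the remaining
  # per-operation secret references (controller-publish, node-stage, node-publish,
  # controller-expand) as shown in the base StorageClass examples referenced below.
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: "iscsi"
  folder: "k8s-cluster-1"        # hypothetical 1:1 StorageClass to Folder mapping
  destroyOnDelete: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
```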
Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflect the current version and may contain unannounced features and capabilities. Note These are optional parameters unless specified. Common Parameters for Provisioning and Cloning \u00b6 These parameters are mutable between a parent volume and creating a clone from a snapshot. Parameter String Description accessProtocol 1 Text The access protocol to use when accessing the persistent volume (\"fc\" or \"iscsi\"). Defaults to \"iscsi\" when unspecified. destroyOnDelete Boolean Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to \"false\", which means volumes need to be pruned manually. limitIops Integer The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). limitMbps Integer The MB/s throughput limit for the volume between 1 and 4294967294, or -1 for unlimited (default). description Text Text to be added to the volume's description on the array. Empty string by default. performancePolicy 2 Text The name of the performance policy to assign to the volume. Default example performance policies include \"Backup Repository\", \"Exchange 2003 data store\", \"Exchange 2007 data store\", \"Exchange 2010 data store\", \"Exchange log\", \"Oracle OLTP\", \"Other Workloads\", \"SharePoint\", \"SQL Server\", \"SQL Server 2012\", \"SQL Server Logs\". Defaults to the \"default\" performance policy. protectionTemplate 4 Text The name of the protection template to assign to the volume. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". folder Text The name of the folder in which to place the volume. Defaults to the root of the \"default\" pool. thick Boolean Indicates that the volume should be thick provisioned. Defaults to \"false\". dedupeEnabled 3 Boolean Indicates that the volume should enable deduplication. Defaults to \"true\" when available. syncOnDetach Boolean Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Defaults to \"false\". Restrictions applicable when using the CSI volume mutator : 1 = Parameter is immutable and can't be altered after provisioning/cloning. 2 = Performance policies may only be mutated between performance policies with the same block size. 3 = Deduplication may only be mutated within the same performance policy application category and block size. 4 = This parameter was removed in HPE CSI Driver 1.4.0 and replaced with VolumeGroupClasses . Note Performance Policies, Folders and Protection Templates are array OS-specific constructs that can be created on the array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight . Provisioning Parameters \u00b6 These parameters are immutable for both volumes and clones once created; clones will inherit parent attributes. Parameter String Description encrypted Boolean Indicates that the volume should be encrypted. Defaults to \"false\". pool Text The name of the pool in which to place the volume. Defaults to the \"default\" pool. Cloning Parameters \u00b6 Cloning supports two modes. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference an array volume name to clone and import to Kubernetes.
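A rough sketch of the cloneOf mode follows; my-pvc is a hypothetical existing PVC in the namespace where the clone will be requested, and secret references and common parameters are omitted for brevity. The cloning parameters themselves are described in the table that follows.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-scod-clone
provisioner: csi.hpe.com
parameters:
  # Secret references and common parameters omitted for brevity.
  accessProtocol: "iscsi"
  cloneOf: "my-pvc"        # hypothetical source PVC in the requesting namespace
reclaimPolicy: Delete
```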
Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. Import Parameters \u00b6 Importing volumes to Kubernetes requires the source array volume to be offline. In the case of reverse replication, the upstream volume should be in an offline state. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE CSI Driver. Parameter String Description importVolumeName Text The name of the array volume to import. snapshot Text The name of the array snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. takeover Boolean Indicates the current group will take over ownership of the array volume and volume collection. This should be performed against a downstream replica. reverseReplication Boolean Reverses the replication direction so that writes to the array volume are replicated back to the group where it was replicated from. forceImport Boolean Forces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead. Seealso In this HPE Developer blog post you'll learn how to use the import parameters to lift and transform applications from traditional infrastructure to Kubernetes using the HPE CSI Driver. Pod Inline Volume Parameters (Local Ephemeral Volumes) \u00b6 These parameters are applicable only to Pod inline volumes and are to be specified within the Pod spec. Parameter String Description csi.storage.k8s.io/ephemeral Boolean Indicates that the request is for an ephemeral inline volume. This is a mandatory parameter and must be set to \"true\". inline-volume-secret-name Text A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. inline-volume-secret-namespace Text The namespace of inline-volume-secret-name for the ephemeral inline volume. size Text The size of the ephemeral volume, specified in MiB or GiB. If unspecified, a default value will be used. accessProtocol Text Storage access protocol to use, \"iscsi\" or \"fc\". Important All parameters are required for inline ephemeral volumes. VolumeGroupClass Parameters \u00b6 If basic data protection is required and performed on the array, VolumeGroups need to be created, even if it's just a single volume that needs data protection using snapshots and replication. Learn more about VolumeGroups in the provisioning concepts documentation . Parameter String Description description Text Text to be added to the volume collection description on the array. Empty by default. protectionTemplate Text The name of the protection template to assign to the volume collection. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". Empty by default, meaning no array snapshots are performed on the VolumeGroups . New feature VolumeGroupClasses were introduced with version 1.4.0 of the CSI driver.
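For orientation, a minimal VolumeGroupClass sketch that assigns one of the default protection templates is shown here. The storage.hpe.com/v1 API group and the deletionPolicy field are assumptions based on the CRDs shipped with the driver, and the protection template name must exist on the array.

```yaml
apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
  name: hpe-volume-group-protected
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
  # Parameters match the VolumeGroupClass table above.
  description: "Volume collection for snapshot and replication protection"
  protectionTemplate: "Retain-48Hourly-30Daily-52Weekly"
```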
Learn more in the Using section . VolumeSnapshotClass Parameters \u00b6 These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster; this is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description description Text Text to be added to the snapshot's description on the array. writable Boolean Indicates if the snapshot is writable on the array. Defaults to \"false\". online Boolean Indicates if the snapshot is set to online on the array. Defaults to \"false\". Static Provisioning \u00b6 Static provisioning of PVs and PVCs may be used when absolute control over physical volumes is required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass . Persistent Volume \u00b6 Create a PV referencing an existing 10GiB volume on the array; replace .spec.csi.volumeHandle with the array volume ID. Warning If a filesystem can't be detected on the device, a new filesystem will be created. If the volume contains data, make sure the data resides on a whole-device filesystem. apiVersion: v1 kind: PersistentVolume metadata: name: my-static-pv-1 spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi csi: volumeHandle:HPE provides a broad portfolio of products that integrate with Kubernetes and neighboring ecosystems. The following table provides an overview of integrations available for each primary storage platform.
| Ecosystem | HPE Alletra 5000/6000 and Nimble | HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR |
|---|---|---|
| Kubernetes | HPE CSI Driver with Alletra 6000 CSP | HPE CSI Driver with Alletra Storage MP CSP |
Interested in acquiring a persistent storage solution for your Kubernetes project?
| Criteria | HPE Alletra 5000/6000 | HPE Alletra Storage MP |
|---|---|---|
| Availability | 99.9999% | 100% |
| Workloads | Business-critical | Mission-critical |
| Learn more | hpe.com/storage/alletra | hpe.com/storage/greenlake |
Can't find what you're looking for? Check out hpe.com/storage for additional HPE storage platforms.