diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 00000000..e69de29b diff --git a/404.html b/404.html new file mode 100644 index 00000000..a8b77e41 --- /dev/null +++ b/404.html @@ -0,0 +1,217 @@ + + + + + + + + SCOD.HPEDEV.IO + + + + + + + + + + + + + + + +

404

+ +

Page not found

+ + + + + + + + diff --git a/CNAME b/CNAME new file mode 100644 index 00000000..19e61628 --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +scod.hpedev.io diff --git a/container_storage_provider/hpe_3par_primera/index.html b/container_storage_provider/hpe_3par_primera/index.html new file mode 100644 index 00000000..d6094041 --- /dev/null +++ b/container_storage_provider/hpe_3par_primera/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/container_storage_provider/hpe_alletra_6000/index.html b/container_storage_provider/hpe_alletra_6000/index.html new file mode 100644 index 00000000..e3b1df8a --- /dev/null +++ b/container_storage_provider/hpe_alletra_6000/index.html @@ -0,0 +1,756 @@ + + + + + + + + + + + + + + + + + + HPE Alletra 5000/6000 and Nimble - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ +

Introduction

+

The HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider ("CSP") for Kubernetes is the reference implementation for the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important array setup requirements.

+
+

Important

+

For a successful deployment, it's important to understand the array platform requirements found within the CSI driver (compute node OS and Kubernetes versions) and the CSP.

+
+ +
+

Seealso

+

There's a brief introduction on how to use HPE Nimble Storage with the HPE CSI Driver in the Video Gallery. It also applies broadly to HPE Alletra 5000/6000.

+
+

Platform Requirements

+

Always check the corresponding CSI driver version in compatibility and support for the required array Operating System ("OS") version for a particular release of the driver. If a certain feature is gated against a specific version of the array OS, it will be called out where applicable.

+
+

Tip

+

The documentation reflected here always corresponds to the latest supported version and may contain references to future features and capabilities.

+
+

Setting Up the Array

+

How to deploy an HPE storage array is beyond the scope of this document. Please refer to HPE InfoSight for further reading.

+
+

Important

+

The HPE Nimble Storage Linux Toolkit (NLT) is not compatible with the HPE CSI Driver for Kubernetes. Do not install NLT on Kubernetes compute nodes. It may be installed on Kubernetes control plane nodes if they use iSCSI or FC storage from the array.

+
+

Single Tenant Deployment

+

The CSP requires access to a user with either the poweruser or administrator role. It's recommended to use the poweruser role to follow least-privilege practices.

+
+

Tip

+

It's highly recommended to deploy a multitenant setup.

+
+

Multitenant Deployment

+

In array OS 6.0.0 and newer it's possible to create separate tenants using the tenantadmin CLI and assign folders to each tenant. This creates a secure and logical separation of storage resources between Kubernetes clusters.

+

No special configuration is needed on the Kubernetes cluster when using a tenant account or a regular user account. From a provisioning perspective, it's important to understand that if the tenant account in use has been assigned multiple folders, the CSP will pick the folder with the most space available. If this is not desirable and a 1:1 StorageClass-to-folder mapping is needed, the "folder" parameter must be specified in the StorageClass, as shown in the sketch below.

+
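As an illustration, here's a minimal sketch of a StorageClass pinned to a specific folder. The folder name "k8s-tenant-a" and the StorageClass name are assumptions for this example, and the Secret references (csi.storage.k8s.io/*-secret-name and -namespace) are omitted for brevity; see using the HPE CSI Driver for complete base StorageClass examples.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-a-folder
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: iscsi
  folder: k8s-tenant-a   # assumed folder name assigned to the tenant
reclaimPolicy: Delete
allowVolumeExpansion: true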

For reference, as of array OS 6.0.0, this is the tenantadmin command synopsis.

+

$ tenantadmin --help
+Usage: tenantadmin [options]
+Manage Tenants.
+
+Available options are:
+  --help                           Program help.
+
+  --list                           List Tenants.
+
+  --info name                      Tenant info.
+
+  --add tenant_name                Add a tenant.
+    --folders folders              List of folder paths (comma separated
+                                   pool_name:fqn) the tenant will be able to
+                                   access (mandatory).
+
+  --remove name                    Remove a tenant.
+
+  --add_folder tenant_name         Add a folder path for tenant access.
+    --name folder_name             Name of the folder path (pool_name:fqn) to
+                                   be added (mandatory).
+
+  --remove_folder tenant_name      Remove a folder path from tenant access.
+    --name folder_name             Name of the folder path (pool_name:fqn) to
+                                   be removed (mandatory).
+
+  --passwd                         Change tenant's login password.
+    --tenant name                  Change a specific tenant's login password
+                                   (mandatory).
+

+
+

Caution

+

The tenantadmin command may only be run by local array OS administrators. LDAP or Active Directory accounts, regardless of role, are not supported.

+
+
    +
  • Visit the array admin guide on HPE InfoSight to learn more about how to use the tenantadmin CLI.
  • +
+
Tenant Limitations
+

Some features are limited or restricted in a multitenant deployment, such as arbitrarily importing volumes from folders on the array that the tenant doesn't have access to. A few less obvious limitations are listed below.

+
    +
  • CHAP is configured globally for the CSI driver. The CSI driver is contracted to create the CHAP user if it doesn't exist. When used with a tenant, it's important that the CHAP user does not already exist, as tenants may not share CHAP users among themselves or with the admin account.
  • +
  • Both ports 443 and 5392 need to be exposed to the Kubernetes cluster in multitenant deployments.
  • +
+
+

Seealso

+

An in-depth tutorial on how to use multitenancy and the tenantadmin CLI is available on HPE Developer: Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble Storage. There's also a high level overview of multitenancy available as a lightboard presentation on YouTube.

+
+

Limitations

+

Consult the compatibility and support table for supported array OS versions. CSI and CSP specific limitations are listed below.

+
    +
  • Striped volumes on grouped arrays are not supported by the CSI driver.
  • +
  • The CSP is not capable of provisioning or importing volumes protected by Peer Persistence.
  • +
  • When using an FC only array and provisioning RWX block volumes, the "multi_initiator" attribute won't get set properly on the volume. The workaround is to run group --edit --iscsi_enabled yes on the Array OS CLI.
  • +
+

StorageClass Parameters

+

A StorageClass is used to provision or clone a persistent volume. It can also be used to import an existing volume or clone a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.

+ +

Backward compatibility with the HPE Nimble Storage FlexVolume driver is honored to a certain degree. StorageClass API objects need to be rewritten and parameters need to be updated regardless.

+

Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflect the current version and may include unannounced features and capabilities.

+
+

Note

+

These parameters are optional unless otherwise specified.

+
+

Common Parameters for Provisioning and Cloning

+

These parameters are mutable between a parent volume and a clone created from a snapshot.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
accessProtocol1TextThe access protocol to use when accessing the persistent volume ("fc" or "iscsi"). Defaults to "iscsi" when unspecified.
destroyOnDeleteBooleanIndicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to "false" which means volumes need to be pruned manually.
limitIopsIntegerThe IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default).
limitMbpsIntegerThe MB/s throughput limit for the volume between 1 and 4294967294, or -1 for unlimited (default).
descriptionTextText to be added to the volume's description on the array. Empty string by default.
performancePolicy2TextThe name of the performance policy to assign to the volume. Default example performance policies include "Backup Repository", "Exchange 2003 data store", "Exchange 2007 data store", "Exchange 2010 data store", "Exchange log", "Oracle OLTP", "Other Workloads", "SharePoint", "SQL Server", "SQL Server 2012", "SQL Server Logs". Defaults to the "default" performance policy.
protectionTemplate4TextThe name of the protection template to assign to the volume. Default examples of protection templates include "Retain-30Daily", "Retain-48Hourly-30Daily-52Weekly", and "Retain-90Daily".
folderTextThe name of the folder in which to place the volume. Defaults to the root of the "default" pool.
thickBooleanIndicates that the volume should be thick provisioned. Defaults to "false"
dedupeEnabled3BooleanIndicates that the volume should enable deduplication. Defaults to "true" when available.
syncOnDetachBooleanIndicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Defaults to "false".
+

+ Restrictions applicable when using the CSI volume mutator: +
1 = Parameter is immutable and can't be altered after provisioning/cloning. +
2 = Performance policies may only be mutated between performance polices with the same block size. +
3 = Deduplication may only be mutated within the same performance policy application category and block size. +
4 = This parameter was removed in HPE CSI Driver 1.4.0 and replaced with VolumeGroupClasses. +

+
+
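To illustrate how a few of the common parameters compose, here's a hedged sketch of a StorageClass (all values are examples only and the Secret references are omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-performance-tier
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: iscsi
  performancePolicy: "SQL Server"
  limitIops: "25000"
  destroyOnDelete: "true"
  description: "Provisioned by the HPE CSI Driver"
reclaimPolicy: Delete
allowVolumeExpansion: true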

Note

+

Performance Policies, Folders and Protection Templates are array OS specific constructs that can be created on the array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight.

+
+

Provisioning Parameters

+

These parameters are immutable for both volumes and clones once created; clones will inherit the parent's attributes.

+ + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
encryptedBooleanIndicates that the volume should be encrypted. Defaults to "false".
poolTextThe name of the pool in which to place the volume. Defaults to the "default" pool.
+

Cloning Parameters

+

Cloning supports two modes. Either use cloneOf and reference a PVC in the current namespace, or use importVolAsClone and reference an array volume name to clone and import into Kubernetes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
cloneOfTextThe name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive.
importVolAsCloneTextThe name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive.
snapshotTextThe name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created.
createSnapshotBooleanIndicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created.
+
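For example, a sketch of a StorageClass that clones an existing PVC named "my-source-pvc" in the requesting namespace (the PVC name is an assumption and Secret references are omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-clones
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: iscsi
  cloneOf: my-source-pvc
reclaimPolicy: Delete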

Import Parameters

+

Importing volumes to Kubernetes requires the source array volume to be offline. In the case of reverse replication, the upstream volume should be offline. All previous Access Control Records and Initiator Groups will be stripped from the volume when it is put under control of the HPE CSI Driver.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
importVolumeNameTextThe name of the array volume to import.
snapshotTextThe name of the array snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored.
takeoverBooleanIndicates the current group will takeover ownership of the array volume and volume collection. This should be performed against a downstream replica.
reverseReplicationBooleanReverses the replication direction so that writes to the array volume are replicated back to the group where it was replicated from.
forceImportBooleanForces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead.
+
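As a sketch, a StorageClass that imports an offline array volume named "prod-oracle-data" (an assumed volume name) might look like the following. A "Retain" reclaim policy is a common choice for imports so the Kubernetes PV object is kept if the claim is deleted.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: import-prod-oracle-data
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: fc
  importVolumeName: prod-oracle-data
  forceImport: "false"
reclaimPolicy: Retain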
+

Seealso

+

In this HPE Developer blog post you'll learn how to use the import parameters to lift and transform applications from traditional infrastructure to Kubernetes using the HPE CSI Driver.

+
+

Pod Inline Volume Parameters (Local Ephemeral Volumes)

+

These parameters are applicable only to Pod inline volumes and are to be specified within the Pod spec.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
csi.storage.k8s.io/ephemeralBooleanIndicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to "true".
inline-volume-secret-nameTextA reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call.
inline-volume-secret-namespaceTextThe namespace of inline-volume-secret-name for ephemeral inline volume.
sizeTextThe size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used.
accessProtocolTextStorage access protocol to use, "iscsi" or "fc".
+
+

Important

+

All parameters are required for inline ephemeral volumes.

+
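Putting the table together, here's a hedged sketch of a Pod requesting an inline ephemeral volume. The Secret "hpe-backend" in the "hpe-storage" namespace and the container image are assumptions for this example.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-inline
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:
      driver: csi.hpe.com
      volumeAttributes:
        csi.storage.k8s.io/ephemeral: "true"
        inline-volume-secret-name: hpe-backend
        inline-volume-secret-namespace: hpe-storage
        size: "7GiB"
        accessProtocol: "iscsi"

The ephemeral volume follows the Pod lifecycle; it is created when the Pod is scheduled and removed when the Pod is deleted.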
+

VolumeGroupClass Parameters

+

If basic data protection is required and performed on the array, a VolumeGroup needs to be created, even if it's just a single volume that needs data protection using snapshots and replication. Learn more about VolumeGroups in the provisioning concepts documentation.

+ + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
descriptionTextText to be added to the volume collection description on the array. Empty by default.
protectionTemplateTextThe name of the protection template to assign to the volume collection. Default examples of protection templates include "Retain-30Daily", "Retain-48Hourly-30Daily-52Weekly", and "Retain-90Daily". Empty by default, meaning no array snapshots are performed on the VolumeGroups.
+
+

New feature

+

VolumeGroupClasses were introduced with version 1.4.0 of the CSI driver. Learn more in the Using section.

+
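Here's a sketch of a VolumeGroupClass assigning a protection template, modeled on the VolumeGroupClass QoS example shown for the HPE Alletra Storage MP platforms elsewhere in this documentation. The Secret reference "hpe-backend" in the "hpe-storage" namespace is an assumption.

apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
  name: my-volume-group-class
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: "HPE CSI Driver for Kubernetes Volume Group"
  protectionTemplate: "Retain-48Hourly-30Daily-52Weekly"
  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage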
+

VolumeSnapshotClass Parameters

+

These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information.

+

How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
descriptionTextText to be added to the snapshot's description on the array.
writableBooleanIndicates if the snapshot is writable on the array. Defaults to "false".
onlineBooleanIndicates if the snapshot is set to online on the array. Defaults to "false".
+
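For illustration, here's a sketch of a VolumeSnapshotClass using these parameters. The snapshotter Secret reference is an assumption and follows the standard external-snapshotter convention.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: "Snapshot created by the HPE CSI Driver"
  writable: "false"
  online: "false"
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage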

Static Provisioning

+

Static provisioning of PVs and PVCs may be used when absolute control over physical volumes is required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass.

+

Persistent Volume

+

Create a PV referencing an existing 10GiB volume on the array, replacing .spec.csi.volumeHandle with the array volume ID.

+
+

Warning

+

If a filesystem can't be detected on the device, a new filesystem will be created. If the volume contains data, make sure the data resides in a whole-device filesystem.

+
+

apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-static-pv-1
+spec:
+  accessModes:
+  - ReadWriteOnce
+  capacity:
+    storage: 10Gi
+  csi:
+    volumeHandle: <insert volume ID here>
+    driver: csi.hpe.com
+    fsType: xfs
+    volumeAttributes:
+      volumeAccessMode: mount
+      fsType: xfs
+    controllerPublishSecretRef:
+      name: hpe-backend
+      namespace: hpe-storage
+    nodePublishSecretRef:
+      name: hpe-backend
+      namespace: hpe-storage
+    controllerExpandSecretRef:
+      name: hpe-backend
+      namespace: hpe-storage
+  persistentVolumeReclaimPolicy: Retain
+  volumeMode: Filesystem
+

+
+

Tip

+

Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion.

+
+

Persistent Volume Claim

+

Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  volumeName: my-static-pv-1
+  storageClassName: ""
+

+ + + + + + + + diff --git a/container_storage_provider/hpe_alletra_9000/index.html b/container_storage_provider/hpe_alletra_9000/index.html new file mode 100644 index 00000000..d6094041 --- /dev/null +++ b/container_storage_provider/hpe_alletra_9000/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/container_storage_provider/hpe_alletra_storage_mp/index.html b/container_storage_provider/hpe_alletra_storage_mp/index.html new file mode 100644 index 00000000..496f1abf --- /dev/null +++ b/container_storage_provider/hpe_alletra_storage_mp/index.html @@ -0,0 +1,957 @@ + + + + + + + + + + + + + + + + + + HPE Alletra Storage MP and Alletra 9000/Primera/3PAR - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ +

Introduction

+

The HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage Container Storage Provider (CSP) for Kubernetes is part of the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes.

+
+

Note

+

The HPE CSI Driver for Kubernetes is only compatible with HPE Alletra Storage MP running with block services, such as HPE GreenLake for Block Storage.

+
+ +
+

Note

+

For help getting started with deploying the HPE CSI Driver using HPE Alletra Storage MP, Alletra 9000, Primera or 3PAR storage, check out the tutorial over at HPE Developer.

+
+

Platform Requirements

+

Check the corresponding CSI driver version in the compatibility and support table for the latest updates on supported Kubernetes version, orchestrators and host OS.

+ + +

Network Port Requirements

+

The HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Container Storage Provider requires the following TCP ports to be open inbound to the array from the Kubernetes cluster worker nodes running the HPE CSI Driver for Kubernetes.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
PortProtocolDescription
443HTTPSWSAPI (HPE Alletra Storage MP, Alletra 9000/Primera)
8080HTTPSWSAPI (HPE 3PAR)
22SSHArray communication
+

User Role Requirements

+

The CSP requires access to a local user with either the edit or super role. It's recommended to use the edit role to follow security best practices.

+
+

Note

+

LDAP users are not supported by the CSP.

+
+

Virtual Domains

+

Virtual Domains are not yet fully supported by the CSP. From HPE CSI Driver v2.5.0, it's possible to manually create the Kubernetes hosts connecting to storage within the Virtual Domain. Once the hosts have been created, deploy the CSI driver with the Helm chart using the "disableHostDeletion" parameter set to "true". The Virtual Domain user may create the hosts through the Virtual Domain if the "AllowDomainUsersAffectNoDomain" parameter is set to either "hostonly" or "yes" on the array.

+
+

Note

+

Remote Copy Groups managed by the CSP have not been tested with Virtual Domains at this time.

+
+

VLUN Templates

+

A VLUN template enables the export of a virtual volume as a VLUN to hosts. For more information, see the HPE Primera OS Command Line Interface - Installation and Reference Guide.

+

The CSP supports the following types of VLUN templates:

+ + + + + + + + + + + + + + + + + +
TemplateDescription
Matched setThe default VLUN template. The VLUN is visible to initiators with the host's WWNs only on the specified port(s).
Host seesThe VLUN is visible to the initiators with any of the host's WWNs.
+

The boolean string "hostSeesVLUN" StorageClass parameter controls which VLUN template to use.

+
+

Recommendation

+

In most scenarios, "hostSeesVLUN" should be set to "true".

+
+

Change VLUN Template for existing PVCs

+

To modify an existing PVC, "hostSeesVLUN" needs to be specified with the "allowMutations" parameter along with adding the PVC annotation "csi.hpe.com/hostSeesVLUN" with the string values of either "true" or "false". The HPE CSI Driver creates the VLUN template based upon the hostSeesVLUN parameter during the volume publish operation. For the change to take effect, the Pod will need to be scheduled on another node by either deleting the Pod or draining the node.

+
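As a sketch, assuming the StorageClass already lists "hostSeesVLUN" in its "allowMutations" parameter, the PVC annotation might look like this (the PVC and StorageClass names are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  annotations:
    csi.hpe.com/hostSeesVLUN: "true"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: my-storageclass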

StorageClass Parameters

+

All parameters enumerated reflect the current version and may include unannounced features and capabilities.

+

Common Provisioning Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter  Option  Description
accessProtocol (Required)fc or iscsiThe access protocol to use when attaching the persistent volume.
cpg 1TextThe name of existing CPG to be used for volume provisioning. If the cpg parameter is not specified, the CSP will select a CPG available to the array.
snapCpg 1TextThe name of the snapshot CPG to be used for volume provisioning. Needs to be set if any kind of VolumeSnapshots or PVC cloning parameters are used.
compression 1BooleanIndicates that the volume should be compressed. (3PAR only)
provisioningType 1tpvvDefault. Indicates Thin provisioned volume type.
full 3Indicates Full provisioned volume type.
dedup 3Indicates Thin Deduplication volume type.
reduce 4Indicates Data Reduction volume type.
hostSeesVLUNBooleanEnable "host sees" VLUN template.
importVolumeNameTextName of the volume to import.
importVolAsCloneTextName of the volume to clone and import.
cloneOf 2TextName of the PersistentVolumeClaim to clone.
virtualCopyOf 2TextName of the PersistentVolumeClaim to snapshot.
qosNameTextName of the volume set which has QoS rules applied.
remoteCopyGroup 1TextName of a new or existing Remote Copy group on the array.
replicationDevicesTextIndicates name of custom resource of type hpereplicationdeviceinfos.
allowBatchReplicatedVolumeCreationBooleanEnable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group.
During this process, the Remote Copy group is stopped and started once.
oneRcgPerPvcBooleanCreates a dedicated Remote Copy group per persistent volume.
iscsiPortalIpsTextComma separated list of the array iSCSI port IPs.
fcPortsListTextComma separated list of available FC ports. Example: "0:5:1,1:4:2,2:4:1,3:4:2" Default: Use all available ports.
+

+ Restrictions applicable when using the CSI volume mutator: +
1 = Parameters that are editable after provisioning. +
2 = Volumes with snapshots/clones can't be modified. +
3 = HPE 3PAR only parameter +
4 = HPE Primera/Alletra 9000 only parameter +

+
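To show how a few of these parameters fit together, here's a hedged sketch of a StorageClass (the CPG names are assumptions and Secret references are omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-block-tier
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: iscsi
  cpg: SSD_r6
  snapCpg: SSD_r6
  provisioningType: tpvv
  hostSeesVLUN: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true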

Please see using the HPE CSI Driver for additional StorageClass examples like CSI snapshots and clones.

+
+

Important

+

The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim. Please see Using PVC Overrides for more details.

+
+

Cloning Parameters

+

Cloning supports two modes. Either use cloneOf and reference a PersistentVolumeClaim in the current namespace to clone, or use importVolAsClone and reference an array volume name to clone and import into the Kubernetes cluster. Volumes with clones are immutable once created.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterOptionDescription
cloneOfTextThe name of the PersistentVolumeClaim to be cloned. cloneOf and importVolAsClone are mutually exclusive.
importVolAsCloneTextThe name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive.
accessProtocolfc or iscsiThe access protocol to use when attaching the cloned volume.
+
+

Important

+

No other parameters are required in the StorageClass while cloning outside of those parameters listed in the table above.
+• Cloning using above parameters is independent of snapshot CRD availability on Kubernetes and it can be performed on any supported Kubernetes version.
+• Support for importVolAsClone and cloneOf is available from HPE CSI Driver 1.3.0+.

+
+

Array Snapshot Parameters

+

During the snapshotting process, the existing PersistentVolumeClaim defined in the virtualCopyOf parameter within a StorageClass will be snapshotted as a new PersistentVolumeClaim, exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Volumes with snapshots are immutable once created.

+ + + + + + + + + + + + + + + + + + + + +
ParameterOptionDescription
accessProtocolfc or iscsiThe access protocol to use when attaching the snapshot volume.
virtualCopyOfTextThe name of the existing PersistentVolumeClaim to be snapshotted.
+
+

Important

+

No other parameters are required in the StorageClass when snapshotting a volume outside of those parameters listed in the table above.
+• Snapshotting using virtualCopyOf is independent of snapshot CRD availability on Kubernetes and it can be performed on any supported Kubernetes version.
+• Support for virtualCopyOf is available from HPE CSI Driver 1.3.0+.

+
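For example, here's a sketch of a StorageClass that snapshots an existing PersistentVolumeClaim named "my-source-pvc" (an assumed name; Secret references omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-array-snapshots
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  accessProtocol: iscsi
  virtualCopyOf: my-source-pvc
reclaimPolicy: Delete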
+

Import Parameters

+

During the import volume process, any legacy (non-containerized) volume defined in the importVolumeName parameter within a StorageClass will be renamed to match the PersistentVolumeClaim that leverages the StorageClass. The new volumes will be exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Note: All previous Access Control Records and Initiator Groups will be removed from the volume when it is imported.

+ + + + + + + + + + + + + + + + + + + + +
ParameterOptionDescription
accessProtocolfc or iscsiThe access protocol to use when importing the volume.
importVolumeNameTextThe name of the array volume to import.
+
+

Important

+

No other parameters are required in the StorageClass when importing a volume outside of those parameters listed in the table above.
+• Support for importVolumeName is available from HPE CSI Driver 1.2.0+.

+
+

Remote Copy with Peer Persistence Synchronous Replication Parameters

+

To enable replication within the HPE CSI Driver, the following steps must be completed:

+ +

For a tutorial on how to enable replication, check out the blog Enabling Remote Copy using the HPE CSI Driver for Kubernetes on HPE Primera

+

A Custom Resource Definition (CRD) of type hpereplicationdeviceinfos.storage.hpe.com must be created to define the target array information. The CRD object name will be used to define the StorageClass parameter replicationDevices. CRD mandatory parameters: targetCpg, targetName, targetSecret and targetSecretNamespace.

+
apiVersion: storage.hpe.com/v2
+kind: HPEReplicationDeviceInfo
+metadata:
+  name: r1
+spec:
+  target_array_details:
+  - targetCpg: <cpg_name>
+    targetSnapCpg: <snapcpg_name> #optional.
+    targetName: <target_array_name>
+    targetSecret: <target_secret_name>
+    targetSecretNamespace: hpe-storage
+
apiVersion: storage.hpe.com/v1
+kind: HPEReplicationDeviceInfo
+metadata:
+  name: r1
+spec:
+  target_array_details:
+  - targetCpg: <cpg_name>
+    targetSnapCpg: <snapcpg_name> #optional.
+    targetName: <target_array_name>
+    targetSecret: <target_secret_name>
+    targetSecretNamespace: hpe-storage
+
+
+

Important

+

The HPE CSI Driver only supports Remote Copy Peer Persistence mode.

+
+

These parameters are applicable only for replication. Both parameters are mandatory. If the Remote Copy volume group (RCG) name, as defined within the StorageClass, does not exist on the array, then a new RCG will be created.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterOptionDescription
remoteCopyGroupTextName of new or existing Remote Copy group 1 on the array.
replicationDevicesTextIndicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD).
allowBatchReplicatedVolumeCreationBooleanEnable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. (Optional)
During this process, the Remote Copy group is stopped and started once.
oneRcgPerPvcBooleanCreates a dedicated Remote Copy group per persistent volume. (Optional)
+

+ Remote Copy additional details: +
1 = Existing RCG must have CPG and Copy CPG configured. +
Link to HPE Primera OS: Configuring data replication using Remote Copy +

+
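A hedged sketch of a replication-enabled StorageClass referencing the "r1" HPEReplicationDeviceInfo resource from the example above (the Remote Copy group name and CPG are assumptions; Secret references omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-replicated-tier
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: iscsi
  cpg: SSD_r6
  remoteCopyGroup: my-rcg
  replicationDevices: r1
reclaimPolicy: Delete
allowVolumeExpansion: true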
+

Important

+

Remote Copy groups (RCG) created by the HPE CSI driver 2.1 and later have the Auto synchronize and Auto recover policies applied.
To add or remove these policies from RCGs, modify the existing RCG using the SSMC or CLI with the following command:

Add
setrcopygroup pol auto_recover,auto_synchronize <group_name>
Remove
setrcopygroup pol no_auto_recover,no_auto_synchronize <group_name>

+
+

Add Non-Replicated Volume to Remote Copy group

+

To add a non-replicated volume to an existing Remote Copy group, allowMutations: description at minimum must be defined within the StorageClass. Refer to Remote Copy with Peer Persistence Replication for more details.

+

Edit the non-replicated PVC and annotate the following parameters:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterOptionDescription
remoteCopyGroupTextName of existing Remote Copy group.
oneRcgPerPvcBooleanCreates a dedicated Remote Copy group per persistent volume. (Optional)
replicationDevicesTextIndicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD).
+
+

Note

+

remoteCopyGroup and oneRcgPerPvc parameters are mutually exclusive and cannot be added together when editing a PVC.

+
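As a sketch, the annotations on the existing PVC might look like the following. The "csi.hpe.com/" annotation prefix follows the PVC override convention used elsewhere in this documentation and the values are assumptions; remember that remoteCopyGroup and oneRcgPerPvc must not be combined.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-existing-pvc
  annotations:
    csi.hpe.com/remoteCopyGroup: my-rcg    # assumed existing Remote Copy group
    csi.hpe.com/replicationDevices: r1     # assumed hpereplicationdeviceinfos resource
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: my-storageclass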
+

VolumeSnapshotClass Parameters

+

These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable.

+

How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots.

+ + + + + + + + + + + + + + + +
ParameterStringDescription
read_onlyBooleanIndicates if the snapshot is writable on the array.
+

VolumeGroupClass Parameters

+

In the HPE CSI Driver version 1.4.0+, a volume set with QoS settings can be created dynamically using the QoS parameters for the VolumeGroupClass. The following parameters are available for a VolumeGroup on the array. Learn more about VolumeGroups in the provisioning concepts documentation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
descriptionTextAn identifier to describe the VolumeGroupClass. Example: "My VolumeGroupClass"
priorityTextThe priority level for the target volume set. Example: "low", "normal", "high"
ioMinGoalTextIOPS minimum goal for the target volume set. Example: "300"
ioMaxLimitTextIOPS maximum limit for the target volume set. Example: "10000"
bwMinGoalKbTextBandwidth minimum goal in kilobytes per second for the target volume set. Example: "300"
bwMaxLimitKbTextBandwidth maximum limit in kilobytes per second for the target volume set. Example: "30000"
latencyGoalTextLatency goal in milliseconds (ms) or microseconds(us) for the target volume set. Example: "300ms" or "500us"
domainTextThe array Virtual Domain with which the volume group and related objects are associated. Example: "sample_domain"
+
+

Important

+

All QoS parameters are mandatory when creating a VolumeGroupClass on the array.

+
+

Example:

+

apiVersion: storage.hpe.com/v1
+kind: VolumeGroupClass
+metadata:
+  name: my-volume-group-class
+provisioner: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+  description: "HPE CSI Driver for Kubernetes Volume Group"
+  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
+  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
+  priority: normal
+  ioMinGoal: "300"
+  ioMaxLimit: "10000"
+  bwMinGoalKb: "3000"
+  bwMaxLimitKb: "30000"
+  latencyGoal: "300ms"
+

+

SnapshotGroupClass Parameters

+

These parameters are for SnapshotGroupClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable.

+

How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots.

+ + + + + + + + + + + + + + + +
ParameterStringDescription
read_onlyBooleanIndicates if the snapshot is writable on the array.
+

Static Provisioning

+

Static provisioning of PVs and PVCs may be used when absolute control over physical volumes are required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass.

+

Prerequisites

+

The CSP expects a certain naming convention for PersistentVolumes and Virtual Volumes on the array.

+
    +
  • Persistent Volume: pvc-00000000-0000-0000-0000-000000000000
  • +
  • Virtual Volume: pvc-00000000-0000-0000-0000-000
  • +
+
+

Note

+

The zeroes are used as examples. They can be replaced with any hexadecimal from 0 to f. Establishing a scheme may be important if static provisioning is going to be the main method of providing persistent storage to workloads.

+
+

The following example uses the above scheme as a naming convention. Have a storage administrator rename the existing Virtual Volume on the array:

+

setvv -name pvc-00000000-0000-0000-0000-000 my-existing-virtual-volume
+

+

HPEVolumeInfo

+

Create a new HPEVolumeInfo resource.

+

apiVersion: storage.hpe.com/v2
+kind: HPEVolumeInfo
+metadata:
+  name: pvc-00000000-0000-0000-0000-000000000000
+spec:
+  record:
+    Id: pvc-00000000-0000-0000-0000-000000000000
+    Name: pvc-00000000-0000-0000-0000-000
+  uuid: pvc-00000000-0000-0000-0000-000000000000
+

+

Persistent Volume

+

Create a PV referencing the HPEVolumeInfo resource.

+
+

Warning

+

If a filesystem can't be detected on the device, a new filesystem will be created. If the volume contains data, make sure the data resides in a whole-device filesystem.

+
+

apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pvc-00000000-0000-0000-0000-000000000000
+spec:
+  accessModes:
+  - ReadWriteOnce
+  capacity:
+    storage: 16Gi
+  csi:
+    volumeHandle: pvc-00000000-0000-0000-0000-000000000000
+    driver: csi.hpe.com
+    fsType: xfs
+    volumeAttributes:
+      volumeAccessMode: mount
+      fsType: xfs
+    controllerPublishSecretRef:
+      name: hpe-backend
+      namespace: hpe-storage
+    nodePublishSecretRef:
+      name: hpe-backend
+      namespace: hpe-storage
+    controllerExpandSecretRef:
+      name: hpe-backend
+      namespace: hpe-storage
+  persistentVolumeReclaimPolicy: Retain
+  volumeMode: Filesystem
+

+
+

Tip

+

Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion.

+
+

Persistent Volume Claim

+

Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 16Gi
+  volumeName: pvc-00000000-0000-0000-0000-000000000000
+  storageClassName: ""
+

+

Support

+

Please refer to the HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage CSP support statement.

+ + + + + + + + diff --git a/container_storage_provider/hpe_cloud_volumes/index.html b/container_storage_provider/hpe_cloud_volumes/index.html new file mode 100644 index 00000000..b42bce4a --- /dev/null +++ b/container_storage_provider/hpe_cloud_volumes/index.html @@ -0,0 +1,559 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Introduction

+

The HPE Cloud Volumes CSP integrates seamlessly with the HPE Cloud Volumes Block service in the public cloud. The CSP abstracts the data management capabilities of the storage service for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important HPE Cloud Volumes Block configuration details.

+
+

Important

+

The HPE Cloud Volumes CSP is currently in beta and available as a Tech Preview on Amazon EKS only. Please see the 1.5.0-beta Helm chart.

+
+ +
+

Seealso

+

There's a Tech Preview available in the Video Gallery on how to get started with the HPE Cloud Volumes CSP with the HPE CSI Driver.

+
+

Cloud requirements

+

Always check the corresponding CSI driver version in compatibility and support for basic requirements (such as supported Kubernetes version and cloud instance OS). If a certain feature is gated against any particular cloud provider it will be called out where applicable.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
HyperscalerManaged KubernetesBYO KubernetesStatus
Amazon Web ServicesElastic Kubernetes Service (EKS)N/ATech Preview
Microsoft AzureAzure Kubernetes Service (AKS)TBATBA
Google CloudGoogle Kubernetes Engine (GKE)TBATBA
+

Additional hyperscaler support and BYO capabilities may become available in a future release of the CSP.

+

Instance metadata

+

Kubernetes compute nodes will need to have access to the cloud provider's metadata services. This varies by cloud provider and is taken care of automatically by the HPE Cloud Volumes CSP. The provided values may be overridden in the StorageClass; see common parameters for more information.

+

Available regions

+

The HPE Cloud Volumes CSP may be deployed in the regions where the managed Kubernetes service control planes intersect with the HPE Cloud Volumes Block service.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
RegionEKSAzureGoogle
Americasus-east-1, us-west-2TBATBA
Europeeu-west-1, eu-west-2TBATBA
Asia Pacificap-northeast-1TBATBA
+

Consider this table a snapshot of a particular moment in time and consult with the respective hyperscalers and the HPE Cloud Volumes Block service for definitive availability.

+
+

Note

+

In other regions where HPE Cloud Volumes provides services, such as us-west-1, but the cloud provider has no managed Kubernetes service, BYO Kubernetes will be the only available option once it becomes a supported feature of the CSP.

+
+

Limitations

+

Consult the compatibility and support table for generic limitations and requirements. CSI and CSP specific limitations with HPE Cloud Volumes Block are listed below.

+
    +
  • The Volume Group Provisioner and Volume Group Snapshotter sidecars are currently not implemented in the HPE Cloud Volumes CSP.
  • +
  • The base CSI driver parameter description is ignored by the CSP.
  • +
  • In some cases, a "regionID" needs to be supplied in the StorageClass and in conjunction with Ephemeral Inline Volumes. The "regionID" may only be found through the APIs. Join us on Slack if you're hitting this issue (it can be seen in the CSP logs).
  • +
+
+

Tip

+

While not a limitation, iSCSI CHAP is mandatory with HPE Cloud Volumes but does not need to be configured within the CSI driver. The CHAP credentials are queried through the REST APIs from the HPE Cloud Volumes account session and applied automatically during runtime.

+
+

StorageClass parameters

+

A StorageClass is used to provision or clone an HPE Cloud Volumes Block-backed persistent volume. It can also be used to import an existing Cloud Volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.

+ +

Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflect the current version and may include unannounced features and capabilities.

+
+

Note

+

All parameters are optional unless documented as mandatory for a particular use case.

+
+

Common parameters for provisioning and cloning

+

These parameters are mutable between a parent volume and creating a clone from a snapshot.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
destroyOnDeleteBooleanIndicates the backing Cloud Volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to "false" which means volumes need to be pruned manually in the Cloud Volumes service.
limitIopsIntegerThe IOPS limit of the volume. The IOPS limit should be in the range 300 (default) to 20000.
performancePolicy1TextThe name of the performance policy to assign to the volume. Available performance policies: "Exchange", "Oracle", "SharePoint", "SQL", "Windows File Server". Defaults to "Other Workloads".
scheduleTextSnapshot schedule to assign to the volumes. Available schedules: "hourly", "daily", "twicedaily", "weekly", "monthly", "none". Defaults to "daily".
retentionPolicyIntegerRetention policy to assign to the schedule. The parameter must be paired properly with the schedule.

  • hourly: 6, 12, 24
  • daily: 3, 7, 14
  • twicedaily: 4, 8, 14
  • weekly: 2, 4, 8
  • monthly: 3, 6, 12
Defaults to "3" paired with the "daily" retentionPolicy.
privateCloud1TextOverride the compute instance provided VPC/VNET.
existingCloudSubnet1TextOverride the compute instance provided subnet.
automatedConnection1BooleanOverride the HPE Cloud Volumes configured setting for connection automation. Connections between HPE Cloud Volumes and the desired VPC/VNET needs to be provisioned manually if set to "false".
+

+ Restrictions applicable when using the CSI volume mutator: +
1 = Parameter is immutable and can't be altered after provisioning/cloning. +

+
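As an illustration, here's a hedged sketch combining a few of these parameters (values are examples only; Secret references omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cv-general
provisioner: csi.hpe.com
parameters:
  # Secret references omitted for brevity.
  csi.storage.k8s.io/fstype: xfs
  limitIops: "1000"
  performancePolicy: "SQL"
  schedule: "daily"
  retentionPolicy: "7"
  destroyOnDelete: "true"
reclaimPolicy: Delete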

Provisioning parameters

+

These parameters are immutable for both volumes and clones once created; clones will inherit the parent's attributes.

+ + + + + + + + + + + + + + + +
ParameterStringDescription
volumeTypeTextVolume type, General Purpose Flash ("GPF") or Premium Flash ("PF"). Defaults to "PF"
+

Pod inline volume parameters (Local Ephemeral Volumes)

+

These parameters are applicable only to Pod inline volumes and are to be specified within the Pod spec.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
csi.storage.k8s.io/ephemeralBooleanIndicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to "true".
inline-volume-secret-nameTextA reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call.
inline-volume-secret-namespaceTextThe namespace of inline-volume-secret-name for ephemeral inline volume.
sizeTextThe size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used.
+
+

Important

+

All parameters are required for inline ephemeral volumes.

+
+

Cloning parameters

+

Cloning supports two modes. Either use cloneOf and reference a PVC in the current namespace, or use importVolAsClone and reference a Cloud Volume name to clone and import into Kubernetes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
cloneOfTextThe name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive.
importVolAsCloneTextThe name of the Cloud Volume to clone and import. importVolAsClone and cloneOf are mutually exclusive.
snapshotTextThe name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created.
createSnapshotBooleanIndicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created.
replStoreTextName of the Cloud Volume Replication Store to look for volumes, defaults to look outside of Replication Stores
+

Import parameters

+

Importing volumes to Kubernetes requires the source Cloud Volume to be disconnected.

+ + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
importVolumeNameTextThe name of the Cloud Volume to import.
forceImportBooleanAllows import of volumes created on a different Kubernetes cluster other than the one importing the volume to.
+

VolumeSnapshotClass parameters

+

These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information.

+

How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots.

+ + + + + + + + + + + + + + + +
ParameterStringDescription
descriptionTextText to be added to the snapshot's description in the Cloud Volume service (optional)
+ + + + + + + + diff --git a/container_storage_provider/hpe_nimble_storage/index.html b/container_storage_provider/hpe_nimble_storage/index.html new file mode 100644 index 00000000..e31260cd --- /dev/null +++ b/container_storage_provider/hpe_nimble_storage/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/container_storage_provider/index.html b/container_storage_provider/index.html new file mode 100644 index 00000000..c4e488d4 --- /dev/null +++ b/container_storage_provider/index.html @@ -0,0 +1,238 @@ + + + + + + + + + + + + + + + + + + Container Storage Providers - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + + + + + + + diff --git a/csi_driver/archive.html b/csi_driver/archive.html new file mode 100644 index 00000000..430b6b92 --- /dev/null +++ b/csi_driver/archive.html @@ -0,0 +1,747 @@ + + + + + + + + + + + + + + + + + + Unsupported Releases - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ +

Unsupported Releases

+

HPE supports up to three minor releases. These releases are kept here for historical purposes.

+ +

HPE CSI Driver for Kubernetes 2.3.0

+

Release highlights:

+
    +
  • Introducing HPE Alletra 5000
  • +
  • Security updates
  • +
  • Support for Kubernetes 1.25-1.26 and Red Hat OpenShift 4.11-4.12
  • +
  • Support for SLES 15 SP4, RHEL 9.1 and Ubuntu 22.04
  • +
+

Upgrade considerations:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.23-1.261
Helm Chartv2.3.0 on ArtifactHub
Operators + v2.3.0 on OperatorHub
+ v2.3.0 via OpenShift console +
Worker OS + RHEL2 7.x, 8.x, 9.x, RHCOS 4.10-4.12
+ Ubuntu 16.04, 18.04, 20.04, 22.04
+ SLES 15 SP2, SP3, SP4 +
Platforms3 + Alletra OS 5000/6000 6.0.0.x - 6.1.1.x
+ Alletra OS 9000 9.3.x - 9.5.x
+ Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.1.x
+ Primera OS 4.3.x - 4.5.x
+ 3PAR OS 3.3.x +
Data protocolFibre Channel, iSCSI
Release notesv2.3.0 on GitHub
Blogs + Support and security updates for HPE CSI Driver for Kubernetes (release blog) +
+ +

+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.

+

HPE CSI Driver for Kubernetes 2.2.0

+

Release highlights:

+
    +
  • Support for Kubernetes 1.24 and Red Hat OpenShift 4.10
  • +
  • Added Tolerations, Affinity, Labels and Node Selectors to Helm chart
  • +
  • Improved automatic recovery for the NFS Server Provisioner
  • +
  • Added multipath handling for Alletra 9000, Primera and 3PAR
  • +
  • Volume expansion of encrypted volumes
  • +
+

Upgrade considerations:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.21-1.241
Helm Chartv2.2.0 on ArtifactHub
Operators + v2.2.1 on OperatorHub
+ v2.2.1 via OpenShift console +
Worker OS + RHEL2 7.x & 8.x, RHCOS 4.8 & 4.10
+ Ubuntu 16.04, 18.04 & 20.04
+ SLES 15 SP2 +
Platforms + Alletra OS 6000 6.0.0.x - 6.1.0.x
+ Alletra OS 9000 9.3.x - 9.5.x
+ Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.0.x
+ Primera OS 4.3.x - 4.5.x
+ 3PAR OS 3.3.x +
Data protocolFibre Channel, iSCSI
Release notesv2.2.0 on GitHub
Blogs + Updates and Improvements to HPE CSI Driver for Kubernetes (release blog) +
+ +

+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE. +

+

HPE CSI Driver for Kubernetes 2.1.1

+

Release highlights:

+
    +
  • Support for Kubernetes 1.23
  • +
  • Upstream CSI sidecar updates
  • +
  • Improved LUN discoverability in certain environments
  • +
+ + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.20-1.231
Worker OSCentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 +
Data protocolFibre Channel, iSCSI
Platforms + Alletra OS 6000 6.0.0.x
+ Alletra OS 9000 9.4.x
+ Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x
+ Primera OS 4.3.x, 4.4.x
+ 3PAR OS 3.3.2 +
Release notesv2.1.1 on GitHub
+ +

+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +

+

HPE CSI Driver for Kubernetes 2.1.0

+

Release highlights:

+
    +
  • Prometheus exporters
  • +
  • Support for Red Hat OCP 4.8
  • +
  • Support for Kubernetes 1.22
  • +
  • Reliability/Stability enhancements
      +
    • Peer Persistence Remote Copy enhancements
    • +
    • Volume Mutator enhancements
    • +
    • Logging enhancements
    • +
    +
  • +
+ + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.20-1.221
Worker OSCentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 +
Data protocolFibre Channel, iSCSI
Platforms + Alletra OS 6000 6.0.0.x
+ Alletra OS 9000 9.3.x, 9.4.x
+ Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x
+ Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x, 4.4.x
+ 3PAR OS 3.3.1, 3.3.2 +
Release notesv2.1.0 on GitHub
Blogs + HPE CSI Driver enhancements with monitoring and alerting (release blog)
+ Get started with Prometheus and Grafana and HPE Storage Array Exporter (tutorial) +
+ +

+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +

+

HPE CSI Driver for Kubernetes 2.0.0

+

Release highlights:

+
    +
  • Support for HPE Alletra 5000/6000 and 9000
  • +
  • Host-based volume encryption
  • +
  • Multitenancy for HPE Alletra 5000/6000 and Nimble Storage
  • +
+ + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.18-1.211
Worker OSCentOS and RHEL 7.x & 8.x, RHCOS 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP2 +
Data protocolFibre Channel, iSCSI
Platforms + Alletra OS 6000 6.0.0.x
+ Alletra OS 9000 9.3.0
+ Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x
+ Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x
+ 3PAR OS 3.3.1, 3.3.2 +
Release notesv2.0.0 on GitHub
Blogs + HPE CSI Driver for Kubernetes now available for HPE Alletra (release blog)
+ Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble (tutorial)
+ Host-based Volume Encryption with HPE CSI Driver for Kubernetes (tutorial) +
+ +

+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +

+

HPE CSI Driver for Kubernetes 1.4.0

+

Release highlights:

+
    +
  • Kubernetes CSI Sidecars: Volume Group Provisioner and Volume Group Snapshotter
  • +
  • NFS Server Provisioner GA
  • +
  • HPE Primera Remote Copy Peer Persistence support
  • +
  • Air-gap support for the Helm chart
  • +
+ + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.17-1.201
Worker OSCentOS and RHEL 7.7 & 8.1, RHCOS 4.4 & 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP1 +
Data protocolFibre Channel, iSCSI
Platforms + NimbleOS 5.0.10.0-x, 5.1.4.200-x, 5.2.1.0-x, 5.3.0.0-x, 5.3.1.0-x
+ 3PAR OS 3.3.1+
+ Primera OS 4.0+
+
Release notesv1.4.0 on GitHub
Blogs + HPE CSI Driver for Kubernetes v1.4.0 now available! (release blog)
+ Synchronized Volume Snapshots for Distributed Workloads on Kubernetes (tutorial) +
+ +

+ 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. +

+

HPE CSI Driver/Operator for Kubernetes 1.3.0

+

Release highlights:

+
    +
  • Kubernetes CSI Sidecar: Volume Mutator
  • +
  • Broader ecosystem support
  • +
  • Native iSCSI CHAP configuration
  • +
+ + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.15-1.181
Worker OSCentOS 7.6, RHEL 7.6, RHCOS 4.3-4.4, Ubuntu 18.04, Ubuntu 20.04 +
Data protocolFibre Channel, iSCSI
Platforms + NimbleOS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x
+ 3PAR OS 3.3.1
+ Primera OS 4.0.0, 4.1.0, 4.2.02
+
Release notesv1.3.0 on GitHub
Blogs + Around The Storage Block (release)
+ HPE DEV (Remote copy peer persistence tutorial)
+ HPE DEV (Introducing the volume mutator)
+
+ +

+ 1 = For HPE Ezmeral Container Platform and Rancher; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations.
+ 2 = Only FC is supported on Primera OS prior to 4.2.0. +

+

HPE CSI Driver for Kubernetes 1.2.0

+

Release highlights: Support for raw block volumes and inline ephemeral volumes. NFS Server Provisioner in Tech Preview (beta).

+ + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes1.14-1.18
Worker OSCentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 +
Data protocolFibre Channel, iSCSI
Platforms + NimbleOS 5.0.10.x, 5.1.3.1000-x, 5.1.4.200-x, 5.2.1.x
+ 3PAR OS 3.3.1
+ Primera OS 4.0.0, 4.1.0 (FC only)
+
Release notesv1.2.0 on GitHub
BlogsAround The Storage Block (release)
+ HPE DEV (tutorial for raw block and inline volumes)
+ Around The Storage Block (NFS Server Provisioner)
+ HPE DEV (tutorial for NFS) +
+ +

HPE CSI Driver for Kubernetes 1.1.1

+

Release highlights: Support for HPE 3PAR and Primera Container Storage Provider.

+ + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes: 1.13-1.17
Worker OS: CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04
Data protocol: Fibre Channel, iSCSI
Platforms: NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x
  3PAR OS 3.3.1
  Primera OS 4.0.0, 4.1.0 (FC only)
Release notes: N/A
Blogs: HPE Storage Tech Insiders (release), HPE DEV (tutorial for "primera3par" CSP)
+ +

HPE CSI Driver for Kubernetes 1.1.0

+

Release highlights: Broader ecosystem support, official support for CSI snapshots and volume resize.

+ + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes: 1.13-1.17
Worker OS: CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04
Data protocol: Fibre Channel, iSCSI
Platforms: NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x
Release notes: v1.1.0 on GitHub
Blogs: HPE Storage Tech Insiders (release), HPE DEV (snapshots, clones, resize)
+ +

HPE CSI Driver for Kubernetes 1.0.0

+

Release highlights: Initial GA release with support for Dynamic Provisioning.

+ + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes: 1.13-1.17
Worker OS: CentOS 7.6, RHEL 7.6, Ubuntu 16.04, Ubuntu 18.04
Data protocol: Fibre Channel, iSCSI
Platforms: NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x
Release notes: v1.0.0 on GitHub
Blogs: HPE Storage Tech Insiders (release), HPE DEV (architecture and introduction)
+ +
+
+ + + +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/csi_driver/deployment.html b/csi_driver/deployment.html new file mode 100644 index 00000000..404f3965 --- /dev/null +++ b/csi_driver/deployment.html @@ -0,0 +1,926 @@ + + + + + + + + + + + + + + + + + + Deployment - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
+
+
+
+
+ +

Overview

+

The HPE CSI Driver is deployed by using industry standard means, either a Helm chart or an Operator. An "advanced install" from object configuration files is provided as reference for partners, OEMs and users wanting to perform customizations and their own packaging or deployment methodologies.

+ +

Delivery Vehicles

+

Since several installation methods are provided, it might not be obvious which delivery vehicle is the right one.

+

+

Need Help Deciding?

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
I have a... → Then you need...
Vanilla upstream Kubernetes cluster on a supported host OS. → The Helm chart
Red Hat OpenShift 4.x cluster. → The certified CSI operator for OpenShift
Supported environment with multiple backends. → Helm chart with additional Secrets and StorageClasses
HPE Ezmeral Runtime Enterprise environment. → The Helm chart
Operator Life-cycle Manager (OLM) environment. → The CSI operator
Unsupported host OS/Kubernetes cluster and like to tinker. → The advanced install
Supported platform in an air-gapped environment. → The Helm chart using the air-gapped procedure
+
+

Undecided?

+

If it's not clear what you should use for your environment, the Helm chart is most likely the correct answer.

+
+

Helm

+

Helm is the package manager for Kubernetes. Software is delivered in a format called a "chart". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file.

+

The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub. From version 1.3.0 of the HPE CSI Driver, the chart supports Helm 3 only. To avoid duplicate documentation, please see the chart for instructions on how to deploy the CSI driver using Helm.

+ +

Helm for Air-gapped Environments

+

When deploying the HPE CSI Driver in a secure air-gapped environment, Helm is the recommended method. For the sake of completeness, it's also possible to follow the advanced install procedures and replace "quay.io" in the deployment manifests with the internal private registry location.

+

Establish a working directory on a bastion Linux host that has HTTP access to the Internet, the private registry and the Kubernetes cluster where the CSI driver needs to be installed. The bastion host is assumed to have the docker, helm and curl commands installed. It's also assumed throughout that the user executing docker has logged in to the private registry and that pulling images from the private registry is allowed anonymously by the Kubernetes compute nodes.

+
+

Note

+

Only the HPE CSI Driver 1.4.0 and later is supported using this methodology.

+
+

Create a working directory and set environment variables referenced throughout the procedure. In this example, we'll use HPE CSI Driver v2.5.0 on Kubernetes 1.30. Available versions are found in the co-deployments GitHub repo.

+

mkdir hpe-csi-driver
+cd hpe-csi-driver
+export MY_REGISTRY=registry.enterprise.example.com
+export MY_CSI_DRIVER=2.5.0
+export MY_K8S=1.30
+

+

Next, create a list with the CSI driver images. Copy and paste the entire text blob in one chunk.

+

curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/hpe-csi-k8s-${MY_K8S}.yaml \
+        https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/nimble-csp.yaml \
+        https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/3par-primera-csp.yaml \
+| grep image: | awk '{print $2}' | sort | uniq > images
+echo quay.io/hpestorage/nfs-provisioner:v3.0.5 >> images
+

+
+

Important

+

In HPE CSI Driver 2.4.2 and earlier, the NFS Server Provisioner image is not automatically pulled from the private registry after installation. Use the "nfsProvisionerImage" parameter in the StorageClass to point at the mirrored image; see the sketch below.

+
+
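As a sketch only (the StorageClass name is illustrative, and a complete StorageClass also needs the "nfsResources" parameter and the Secret references described in the NFS Server Provisioner and StorageClass documentation), the parameter could point at the mirrored image like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nfs-airgapped
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  nfsResources: "true"
  nfsProvisionerImage: registry.enterprise.example.com/hpestorage/nfs-provisioner:v3.0.5
  # ...plus the usual csi.storage.k8s.io/*-secret-name and -namespace parameters shown later on this page
reclaimPolicy: Delete
allowVolumeExpansion: true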

The curl pipeline above should not output anything. A list of images should now be in the file "images".

+

Pull, tag and push the images to the private registry.

+

cat images | xargs -n 1 docker pull
+awk '{ print $1" "$1 }' images | sed -E -e "s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/" | xargs -n 2 docker tag
+sed -E -e "s/quay.io|registry.k8s.io/${MY_REGISTRY}/" images | xargs -n 1 docker push
+

+
+

Tip

+

Depending on the kind of private registry being used, the base repositories hpestorage and sig-storage might need to be created, and the user pushing the images given write access to them.

+
+

Next, install the chart as normal with the additional registry parameter. This is only an example; please refer to the Helm chart documentation on ArtifactHub.

+

helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
+kubectl create ns hpe-storage
+

+

Version 2.4.2 or earlier.

+

helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage --version ${MY_CSI_DRIVER} --set registry=${MY_REGISTRY}
+

+

For version 2.5.0 or newer, skip ahead to → Version 2.5.0 and newer.

+
+

Note

+

If the client running helm is inside the air-gapped environment as well, the docs directory needs to be hosted on a web server within the air-gapped environment; then use helm repo add hpe-storage https://my-web-server.internal/docs above instead.

+
+

Version 2.5.0 and newer

+

From version 2.5.0 onwards, all images used by the HPE CSI Driver for Kubernetes Helm chart are parameterized individually with their fully qualified URLs.

+

Use the procedure above to mirror the images to an internal registry. Once mirrored, replace the registry names in the reference values.yaml file.

+

curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/values/csi-driver/v${MY_CSI_DRIVER}/values.yaml | sed -E -e "s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/g" > my-values.yaml
+

+
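As a sketch of the expected outcome (the registry name is illustrative; keys and tags are as they appear in the v2.5.0 reference values.yaml), the images section of my-values.yaml should end up looking similar to:

images:
  csiAttacher: registry.enterprise.example.com/sig-storage/csi-attacher:v4.6.1
  csiControllerDriver: registry.enterprise.example.com/hpestorage/csi-driver:v2.5.0
  csiNodeDriver: registry.enterprise.example.com/hpestorage/csi-driver:v2.5.0
  csiProvisioner: registry.enterprise.example.com/sig-storage/csi-provisioner:v5.0.1
  nimbleCSP: registry.enterprise.example.com/hpestorage/alletra-6000-and-nimble-csp:v2.5.0
  # ...the remaining image keys follow the same pattern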

Use the my-values.yaml file to install the Helm Chart.

+

helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \
+-n hpe-storage --version ${MY_CSI_DRIVER} \
+-f my-values.yaml
+

+

Operator

+

The Operator pattern is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes.

+

The official HPE CSI Operator for Kubernetes is hosted on OperatorHub.io. The CSI Operator images are hosted both on quay.io and officially certified containers in the Red Hat Ecosystem Catalog.

+

Red Hat OpenShift Container Platform

+

The HPE CSI Operator for Kubernetes is a fully certified Operator for OpenShift. There are a few tweaks needed and there's a separate section for OpenShift.

+ +

Upstream Kubernetes and Others

+

Follow the documentation from the respective upstream distributions on how to deploy an Operator. In most cases, the Operator Lifecycle Manager (OLM) needs to be installed separately (this does NOT apply to OpenShift 4 and later).

+

Visit the documentation in the OLM GitHub repo to learn how to install OLM.

+

Once OLM is operational, install the HPE CSI Operator.

+

kubectl create -f https://operatorhub.io/install/hpe-csi-operator.yaml
+

+

The Operator will be installed in the my-hpe-csi-operator Namespace. Watch it come up by inspecting the ClusterServiceVersion (CSV).

+

kubectl get csv -n my-hpe-csi-operator
+

+

Next, an HPECSIDriver object needs to be instantiated. Create a file named hpe-csi-operator.yaml, edit and apply it (or copy the command from the top of the content).

+
# kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+  name: hpecsidriver-sample
+spec:
+  # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+  controller:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    resources:
+      limits:
+        cpu: 2000m
+        memory: 1Gi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    tolerations: []
+  csp:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    resources:
+      limits:
+        cpu: 2000m
+        memory: 1Gi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    tolerations: []
+  disable:
+    alletra6000: false
+    alletra9000: false
+    alletraStorageMP: false
+    nimble: false
+    primera: false
+  disableHostDeletion: false
+  disableNodeConfiguration: false
+  disableNodeConformance: false
+  disableNodeGetVolumeStats: false
+  disableNodeMonitor: false
+  imagePullPolicy: IfNotPresent
+  images:
+    csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
+    csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0
+    csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7
+    csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0
+    csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
+    csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
+    csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
+    csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
+    csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6
+    csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6
+    csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6
+    nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5
+    nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0
+    primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0
+  iscsi:
+    chapSecretName: ""
+  kubeletRootDir: /var/lib/kubelet
+  logLevel: info
+  node:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    resources:
+      limits:
+        cpu: 2000m
+        memory: 1Gi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    tolerations: []
+
# kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+  name: hpecsidriver-sample
+spec:
+  # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+  controller:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  csp:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  disable:
+    alletra6000: false
+    alletra9000: false
+    alletraStorageMP: false
+    nimble: false
+    primera: false
+  disableNodeConfiguration: false
+  disableNodeConformance: false
+  disableNodeGetVolumeStats: false
+  imagePullPolicy: IfNotPresent
+  iscsi:
+    chapPassword: ""
+    chapUser: ""
+  kubeletRootDir: /var/lib/kubelet/
+  logLevel: info
+  node:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  registry: quay.io
+
+
+
# kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+  name: hpecsidriver-sample
+spec:
+  # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+  controller:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  csp:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  disable:
+    alletra6000: false
+    alletra9000: false
+    alletraStorageMP: false
+    nimble: false
+    primera: false
+  disableNodeConfiguration: false
+  disableNodeConformance: false
+  disableNodeGetVolumeStats: false
+  imagePullPolicy: IfNotPresent
+  iscsi:
+    chapPassword: ""
+    chapUser: ""
+  kubeletRootDir: /var/lib/kubelet/
+  logLevel: info
+  node:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  registry: quay.io
+
+
+
+
+

Tip

+

The contents depend on which version of the CSI driver is installed. Please visit OperatorHub or ArtifactHub for more details.

+
+

The CSI driver is now ready for use. Proceed to the next section to learn about adding an HPE storage backend.

+

Add an HPE Storage Backend

+

Once the CSI driver is deployed, two additional objects need to be created to get started with dynamic provisioning of persistent storage: a Secret and a StorageClass.

+
+

Tip

+

Naming the Secret and StorageClass is entirely up to the user; however, to stay in line with the examples on SCOD, it's highly recommended to use the names illustrated here.

+
+

Secret Parameters

+

All parameters are mandatory and described below.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter: Description
serviceName: The hostname or IP address where the Container Storage Provider (CSP) is running, usually a Kubernetes Service, such as "alletra6000-csp-svc" or "alletra9000-csp-svc".
servicePort: The port the serviceName is listening on.
backend: The management hostname or IP address of the backend storage system, such as an Alletra 5000/6000 or 9000 array.
username: Backend storage system username with the correct privileges to perform storage management.
password: Backend storage system password.
+

Example:

+
apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-backend
+  namespace: hpe-storage
+stringData:
+  serviceName: alletrastoragemp-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.20
+  username: 3paradm
+  password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-backend
+  namespace: hpe-storage
+stringData:
+  serviceName: alletra6000-csp-svc
+  servicePort: "8080"
+  backend: 192.168.1.110
+  username: admin
+  password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-backend
+  namespace: hpe-storage
+stringData:
+  serviceName: alletra9000-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.20
+  username: 3paradm
+  password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-backend
+  namespace: hpe-storage
+stringData:
+  serviceName: nimble-csp-svc
+  servicePort: "8080"
+  backend: 192.168.1.2
+  username: admin
+  password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-backend
+  namespace: hpe-storage
+stringData:
+  serviceName: primera3par-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.2
+  username: 3paradm
+  password: 3pardata
+
+

Create the Secret using kubectl:

+

kubectl create -f secret.yaml
+

+
+

Tip

+

In a real-world scenario it's more practical to name the Secret something that makes sense for the organization. It could be the hostname of the backend or the role it carries, e.g. "hpe-alletra-sanjose-prod".

+
+

The next step involves creating a default StorageClass. A minimal sketch is shown below.

+
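A minimal sketch of such a default StorageClass, reusing the "hpe-backend" Secret created above (the StorageClass name is illustrative; the standard storageclass.kubernetes.io/is-default-class annotation is what marks it as the cluster default, and the parameters follow the same pattern as the custom StorageClass example later on this page):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: "Volume created by the default StorageClass for the HPE CSI Driver for Kubernetes"
reclaimPolicy: Delete
allowVolumeExpansion: true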

Configuring Additional Storage Backends

+

It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass and Secret API objects to represent an environment with multiple systems.

+

There's a brief tutorial available in the Video Gallery that walks through these steps.

+
+

Note

+

Make note of the Kubernetes Namespace or OpenShift project name used during the deployment. In the following examples, we will be using the "hpe-storage" Namespace.

+
+

To view the current Secrets in the "hpe-storage" Namespace (assuming default names):

+

kubectl -n hpe-storage get secret/hpe-backend
+NAME                     TYPE          DATA      AGE
+hpe-backend              Opaque        5         2m
+

+

This Secret is used by the CSI sidecars in the StorageClass to authenticate to a specific backend for CSI operations. In order to add a new Secret or manage access to multiple backends, additional Secrets will need to be created per backend.

+
+

Secret Requirements

+
    +
  • Each Secret name must be unique.
  • servicePort should be set to 8080.
+
+

To create a new Secret, specify the name, Namespace, backend username, backend password and the backend IP address to be used by the CSP, and save it as custom-secret.yaml (a detailed description of the parameters is available above).

+
apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: alletrastoragemp-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.20
+  username: 3paradm
+  password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: alletra6000-csp-svc
+  servicePort: "8080"
+  backend: 192.168.1.110
+  username: admin
+  password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: alletra9000-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.20
+  username: 3paradm
+  password: 3pardata
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: nimble-csp-svc
+  servicePort: "8080"
+  backend: 192.168.1.2
+  username: admin
+  password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: primera3par-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.2
+  username: 3paradm
+  password: 3pardata
+
+

Create the Secret using kubectl:

+

kubectl create -f custom-secret.yaml
+

+

You should now see the Secret in the "hpe-storage" Namespace:

+

kubectl -n hpe-storage get secret/custom-secret
+NAME                     TYPE          DATA      AGE
+custom-secret            Opaque        5         1m
+

+

Create a StorageClass with the Custom Secret

+

To use the new Secret "custom-secret", create a new StorageClass using the Secret and the necessary StorageClass parameters. Please see the requirements section of the respective CSP.

+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-custom
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/controller-expand-secret-name: custom-secret
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: custom-secret
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: custom-secret
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: custom-secret
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-custom
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/resizer-secret-name: custom-secret
+  csi.storage.k8s.io/resizer-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: custom-secret
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: custom-secret
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: custom-secret
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+
+
+

Note

+

Don't forget to call out the StorageClass explicitly when creating PVCs from non-default StorageClasses.

+
+
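For example, a PersistentVolumeClaim requesting a volume from the "hpe-custom" StorageClass above might look like this (claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-custom-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
  storageClassName: hpe-custom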

Next, Create a PersistentVolumeClaim from a StorageClass.

+

Advanced Install

+

This guide is primarily written to accommodate a highly manual installation on upstream Kubernetes, or for partner OEMs engaged with HPE to bundle the HPE CSI Driver in a custom distribution. Installation steps may vary for different vendors and flavors of Kubernetes.

+

The following example walks through deployment of the latest CSI driver.

+
+

Critical

+

It's highly recommended to use either the Helm chart or Operator to install the HPE CSI Driver for Kubernetes and the associated Container Storage Providers. Only resort to a manual installation if your requirements can't be met by the Helm chart or Operator.

+
+

Manual CSI Driver Install

+

Deploy the CSI driver and sidecars for the relevant Kubernetes version.

+
+

Uninstalling the CSI driver when installed manually

+

The manifests below create a number of objects, including CustomResourceDefinitions (CRDs) which may hold critical information about storage resources. Simply deleting the below manifests in order to uninstall the CSI driver may render PersistentVolumes unusable.

+
+

Common

+

These object configuration files are common for all versions of Kubernetes.

+

All components below are deployed in the "hpe-storage" Namespace.

+

kubectl create ns hpe-storage
+

+

Worker node IO settings and common CRDs:

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml
+kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-volumegroup-snapshotgroup-crds.yaml
+

+

Container Storage Provider:

+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml
+kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-crd.yaml
+
+
+

Important

+

The above instructions assume you have an array with a supported platform OS installed. Please see the requirements section of the respective CSP.

+
+

Install the CSI driver:

+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml
+
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml
+
+
+

Seealso

+

Older and unsupported versions of Kubernetes and the CSI driver are archived on this page.

+
+

Depending on which version is being deployed, different API objects get created. Next step: Add an HPE Storage Backend.

+

Advanced Uninstall

+

The following steps outline how to uninstall the CSI driver that has been deployed using the Advanced Install above.

+

Uninstall Worker node settings:

+

kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml
+

+

Uninstall relevant Container Storage Provider:

+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml
+
+
+

HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR users

+

If you are reinstalling the HPE CSI Driver, DO NOT remove the crd/hpevolumeinfos.storage.hpe.com resource. This CustomResourceDefinition contains important volume metadata used by the HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR CSP. HPE CSI Driver v2.0.0 and below share the same YAML file for the CRDs and the CSP, and would require manual removal of the individual Service and Deployment in the "hpe-storage" Namespace.

+
+

Uninstall the CSI driver:

+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml
+
kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml
+
+

If no longer needed, delete the "hpe-storage" Namespace.

+

kubectl delete ns hpe-storage
+

+

Downgrading the CSI driver

+

Downgrading the CSI driver is currently not supported. It may work between certain minor versions, but HPE does not test or document procedures for downgrading between incompatible versions.

+ +
+
+ + + +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/csi_driver/diagnostics.html b/csi_driver/diagnostics.html new file mode 100644 index 00000000..0b69b77d --- /dev/null +++ b/csi_driver/diagnostics.html @@ -0,0 +1,456 @@ + + + + + + + + + + + + + + + + + + Diagnostics - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
+
+
+
+
+ +

Introduction

+

It's recommended to familiarize yourself with inspecting workloads on Kubernetes. This particular cheat sheet is very useful to have readily available.

+

Sanity Checks

+

Once the CSI driver has been deployed, either through object configuration files, Helm or an Operator, this view should be representative of what a healthy system looks like after install. If any of the workload deployments lists anything but Running, proceed to inspect the logs of the problematic workload.

+
kubectl get pods --all-namespaces -l 'app in (nimble-csp, hpe-csi-node, hpe-csi-controller)'
+NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
+hpe-storage   hpe-csi-controller-7d9cd6b855-zzmd9   9/9     Running   0          15s
+hpe-storage   hpe-csi-node-dk5t4                    2/2     Running   0          15s
+hpe-storage   hpe-csi-node-pwq2d                    2/2     Running   0          15s
+hpe-storage   nimble-csp-546c9c4dd4-5lsdt           1/1     Running   0          15s
+
kubectl get pods --all-namespaces -l 'app in (primera3par-csp, hpe-csi-node, hpe-csi-controller)'
+NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
+hpe-storage   hpe-csi-controller-7d9cd6b855-fqppd   9/9     Running   0          14s
+hpe-storage   hpe-csi-node-86kh6                    2/2     Running   0          14s
+hpe-storage   hpe-csi-node-k8p4p                    2/2     Running   0          14s
+hpe-storage   hpe-csi-node-r2mg8                    2/2     Running   0          14s
+hpe-storage   hpe-csi-node-vwb5r                    2/2     Running   0          14s
+hpe-storage   primera3par-csp-546c9c4dd4-bcwc6      1/1     Running   0          14s
+
+

A Custom Resource Definition (CRD) named hpenodeinfos.storage.hpe.com holds important network and host initiator information.

+

Retrieve list of nodes.

+

kubectl get hpenodeinfos
+NAME               AGE
+tme-lnx-worker1   57m
+tme-lnx-worker3   57m
+tme-lnx-worker2   57m
+tme-lnx-worker4   57m
+

+

Inspect a node.

+

kubectl get hpenodeinfos/tme-lnx-worker1 -o yaml
+apiVersion: storage.hpe.com/v1
+kind: HPENodeInfo
+metadata:
+  creationTimestamp: "2020-08-24T23:50:09Z"
+  generation: 1
+  managedFields:
+  - apiVersion: storage.hpe.com/v1
+    fieldsType: FieldsV1
+    fieldsV1:
+      f:spec:
+        .: {}
+        f:chap_password: {}
+        f:chap_user: {}
+        f:iqns: {}
+        f:networks: {}
+        f:uuid: {}
+    manager: csi-driver
+    operation: Update
+    time: "2020-08-24T23:50:09Z"
+  name: tme-lnx-worker1
+  resourceVersion: "30337986"
+  selfLink: /apis/storage.hpe.com/v1/hpenodeinfos/tme-lnx-worker1
+  uid: 3984752b-29ac-48de-8ca0-8381532cbf06
+spec:
+  chap_password: RGlkIHlvdSByZWFsbHkgZGVjb2RlIHRoaXM/
+  chap_user: chap-user
+  iqns:
+  - iqn.1994-05.com.redhat:828e7a4eef40
+  networks:
+  - 10.2.2.2/16
+  - 172.16.6.115/24
+  - 172.16.8.115/24
+  - 172.17.0.1/16
+  - 10.1.1.0/12
+  uuid: 0242f811-3995-746d-652d-6c6e78352d77
+

+

NFS Server Provisioner Resources

+

The NFS Server Provisioner consists of a number of Kubernetes resources per PVC. The default Namespace where the resources are deployed is "hpe-nfs" but is configurable in the StorageClass. See base StorageClass parameters for more details.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Object | Name | Purpose
ConfigMap | hpe-nfs-config | Holds the configuration file for the NFS server. Local tweaks may be wanted. Please see the config file reference for more details.
Deployment | hpe-nfs-UID | The Deployment that is running the NFS Pod.
Service | hpe-nfs-UID | The Service the NFS clients perform mounts against.
PVC | hpe-nfs-UID | The RWO claim serving the NFS workload.
+
+

Tip

+

The UID stems from the user-requested RWX PVC, which makes tracking easy. Use kubectl get pvc/my-pvc -o jsonpath='{.metadata.uid}{"\n"}' to retrieve it.

+
+

Tracing NFS resources

+

When troubleshooting NFS deployments it's common that only the source RWX PVC and Namespace are known. The next few steps explain how resources can be easily traced.

+

Retrieve the "hpe-nfs-UID" from the NFS Pod by specifying PVC and Namespace of the RWX PVC:

+

kubectl get pods -l provisioned-by=my-pvc,provisioned-from=my-namespace -A -o jsonpath='{.items[].metadata.labels.app}{"\n"}'
+

+

Next, enumerate the resources from the "hpe-nfs-UID":

+

kubectl get pvc,svc,deploy -A -o name --field-selector metadata.name=hpe-nfs-UID
+

+

Example output:

+

persistentvolumeclaim/hpe-nfs-UID
+service/hpe-nfs-UID
+deployment.apps/hpe-nfs-UID
+

+

If only the PV name is known, for example when looking at things from the backend storage perspective, the PV name (and .spec.claimRef.uid) contains the UID, for example "pvc-UID".

+
+

Clarification

+

The "hpe-nfs-UID" name is abbreviated here; it will contain the real UID, for example "hpe-nfs-98ce7c80-13f9-45d0-9609-089227bf97f1".

+
+

Volume and Snapshot Groups

+

If there are issues with VolumeSnapshots not being created when performing SnapshotGroup snapshots, check the logs of the "csi-volume-group-provisioner" and "csi-volume-group-snapshotter" containers in the "hpe-csi-controller" Deployment.

+

kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-provisioner
+kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-snapshotter
+

+

Logging

+

The HPE CSI Driver logs to the standard output stream. If the logs need to be retained long term, use a standard logging solution for Kubernetes such as Fluentd. Some of the logs on the host are persisted and follow standard logrotate policies.

+

CSI Driver Logs

+

Node driver:

+

kubectl logs -f  daemonset.apps/hpe-csi-node  hpe-csi-driver -n hpe-storage
+

+

Controller driver:

+

kubectl logs -f deployment.apps/hpe-csi-controller hpe-csi-driver -n hpe-storage
+

+
+

Tip

+

The logs for both node and controller drivers are persisted at /var/log/hpe-csi.log

+
+

Log Level

+

Log levels for both the CSI controller and node driver can be controlled using the LOG_LEVEL environment variable. Possible values are info, warn, error, debug, and trace. Apply the changes using the kubectl apply -f <yaml> command after adding this to the CSI controller and node container spec as shown below. For Helm charts this is controlled through the logLevel variable in values.yaml.

+

          env:
+            - name: LOG_LEVEL
+              value: trace
+

+
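For Helm-based installs, a minimal sketch of the equivalent values override (the file name is illustrative) would be a my-values.yaml containing:

logLevel: trace

Pass it to helm install or helm upgrade with the -f flag as described in the deployment documentation.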

CSP Logs

+

CSP logs can be accessed from their respective services.

+
kubectl logs -f deploy/nimble-csp -n hpe-storage
+
kubectl logs -f deploy/primera3par-csp -n hpe-storage
+
+

Log Collector

+

The log collector script hpe-logcollector.sh can be used to collect logs from any node that has kubectl access to the cluster.

+

curl -O https://raw.githubusercontent.com/hpe-storage/csi-driver/master/hpe-logcollector.sh
+chmod 555 hpe-logcollector.sh
+

+

Usage:

+

./hpe-logcollector.sh -h
+Collect HPE storage diagnostic logs using kubectl.
+
+Usage:
+     hpe-logcollector.sh [-h|--help] [--node-name NODE_NAME] \
+                         [-n|--namespace NAMESPACE] [-a|--all]
+Options:
+-h|--help                  Print this usage text
+--node-name NODE_NAME      Collect logs only for Kubernetes node
+                           NODE_NAME
+-n|--namespace NAMESPACE   Collect logs from HPE CSI deployment in namespace
+                           NAMESPACE (default: kube-system)
+-a|--all                   Collect logs from all nodes (the default)
+

+

Tuning

+

HPE provides a set of well-tested defaults for the CSI driver and all the supported CSPs. In certain cases it may be necessary to fine-tune the CSI driver to accommodate a certain workload or behavior.

+

Data Path Configuration

+

The HPE CSI Driver for Kubernetes automatically configures Linux iSCSI/multipath settings based on config.json. In order to tune these values, edit the ConfigMap with kubectl edit configmap hpe-linux-config -n hpe-storage and restart the node plugin using kubectl delete pod -l app=hpe-csi-node to apply the changes.

+
+

Important

+

HPE provides a set of general-purpose default values for the IO paths; tuning is only required if prescribed by HPE.

+
+ +
+
+ + + +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml b/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml new file mode 100644 index 00000000..f2cd9c36 --- /dev/null +++ b/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml @@ -0,0 +1,39 @@ +apiVersion: storage.hpe.com/v1 +kind: HPECSIDriver +metadata: + name: hpecsidriver-sample +spec: + # Default values copied from /helm-charts/hpe-csi-driver/values.yaml + controller: + affinity: {} + labels: {} + nodeSelector: {} + tolerations: [] + csp: + affinity: {} + labels: {} + nodeSelector: {} + tolerations: [] + disable: + alletra6000: false + alletra9000: false + alletraStorageMP: false + nimble: false + primera: false + disableNodeConfiguration: false + disableNodeConformance: false + disableNodeGetVolumeStats: false + imagePullPolicy: IfNotPresent + iscsi: + chapPassword: "" + chapUser: "" + kubeletRootDir: /var/lib/kubelet/ + logLevel: info + node: + affinity: {} + labels: {} + nodeSelector: {} + tolerations: [] + registry: quay.io + + diff --git a/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml b/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml new file mode 100644 index 00000000..f2cd9c36 --- /dev/null +++ b/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml @@ -0,0 +1,39 @@ +apiVersion: storage.hpe.com/v1 +kind: HPECSIDriver +metadata: + name: hpecsidriver-sample +spec: + # Default values copied from /helm-charts/hpe-csi-driver/values.yaml + controller: + affinity: {} + labels: {} + nodeSelector: {} + tolerations: [] + csp: + affinity: {} + labels: {} + nodeSelector: {} + tolerations: [] + disable: + alletra6000: false + alletra9000: false + alletraStorageMP: false + nimble: false + primera: false + disableNodeConfiguration: false + disableNodeConformance: false + disableNodeGetVolumeStats: false + imagePullPolicy: IfNotPresent + iscsi: + chapPassword: "" + chapUser: "" + kubeletRootDir: /var/lib/kubelet/ + logLevel: info + node: + affinity: {} + labels: {} + nodeSelector: {} + tolerations: [] + registry: quay.io + + diff --git a/csi_driver/examples/deployment/hpecsidriver-v2.5.0-sample.yaml b/csi_driver/examples/deployment/hpecsidriver-v2.5.0-sample.yaml new file mode 100644 index 00000000..5ccd01dd --- /dev/null +++ b/csi_driver/examples/deployment/hpecsidriver-v2.5.0-sample.yaml @@ -0,0 +1,73 @@ +apiVersion: storage.hpe.com/v1 +kind: HPECSIDriver +metadata: + name: hpecsidriver-sample +spec: + # Default values copied from /helm-charts/hpe-csi-driver/values.yaml + controller: + affinity: {} + labels: {} + nodeSelector: {} + resources: + limits: + cpu: 2000m + memory: 1Gi + requests: + cpu: 100m + memory: 128Mi + tolerations: [] + csp: + affinity: {} + labels: {} + nodeSelector: {} + resources: + limits: + cpu: 2000m + memory: 1Gi + requests: + cpu: 100m + memory: 128Mi + tolerations: [] + disable: + alletra6000: false + alletra9000: false + alletraStorageMP: false + nimble: false + primera: false + disableHostDeletion: false + disableNodeConfiguration: false + disableNodeConformance: false + disableNodeGetVolumeStats: false + disableNodeMonitor: false + imagePullPolicy: IfNotPresent + images: + csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1 + csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0 + csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7 + csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0 + csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 + csiProvisioner: 
registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 + csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1 + csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 + csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6 + csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6 + csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6 + nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5 + nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0 + primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0 + iscsi: + chapSecretName: "" + kubeletRootDir: /var/lib/kubelet + logLevel: info + node: + affinity: {} + labels: {} + nodeSelector: {} + resources: + limits: + cpu: 2000m + memory: 1Gi + requests: + cpu: 100m + memory: 128Mi + tolerations: [] diff --git a/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml b/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml new file mode 100644 index 00000000..5ccd01dd --- /dev/null +++ b/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml @@ -0,0 +1,73 @@ +apiVersion: storage.hpe.com/v1 +kind: HPECSIDriver +metadata: + name: hpecsidriver-sample +spec: + # Default values copied from /helm-charts/hpe-csi-driver/values.yaml + controller: + affinity: {} + labels: {} + nodeSelector: {} + resources: + limits: + cpu: 2000m + memory: 1Gi + requests: + cpu: 100m + memory: 128Mi + tolerations: [] + csp: + affinity: {} + labels: {} + nodeSelector: {} + resources: + limits: + cpu: 2000m + memory: 1Gi + requests: + cpu: 100m + memory: 128Mi + tolerations: [] + disable: + alletra6000: false + alletra9000: false + alletraStorageMP: false + nimble: false + primera: false + disableHostDeletion: false + disableNodeConfiguration: false + disableNodeConformance: false + disableNodeGetVolumeStats: false + disableNodeMonitor: false + imagePullPolicy: IfNotPresent + images: + csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1 + csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0 + csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7 + csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0 + csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 + csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 + csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1 + csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 + csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6 + csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6 + csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6 + nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5 + nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0 + primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0 + iscsi: + chapSecretName: "" + kubeletRootDir: /var/lib/kubelet + logLevel: info + node: + affinity: {} + labels: {} + nodeSelector: {} + resources: + limits: + cpu: 2000m + memory: 1Gi + requests: + cpu: 100m + memory: 128Mi + tolerations: [] diff --git a/csi_driver/examples/operations/iscsid.conf b/csi_driver/examples/operations/iscsid.conf new file mode 100644 index 00000000..7c6db886 --- /dev/null +++ b/csi_driver/examples/operations/iscsid.conf @@ -0,0 +1,27 @@ +iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket +node.startup = manual +node.leading_login = No +node.session.timeo.replacement_timeout = 10 
+node.conn[0].timeo.login_timeout = 15 +node.conn[0].timeo.logout_timeout = 15 +node.conn[0].timeo.noop_out_interval = 5 +node.conn[0].timeo.noop_out_timeout = 10 +node.session.err_timeo.abort_timeout = 15 +node.session.err_timeo.lu_reset_timeout = 30 +node.session.err_timeo.tgt_reset_timeout = 30 +node.session.initial_login_retry_max = 8 +node.session.cmds_max = 512 +node.session.queue_depth = 256 +node.session.xmit_thread_priority = -20 +node.session.iscsi.InitialR2T = No +node.session.iscsi.ImmediateData = Yes +node.session.iscsi.FirstBurstLength = 262144 +node.session.iscsi.MaxBurstLength = 16776192 +node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144 +node.conn[0].iscsi.MaxXmitDataSegmentLength = 0 +discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768 +node.conn[0].iscsi.HeaderDigest = None +node.session.nr_sessions = 1 +node.session.reopen_max = 0 +node.session.iscsi.FastAbort = Yes +node.session.scan = auto diff --git a/csi_driver/examples/operations/multipath.conf b/csi_driver/examples/operations/multipath.conf new file mode 100644 index 00000000..77e77ba9 --- /dev/null +++ b/csi_driver/examples/operations/multipath.conf @@ -0,0 +1,83 @@ +defaults { + user_friendly_names yes + find_multipaths no + uxsock_timeout 10000 +} +blacklist { + devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*" + devnode "^hd[a-z]" + device { + product ".*" + vendor ".*" + } +} +blacklist_exceptions { + property "(ID_WWN|SCSI_IDENT_.*|ID_SERIAL)" + device { + vendor "Nimble" + product "Server" + } + device { + product "VV" + vendor "3PARdata" + } + device { + vendor "TrueNAS" + product "iSCSI Disk" + } + device { + vendor "FreeNAS" + product "iSCSI Disk" + } +} +devices { + device { + product "Server" + rr_min_io_rq 1 + dev_loss_tmo infinity + path_checker tur + rr_weight uniform + no_path_retry 30 + path_selector "service-time 0" + failback immediate + fast_io_fail_tmo 5 + vendor "Nimble" + hardware_handler "1 alua" + path_grouping_policy group_by_prio + prio alua + } + device { + path_grouping_policy group_by_prio + path_checker tur + rr_weight "uniform" + prio alua + failback immediate + hardware_handler "1 alua" + no_path_retry 18 + fast_io_fail_tmo 10 + path_selector "round-robin 0" + vendor "3PARdata" + dev_loss_tmo infinity + detect_prio yes + features "0" + rr_min_io_rq 1 + product "VV" + } + device { + path_selector "queue-length 0" + rr_weight priorities + uid_attribute ID_SERIAL + vendor "TrueNAS" + product "iSCSI Disk" + path_grouping_policy group_by_prio + } + device { + path_selector "queue-length 0" + hardware_handler "1 alua" + rr_weight priorities + uid_attribute ID_SERIAL + vendor "FreeNAS" + product "iSCSI Disk" + path_grouping_policy group_by_prio + } +} diff --git a/csi_driver/examples/operations/patch-nfs-server-2.2.0.yaml b/csi_driver/examples/operations/patch-nfs-server-2.2.0.yaml new file mode 100644 index 00000000..f8aad41a --- /dev/null +++ b/csi_driver/examples/operations/patch-nfs-server-2.2.0.yaml @@ -0,0 +1,15 @@ +spec: + template: + spec: + containers: + - name: hpe-nfs + image: quay.io/hpestorage/nfs-provisioner:v3.0.0 + tolerations: + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 30 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 30 diff --git a/csi_driver/examples/operations/patch-nfs-server-2.3.0.yaml b/csi_driver/examples/operations/patch-nfs-server-2.3.0.yaml new file mode 100644 index 00000000..b2c3c7c1 --- /dev/null +++ 
b/csi_driver/examples/operations/patch-nfs-server-2.3.0.yaml @@ -0,0 +1,15 @@ +spec: + template: + spec: + containers: + - name: hpe-nfs + image: quay.io/hpestorage/nfs-provisioner:v3.0.1 + tolerations: + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 30 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 30 diff --git a/csi_driver/examples/operations/patch-nfs-server-2.4.0.yaml b/csi_driver/examples/operations/patch-nfs-server-2.4.0.yaml new file mode 100644 index 00000000..4f5c2c28 --- /dev/null +++ b/csi_driver/examples/operations/patch-nfs-server-2.4.0.yaml @@ -0,0 +1,18 @@ +spec: + template: + spec: + containers: + - name: hpe-nfs + image: quay.io/hpestorage/nfs-provisioner:v3.0.2 + tolerations: + - effect: NoSchedule + key: csi.hpe.com/hpe-nfs + operator: Exists + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 30 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 30 diff --git a/csi_driver/examples/operations/patch-nfs-server-2.4.1.yaml b/csi_driver/examples/operations/patch-nfs-server-2.4.1.yaml new file mode 100644 index 00000000..41e0fab2 --- /dev/null +++ b/csi_driver/examples/operations/patch-nfs-server-2.4.1.yaml @@ -0,0 +1,18 @@ +spec: + template: + spec: + containers: + - name: hpe-nfs + image: quay.io/hpestorage/nfs-provisioner:v3.0.3 + tolerations: + - effect: NoSchedule + key: csi.hpe.com/hpe-nfs + operator: Exists + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 30 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 30 diff --git a/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml b/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml new file mode 100644 index 00000000..eaf41997 --- /dev/null +++ b/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml @@ -0,0 +1,18 @@ +spec: + template: + spec: + containers: + - name: hpe-nfs + image: quay.io/hpestorage/nfs-provisioner:v3.0.5 + tolerations: + - effect: NoSchedule + key: csi.hpe.com/hpe-nfs + operator: Exists + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 30 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 30 diff --git a/csi_driver/examples/operations/pvc-copy-block.yaml b/csi_driver/examples/operations/pvc-copy-block.yaml new file mode 100644 index 00000000..2e3332f2 --- /dev/null +++ b/csi_driver/examples/operations/pvc-copy-block.yaml @@ -0,0 +1,31 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: pvc-copy-block +spec: + template: + spec: + containers: + - name: ddrescue + image: quay.io/hpestorage/pvc-copy-util:v1.0 + imagePullPolicy: IfNotPresent + volumeDevices: + - name: src-pv + devicePath: /dev/xvd-src + - name: dst-pv + devicePath: /dev/xvd-dst + command: + - ddrescue + args: + - -f + - -n + - /dev/xvd-src + - /dev/xvd-dst + restartPolicy: Never + volumes: + - name: src-pv + persistentVolumeClaim: + claimName: old-pvc + - name: dst-pv + persistentVolumeClaim: + claimName: new-pvc diff --git a/csi_driver/examples/operations/pvc-copy-file.yaml b/csi_driver/examples/operations/pvc-copy-file.yaml new file mode 100644 index 00000000..cd2087c7 --- /dev/null +++ b/csi_driver/examples/operations/pvc-copy-file.yaml @@ -0,0 +1,34 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: pvc-copy-file +spec: + 
template: + spec: + containers: + - name: rsync + image: quay.io/hpestorage/pvc-copy-util:v1.0 + imagePullPolicy: IfNotPresent + volumeMounts: + - name: src-pv + mountPath: /src + - name: dst-pv + mountPath: /dst + command: + - rsync + args: + - -avzh + - -O + - --progress + - --append-verify + - --numeric-ids + - /src/ + - /dst + restartPolicy: Never + volumes: + - name: src-pv + persistentVolumeClaim: + claimName: old-pvc + - name: dst-pv + persistentVolumeClaim: + claimName: new-pvc diff --git a/csi_driver/examples/operations/pvc-copy.yaml b/csi_driver/examples/operations/pvc-copy.yaml new file mode 100644 index 00000000..26f04a93 --- /dev/null +++ b/csi_driver/examples/operations/pvc-copy.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: "" +spec: + volumeMode: "" + volumeName: "" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: "" diff --git a/csi_driver/examples/standalone_nfs/base/configmap.yaml b/csi_driver/examples/standalone_nfs/base/configmap.yaml new file mode 100644 index 00000000..c51e08a3 --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/configmap.yaml @@ -0,0 +1,33 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: hpe-nfs-conf +data: + ganesha.conf: |2- + + NFS_Core_Param + { + NFS_Protocols= 4; + NFS_Port = 2049; + fsid_device = false; + } + NFSv4 + { + Graceless = true; + UseGetpwnam = true; + DomainName = "$(CLUSTER_NODE_DOMAIN_NAME)"; + } + EXPORT + { + Export_Id = 716; + Path = /export; + Pseudo = /export; + Access_Type = RW; + Squash = No_Root_Squash; + Transports = TCP; + Protocols = 4; + SecType = "sys"; + FSAL { + Name = VFS; + } + } diff --git a/csi_driver/examples/standalone_nfs/base/deployment.yaml b/csi_driver/examples/standalone_nfs/base/deployment.yaml new file mode 100644 index 00000000..c18fca9c --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/deployment.yaml @@ -0,0 +1,124 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: hpe-nfs +spec: + replicas: 1 + selector: + matchLabels: + app: $(SERVICE_SELECTOR) + spread-by: hpe-nfs + strategy: + type: Recreate + template: + metadata: + annotations: + tags: hpe-nfs + labels: + app: $(SERVICE_SELECTOR) + spread-by: hpe-nfs + name: hpe-nfs + spec: + nodeSelector: + csi.hpe.com/hpe-nfs: "true" + containers: + - env: + - name: GANESHA_OPTIONS + value: -N NIV_WARN + image: quay.io/hpestorage/nfs-provisioner:v3.0.2 + imagePullPolicy: IfNotPresent + name: hpe-nfs + ports: + - containerPort: 49000 + name: grpc + protocol: TCP + - containerPort: 2049 + name: nfs-tcp + protocol: TCP + - containerPort: 2049 + name: nfs-udp + protocol: UDP + - containerPort: 32803 + name: nlockmgr-tcp + protocol: TCP + - containerPort: 32803 + name: nlockmgr-udp + protocol: UDP + - containerPort: 20048 + name: mountd-tcp + protocol: TCP + - containerPort: 20048 + name: mountd-udp + protocol: UDP + - containerPort: 111 + name: portmapper-tcp + protocol: TCP + - containerPort: 111 + name: portmapper-udp + protocol: UDP + - containerPort: 662 + name: statd-tcp + protocol: TCP + - containerPort: 662 + name: statd-udp + protocol: UDP + - containerPort: 875 + name: rquotad-tcp + protocol: TCP + - containerPort: 875 + name: rquotad-udp + protocol: UDP + resources: + limits: + cpu: "$(NFS_SERVER_CPU_LIMIT)" + memory: $(NFS_SERVER_MEMORY_LIMIT) + securityContext: + capabilities: + add: + - SYS_ADMIN + - DAC_READ_SEARCH + privileged: true + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /export + 
name: hpe-nfs + - mountPath: /etc/ganesha.conf + name: hpe-nfs-conf + subPath: ganesha.conf + dnsPolicy: ClusterFirst + priorityClassName: system-cluster-critical + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 + tolerations: + - effect: NoSchedule + key: csi.hpe.com/hpe-nfs + operator: Exists + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 30 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 30 + topologySpreadConstraints: + - labelSelector: + matchLabels: + spread-by: hpe-nfs + maxSkew: 1 + topologyKey: node + whenUnsatisfiable: ScheduleAnyway + volumes: + - name: hpe-nfs + persistentVolumeClaim: + claimName: hpe-nfs + - configMap: + defaultMode: 420 + items: + - key: ganesha.conf + path: ganesha.conf + name: hpe-nfs-conf + name: hpe-nfs-conf diff --git a/csi_driver/examples/standalone_nfs/base/environment.properties b/csi_driver/examples/standalone_nfs/base/environment.properties new file mode 100644 index 00000000..3c85d118 --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/environment.properties @@ -0,0 +1,9 @@ +# This is the domain associated with worker node (not inter-cluster DNS) +CLUSTER_NODE_DOMAIN_NAME=my-domain.example.com + +# The size of the backend RWO claim +PERSISTENCE_SIZE=16Gi + +# Default resource limits for the NFS server +NFS_SERVER_CPU_LIMIT=1 +NFS_SERVER_MEMORY_LIMIT=2Gi diff --git a/csi_driver/examples/standalone_nfs/base/kustomization.yaml b/csi_driver/examples/standalone_nfs/base/kustomization.yaml new file mode 100644 index 00000000..faed3e7e --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/kustomization.yaml @@ -0,0 +1,56 @@ +# HPE NFS server resources +resources: +- pvc.yaml +- service.yaml +- configmap.yaml +- deployment.yaml + +# Environment specific properties, do not edit in base. +configMapGenerator: +- name: local-conf + envs: + - environment.properties + +# Where variables needs to be interpolated. 
+configurations: +- values.yaml + +# Manifest variables +vars: +- name: PERSISTENCE_SIZE + objref: + kind: ConfigMap + name: local-conf + apiVersion: v1 + fieldref: + fieldpath: data.PERSISTENCE_SIZE + +- name: CLUSTER_NODE_DOMAIN_NAME + objref: + kind: ConfigMap + name: local-conf + apiVersion: v1 + fieldref: + fieldpath: data.CLUSTER_NODE_DOMAIN_NAME + +- name: SERVICE_SELECTOR + objref: + kind: Deployment + name: hpe-nfs + apiVersion: apps/v1 + +- name: NFS_SERVER_CPU_LIMIT + objref: + kind: ConfigMap + name: local-conf + apiVersion: v1 + fieldref: + fieldpath: data.NFS_SERVER_CPU_LIMIT + +- name: NFS_SERVER_MEMORY_LIMIT + objref: + kind: ConfigMap + name: local-conf + apiVersion: v1 + fieldref: + fieldpath: data.NFS_SERVER_MEMORY_LIMIT diff --git a/csi_driver/examples/standalone_nfs/base/pvc.yaml b/csi_driver/examples/standalone_nfs/base/pvc.yaml new file mode 100644 index 00000000..7ddeff29 --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/pvc.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: hpe-nfs +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: $(PERSISTENCE_SIZE) diff --git a/csi_driver/examples/standalone_nfs/base/service.yaml b/csi_driver/examples/standalone_nfs/base/service.yaml new file mode 100644 index 00000000..5a71655e --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/service.yaml @@ -0,0 +1,62 @@ +apiVersion: v1 +kind: Service +metadata: + name: hpe-nfs +spec: + ports: + - name: grpc + port: 49000 + protocol: TCP + targetPort: 49000 + - name: nfs-tcp + port: 2049 + protocol: TCP + targetPort: 2049 + - name: nfs-udp + port: 2049 + protocol: UDP + targetPort: 2049 + - name: nlockmgr-tcp + port: 32803 + protocol: TCP + targetPort: 32803 + - name: nlockmgr-udp + port: 32803 + protocol: UDP + targetPort: 32803 + - name: mountd-tcp + port: 20048 + protocol: TCP + targetPort: 20048 + - name: mountd-udp + port: 20048 + protocol: UDP + targetPort: 20048 + - name: portmapper-tcp + port: 111 + protocol: TCP + targetPort: 111 + - name: portmapper-udp + port: 111 + protocol: UDP + targetPort: 111 + - name: statd-tcp + port: 662 + protocol: TCP + targetPort: 662 + - name: statd-udp + port: 662 + protocol: UDP + targetPort: 662 + - name: rquotad-tcp + port: 875 + protocol: TCP + targetPort: 875 + - name: rquotad-udp + port: 875 + protocol: UDP + targetPort: 875 + selector: + app: $(SERVICE_SELECTOR) + sessionAffinity: None + type: ClusterIP diff --git a/csi_driver/examples/standalone_nfs/base/values.yaml b/csi_driver/examples/standalone_nfs/base/values.yaml new file mode 100644 index 00000000..0ae6ccde --- /dev/null +++ b/csi_driver/examples/standalone_nfs/base/values.yaml @@ -0,0 +1,17 @@ +varReference: +- kind: PersistentVolumeClaim + path: spec/resources/requests/storage +- kind: ConfigMap + path: data +- kind: Deployment + path: spec/selector/matchLabels/app +- kind: Deployment + path: spec/selector/matchLabels/app +- kind: Deployment + path: spec/template/metadata/labels/app +- kind: Deployment + path: spec/template/spec/containers/resources/limits/cpu +- kind: Deployment + path: spec/template/spec/containers/resources/limits/memory +- kind: Service + path: spec/selector/app diff --git a/csi_driver/examples/standalone_nfs/overlays/example/deployment.yaml b/csi_driver/examples/standalone_nfs/overlays/example/deployment.yaml new file mode 100644 index 00000000..0199994c --- /dev/null +++ b/csi_driver/examples/standalone_nfs/overlays/example/deployment.yaml @@ -0,0 +1,10 @@ +apiVersion: apps/v1 +kind: 
Deployment +metadata: + name: hpe-nfs +spec: + template: + spec: + securityContext: + fsGroup: 65534 + fsGroupChangePolicy: OnRootMismatch diff --git a/csi_driver/examples/standalone_nfs/overlays/example/environment.properties b/csi_driver/examples/standalone_nfs/overlays/example/environment.properties new file mode 100644 index 00000000..3c85d118 --- /dev/null +++ b/csi_driver/examples/standalone_nfs/overlays/example/environment.properties @@ -0,0 +1,9 @@ +# This is the domain associated with worker node (not inter-cluster DNS) +CLUSTER_NODE_DOMAIN_NAME=my-domain.example.com + +# The size of the backend RWO claim +PERSISTENCE_SIZE=16Gi + +# Default resource limits for the NFS server +NFS_SERVER_CPU_LIMIT=1 +NFS_SERVER_MEMORY_LIMIT=2Gi diff --git a/csi_driver/examples/standalone_nfs/overlays/example/kustomization.yaml b/csi_driver/examples/standalone_nfs/overlays/example/kustomization.yaml new file mode 100644 index 00000000..25175fb9 --- /dev/null +++ b/csi_driver/examples/standalone_nfs/overlays/example/kustomization.yaml @@ -0,0 +1,20 @@ +# Change the prefix something unique in the Namespace +namePrefix: example- + +# Existing Namespace, if applicable +# namespace: example + +# Edit the environment.properties file +configMapGenerator: +- name: local-conf + envs: + - environment.properties + behavior: replace + +# Base manifests +bases: +- ../../base + +# Local trims +patchesStrategicMerge: +- deployment.yaml diff --git a/csi_driver/img/csi_driver_architecture-1.1.0.png b/csi_driver/img/csi_driver_architecture-1.1.0.png new file mode 100644 index 00000000..be8fbc88 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-1.1.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-1.2.0.png b/csi_driver/img/csi_driver_architecture-1.2.0.png new file mode 100644 index 00000000..a3089361 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-1.2.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-1.3.0.png b/csi_driver/img/csi_driver_architecture-1.3.0.png new file mode 100644 index 00000000..ad9ad467 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-1.3.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-1.4.0.png b/csi_driver/img/csi_driver_architecture-1.4.0.png new file mode 100644 index 00000000..ca9c823c Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-1.4.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-2.0.0.png b/csi_driver/img/csi_driver_architecture-2.0.0.png new file mode 100644 index 00000000..269f58a1 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-2.0.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-2.1.0.png b/csi_driver/img/csi_driver_architecture-2.1.0.png new file mode 100644 index 00000000..e6bde6d4 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-2.1.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-2.3.0.png b/csi_driver/img/csi_driver_architecture-2.3.0.png new file mode 100644 index 00000000..51c54643 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-2.3.0.png differ diff --git a/csi_driver/img/csi_driver_architecture-2.4.1.png b/csi_driver/img/csi_driver_architecture-2.4.1.png new file mode 100644 index 00000000..2b5dfed5 Binary files /dev/null and b/csi_driver/img/csi_driver_architecture-2.4.1.png differ diff --git a/csi_driver/img/helm.png b/csi_driver/img/helm.png new file mode 100644 index 00000000..0a2f716c Binary files /dev/null and b/csi_driver/img/helm.png 
differ diff --git a/csi_driver/index.html b/csi_driver/index.html new file mode 100644 index 00000000..706d7ff6 --- /dev/null +++ b/csi_driver/index.html @@ -0,0 +1,863 @@ + + + + + + + + + + + + + + + + + + Overview - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • HPE CSI DRIVER FOR KUBERNETES »
  • +
  • Overview
  • +
  • +
  • +
+
+
+
+
+ +

Introduction

+

A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider (CSP) to perform data management operations on storage resources. The architecture of the CSI driver allows block storage vendors to implement a CSP that follows the specification (a browser friendly version).

+

The CSI driver architecture allows a complete separation of concerns between upstream Kubernetes core, SIG Storage (CSI owners), CSI driver author (HPE) and the backend CSP developer.

+

HPE CSI Driver Architecture

+
+

Tip

+

The HPE CSI Driver for Kubernetes is vendor agnostic. Any entity may leverage the driver and provide their own Container Storage Provider.

+
+

Table of Contents

+ +

Features and Capabilities

+

CSI features and capabilities gradually mature in the specification at the pace of the community. HPE keeps a close watch on differentiating features that the primary storage family of products may be suitable for implementing in CSI and Kubernetes. HPE experiments early and often. That's why it's sometimes possible to observe a certain feature being available in the CSI driver although it hasn't been announced or isn't documented.

+

Below is the official table of CSI features we track and deem readily available for use after we've officially tested and validated them in the platform matrix.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature                         | K8s maturity | Since K8s version | HPE CSI Driver
Dynamic Provisioning            | Stable       | 1.13              | 1.0.0
Volume Expansion                | Stable       | 1.24              | 1.1.0
Volume Snapshots                | Stable       | 1.20              | 1.1.0
PVC Data Source                 | Stable       | 1.18              | 1.1.0
Raw Block Volume                | Stable       | 1.18              | 1.2.0
Inline Ephemeral Volumes        | Beta         | 1.16              | 1.2.0
Volume Limits                   | Stable       | 1.17              | 1.2.0
Volume Mutator¹                 | N/A          | 1.15              | 1.3.0
Generic Ephemeral Volumes       | GA           | 1.23              | 1.3.0
Volume Groups¹                  | N/A          | 1.17              | 1.4.0
Snapshot Groups¹                | N/A          | 1.17              | 1.4.0
NFS Server Provisioner¹         | N/A          | 1.17              | 1.4.0
Volume Encryption¹              | N/A          | 1.18              | 2.0.0
Basic Topology³                 | Stable       | 1.17              | 2.5.0
Advanced Topology³              | Stable       | 1.17              | Future
Storage Capacity Tracking       | Stable       | 1.24              | Future
Volume Expansion From Source    | Stable       | 1.27              | Future
ReadWriteOncePod                | Stable       | 1.29              | Future
Volume Populator                | Beta         | 1.24              | Future
Volume Health                   | Alpha        | 1.21              | Future
Cross Namespace Snapshots       | Alpha        | 1.26              | Future
Upstream Volume Group Snapshot  | Alpha        | 1.27              | Future
Volume Attribute Classes        | Alpha        | 1.29              | Future
+

+ 1 = HPE CSI Driver for Kubernetes specific CSI sidecar. CSP support may vary.
+ 2 = Alpha features are enabled by Kubernetes feature gates and are not formally supported by HPE.
+ 3 = Topology information can only be used to describe accessibility relationships between a set of nodes and a single backend using a StorageClass. +

+

Depending on the CSP, it may support a number of different snapshotting, cloning and restoring operations by taking advantage of StorageClass parameter overloading. Please see the respective CSP for additional functionality.

+

Refer to the official table of feature gates in the Kubernetes docs to find availability of beta and alpha features. HPE provides limited support on non-GA CSI features. Please file any issues, questions or feature requests here. You may also join our Slack community to chat with HPE folks close to this project. We hang out in #Alletra, #NimbleStorage, #3par-primera and #Kubernetes; sign up at slack.hpedev.io and log in at hpedev.slack.com.

+
+

Tip

+

Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a Helm chart or an Operator.

+
+

Compatibility and Support

+

These are the combinations HPE has tested and can provide official support services around for each of the CSI driver releases. Each Container Storage Provider has its own requirements in terms of storage platform OS and may have other constraints not listed here.

+
+

Note

+

For Kubernetes 1.12 and earlier, please see the legacy FlexVolume drivers. Do note that the FlexVolume drivers are being deprecated.

+
+

+

HPE CSI Driver for Kubernetes 2.5.0

+

Release highlights:

+
    +
  • Support for Kubernetes 1.30 and OpenShift 4.16
  • +
  • Introducing CSI Topology support for StorageClasses
  • +
  • A "Node Monitor" has been added to improve device management
  • +
  • Support for attempting automatic filesystem repairs in the event of failed mounts ("fsRepair" StorageClass parameter)
  • +
  • Improved handling of iSCSI CHAP credentials
  • +
  • Added "nfsNodeSelector", "nfsResourceRequestsCpuM" and "nfsResourceRequestsMemoryMi" StorageClass parameters
  • +
  • New Helm Chart parameters to control resource requests and limits for node, controller and CSP containers
  • +
  • Reworked image handling in the Helm Chart to improve supportability
  • +
  • Various improvements in accessMode handling
  • +
+

Upgrade considerations:

+ +
+

Note

+

HPE CSI Driver v2.5.0 is deployed with v2.5.1 of the Helm chart and Operator

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes     | 1.27-1.30¹
Helm Chart     | v2.5.1 on ArtifactHub
Operators      | v2.5.1 on OperatorHub
               | v2.5.1 via OpenShift console
Worker OS      | Red Hat Enterprise Linux² 7.x, 8.x, 9.x, Red Hat CoreOS 4.14-4.16
               | Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04
               | SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro⁴ equivalents
Platforms³     | Alletra Storage MP⁵ 10.2.x - 10.4.x
               | Alletra OS 9000 9.3.x - 9.5.x
               | Alletra OS 5000/6000 6.0.0.x - 6.1.2.x
               | Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x
               | Primera OS 4.3.x - 4.5.x
               | 3PAR OS 3.3.x
Data protocols | Fibre Channel, iSCSI
Filesystems    | XFS, ext3/ext4, btrfs, NFSv4*
Release notes  | v2.5.0 on GitHub
Blogs          | HPE CSI Driver for Kubernetes 2.5.0: Improved stateful workload resilience and robustness
+ +

+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims for volumeMode: Filesystem.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE. While RHEL 7 and its derivatives will work, the host OS has been EOL'd and support is limited.
+ 3 = Learn about each data platform's team support commitment.
+ 4 = SLE Micro nodes may need to be conformed manually: run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start.
+ 5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
+

+

HPE CSI Driver for Kubernetes 2.4.2

+

Release highlights:

+
    +
  • Patch release
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes     | 1.26-1.29¹
Helm Chart     | v2.4.2 on ArtifactHub
Operators      | v2.4.2 on OperatorHub
               | v2.4.2 via OpenShift console
Worker OS      | Red Hat Enterprise Linux² 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15
               | Ubuntu 16.04, 18.04, 20.04, 22.04
               | SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro⁴ equivalents
Platforms³     | Alletra Storage MP⁵ 10.2.x - 10.4.x
               | Alletra OS 9000 9.3.x - 9.5.x
               | Alletra OS 5000/6000 6.0.0.x - 6.1.2.x
               | Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x
               | Primera OS 4.3.x - 4.5.x
               | 3PAR OS 3.3.x
Data protocols | Fibre Channel, iSCSI
Filesystems    | XFS, ext3/ext4, btrfs, NFSv4*
Release notes  | v2.4.2 on GitHub
+ +

+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+ 4 = SLE Micro nodes may need to be conformed manually: run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start.
+ 5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
+

+

HPE CSI Driver for Kubernetes 2.4.1

+

Release highlights:

+
    +
  • HPE Alletra Storage MP support
  • +
  • Kubernetes 1.29 support
  • +
  • Full KubeVirt, OpenShift Virtualization and SUSE Harvester support for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR
  • +
  • Full ARM64 support for HPE Alletra 5000/6000 and Nimble Storage
  • +
  • Support for foreign StorageClasses with the NFS Server Provisioner
  • +
  • SUSE Linux Enterprise Micro OS (SLE Micro) support
  • +
+

Upgrade considerations:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes     | 1.26-1.29¹
Helm Chart     | v2.4.1 on ArtifactHub
Operators      | v2.4.1 on OperatorHub
               | v2.4.1 via OpenShift console
Worker OS      | Red Hat Enterprise Linux² 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15
               | Ubuntu 16.04, 18.04, 20.04, 22.04
               | SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro⁴ equivalents
Platforms³     | Alletra Storage MP⁵ 10.2.x - 10.3.x
               | Alletra OS 9000 9.3.x - 9.5.x
               | Alletra OS 5000/6000 6.0.0.x - 6.1.2.x
               | Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x
               | Primera OS 4.3.x - 4.5.x
               | 3PAR OS 3.3.x
Data protocols | Fibre Channel, iSCSI
Filesystems    | XFS, ext3/ext4, btrfs, NFSv4*
Release notes  | v2.4.1 on GitHub
Blogs          | Introducing HPE Alletra Storage MP to HPE CSI Driver for Kubernetes
+ +

+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+ 4 = SLE Micro nodes may need to be conformed manually: run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start.
+ 5 = The HPE CSI Driver for Kubernetes only supports HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.
+

+

HPE CSI Driver for Kubernetes 2.4.0

+

Release highlights:

+
    +
  • Kubernetes 1.27 and 1.28 support
  • +
  • KubeVirt and OpenShift Virtualization support for Nimble/Alletra 5000/6000
  • +
  • Enhanced scheduling for the NFS Server Provisioner
  • +
  • Multiarch images (Linux ARM64/AMD64) for the CSI driver components and Alletra 9000 CSP
  • +
  • Major updates to SIG Storage images
  • +
+

Upgrade considerations:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Kubernetes     | 1.25-1.28¹
Helm Chart     | v2.4.0 on ArtifactHub
Operators      | v2.4.0 on OperatorHub
               | v2.4.0 via OpenShift console
Worker OS      | RHEL² 7.x, 8.x, 9.x, RHCOS 4.12-4.14
               | Ubuntu 16.04, 18.04, 20.04, 22.04
               | SLES 15 SP3, SP4, SP5
Platforms³     | Alletra OS 9000 9.3.x - 9.5.x
               | Alletra OS 5000/6000 6.0.0.x - 6.1.1.x
               | Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x
               | Primera OS 4.3.x - 4.5.x
               | 3PAR OS 3.3.x
Data protocols | Fibre Channel, iSCSI
Filesystems    | XFS, ext3/ext4, btrfs, NFSv4*
Release notes  | v2.4.0 on GitHub
Blogs          | Introduction to new workload paradigms with HPE CSI Driver for Kubernetes
+ +

+ * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows "ReadWriteMany" PersistentVolumeClaims.
+ 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21.
+ 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE.
+ 3 = Learn about each data platform's team support commitment.
+

+

Release Archive

+

HPE currently supports up to three minor releases of the HPE CSI Driver for Kubernetes.

+ +

Known Limitations

+
    +
  • Always check with the Kubernetes vendor distribution which CSI features are available for use and supported by the vendor.
  • +
  • When using Kubernetes in virtual machines on VMware vSphere, OpenStack or similar, iSCSI is the only supported data protocol for the HPE CSI Driver when using block storage. The CSI driver does not support NPIV.
  • +
  • Ephemeral, transient or non-persistent Kubernetes nodes are not supported unless the /etc/hpe-storage directory persists across node upgrades or reboots. The path is relocatable using a custom Helm chart or deployment manifest by altering the mountPath parameter for the directory.
  • +
  • The CSI driver supports a fixed number of volumes per node. Inspect the current limit by running kubectl get csinodes -o yaml and inspecting .spec.drivers.allocatable for "csi.hpe.com". The "count" element contains how many volumes the node can attach from the HPE CSI Driver (default is 100). An illustrative excerpt is shown after this list.
  • +
  • The HPE CSI Driver uses host networking for the node driver. Some CNIs have flaky implementations which prevent the CSI driver components from communicating properly. Especially notorious is Flannel on K3s. Use Calico if possible for the widest compatibility.
  • +
  • The NFS Server Provisioner and each of the CSPs have known limitations listed separately.
  • +
+
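For illustration only, a trimmed excerpt of kubectl get csinodes -o yaml for a single node might look like the sketch below. The nodeID value is a placeholder and the count reflects the default mentioned above; actual values depend on the node and driver configuration.

spec:
  drivers:
  - name: csi.hpe.com
    nodeID: <node-specific ID>
    allocatable:
      count: 100   # maximum number of volumes this node can attach from the HPE CSI Driver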

iSCSI CHAP Considerations

+

If iSCSI CHAP is being used in the environment, consider the following.

+

Existing PVs and iSCSI sessions

+

It's not recommended to retrofit CHAP into an existing environment where PersistentVolumes are already provisioned and attached. If necessary, all iSCSI sessions need to be logged out from and the CSI driver Helm chart needs to be installed with cluster-wide iSCSI CHAP credentials for iSCSI CHAP to be effective; otherwise existing non-authenticated sessions will be reused.

+

CSI driver 2.5.0 and Above

+

In 2.5.0 and later the CHAP credentials must be supplied by a separate Secret. The Secret may be supplied when installing the Helm Chart (the Secret must exist prior to installation) or referenced in the StorageClass.

+
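As a rough sketch only, a cluster-wide CHAP Secret could look something like the example below. The Secret key names and the way it is referenced from the Helm Chart or StorageClass are assumptions here; verify them against the "Enable iSCSI CHAP" section of the user documentation before use.

apiVersion: v1
kind: Secret
metadata:
  name: hpe-chap-secret          # hypothetical name, must exist before the Helm Chart is installed
  namespace: hpe-storage
stringData:
  # Key names are assumptions; confirm them in the user documentation
  chapUser: my-chap-user
  chapPassword: at-least-twelve-characters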

Upgrade Considerations

+

When using CHAP with 2.4.2 or older the CHAP credentials were provided in clear text in the Helm Chart. To continue to use CHAP for those existing PersistentVolumes, a CHAP Secret needs to be created and referenced in the Helm Chart install.

+

New StorageClasses may reference the same Secret, but it's recommended to use a different Secret to distinguish legacy from new PersistentVolumes.

+

Enable iSCSI CHAP

+

How to enable iSCSI CHAP in the current version of the HPE CSI Driver is available in the user documentation.

+

CSI driver 1.3.0 to 2.4.2

+

CHAP is an optional part of the initial deployment of the driver with parameters passed to Helm or the Operator. For object definitions, the CHAP_USER and CHAP_PASSWORD need to be supplied to the csi-node-driver. The CHAP username and secret are picked up in the hpenodeinfo Custom Resource Definition (CRD). The CSP is under contract to create the user if it doesn't exist on the backend.

+

CHAP is a good measure to prevent unauthorized access to iSCSI targets, but it does not encrypt data on the wire. CHAP secrets should be at least twelve characters in length.

+

CSI driver 1.2.1 and Below

+

In version 1.2.1 and below, the CSI driver did not support CHAP natively. CHAP must be enabled manually on the worker nodes before deploying the CSI driver on the cluster. This also needs to be applied to new worker nodes before they join the cluster.

+

Kubernetes Feature Gates

+

Different features mature at different rates. Refer to the official table of feature gates in the Kubernetes docs.

+

The following guidelines apply to which feature gates were introduced as alpha for the corresponding version of Kubernetes. For example, ExpandCSIVolumes was introduced in 1.14 but is still alpha in 1.15, hence you need to enable that feature gate in 1.15 as well if you want to use it.

+

Kubernetes 1.13

+
    +
  • --allow-privileged flag must be set to true for the API server
  • +
+

Kubernetes 1.14

+
    +
  • --allow-privileged flag must be set to true for the API server
  • +
  • --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true feature gate flags must be set to true for both the API server and kubelet for resize support
  • +
+

Kubernetes 1.15

+
    +
  • --allow-privileged flag must be set to true for the API server
  • +
  • --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true feature gate flags must be set to true for both the API server and kubelet for resize support
  • +
  • --feature-gates=CSIInlineVolume=true feature gate flag must be set to true for both the API server and kubelet for pod inline volumes (Ephemeral Local Volumes) support
  • +
  • --feature-gates=VolumePVCDataSource=true feature gate flag must be set to true for both the API server and kubelet for Volume cloning support
  • +
+

Kubernetes 1.19

+
    +
  • --feature-gates=GenericEphemeralVolume=true feature gate flag needs to be passed to the api-server, scheduler, controller-manager and kubelet to enable Generic Ephemeral Volumes
  • +
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/csi_driver/install_legacy.html b/csi_driver/install_legacy.html new file mode 100644 index 00000000..b9b80802 --- /dev/null +++ b/csi_driver/install_legacy.html @@ -0,0 +1,334 @@ + + + + + + + + + + + + + + + + + + Install legacy - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Install legacy
  • +
  • +
  • +
+
+
+
+
+ +

Legacy Versions

+

Older versions of the HPE CSI Driver for Kubernetes are kept here for reference. Check the CSI driver GitHub repo for the appropriate YAML files to declare on the cluster for the respective version of Kubernetes.

+
+

Important

+

The resources for CSPs, CRDs and ConfigMaps are available in each respective CSI driver version directory here. Use the below version mappings as reference.

+
+

Kubernetes 1.25

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.0/hpe-csi-k8s-1.25.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.4.0 for Kubernetes 1.25.

+
+

Kubernetes 1.24

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.24.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.3.0 for Kubernetes 1.24.

+
+

Kubernetes 1.23

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.23.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.3.0 for Kubernetes 1.23.

+
+

Kubernetes 1.22

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.22.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.2.0 for Kubernetes 1.22.

+
+

Kubernetes 1.21

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.21.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.2.0 for Kubernetes 1.21.

+
+

Kubernetes 1.20

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.1.1/hpe-csi-k8s-1.20.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.1.1 for Kubernetes 1.20.

+
+

Kubernetes 1.19

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.19.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.0.0 for Kubernetes 1.19.

+
+

Kubernetes 1.18

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.18.yaml
+

+
+

Note

+

Latest supported CSI driver version is 2.0.0 for Kubernetes 1.18.

+
+

Kubernetes 1.17

+

kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v1.4.0/hpe-csi-k8s-1.17.yaml
+

+
+

Note

+

Latest supported CSI driver version is 1.4.0 for Kubernetes 1.17.

+
+

Kubernetes 1.16

+ +
+

Note

+

Latest supported CSI driver version is 1.3.0 for Kubernetes 1.16.

+
+

Kubernetes 1.15

+ +
+

Note

+

Latest supported CSI driver version is 1.3.0 for Kubernetes 1.15.

+
+

Kubernetes 1.14

+ +
+

Note

+

Latest supported CSI driver version is 1.2.0 for Kubernetes 1.14.

+
+

Kubernetes 1.13

+ +
+

Note

+

Latest supported CSI driver version is 1.1.0 for Kubernetes 1.13.

+
+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/csi_driver/metrics.html b/csi_driver/metrics.html new file mode 100644 index 00000000..fd1e820b --- /dev/null +++ b/csi_driver/metrics.html @@ -0,0 +1,466 @@ + + + + + + + + + + + + + + + + + + Metrics - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • HPE CSI DRIVER FOR KUBERNETES »
  • +
  • Metrics
  • +
  • +
  • +
+
+
+
+
+ +

HPE CSI Info Metrics Provider for Prometheus

+

The HPE CSI Driver for Kubernetes may be accompanied by a Prometheus metrics endpoint to provide metadata about the volumes provisioned by the CSI driver and supporting backends. It's conventionally deployed with HPE Storage Array Exporter for Prometheus to provide a richer set of metrics from the backend storage systems.

+ +

Metrics Provided

+

The exporter provides two metrics, "hpestoragecsi_volume_info" and "hpestoragecsi_backend_info".

+

Volume Info

+ + + + + + + + + + + + + + + + + +
Metric                    | Type  | Description                                                  | Value
hpestoragecsi_volume_info | Gauge | Indicates a volume whose provisioner is the HPE CSI Driver.  | 1
+

This metric includes the following labels.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Label         | Description
backend       | Backend hostname or IP address as defined in the Secret.
pv            | PersistentVolume name.
pvc           | PersistentVolumeClaim name.
pvc_namespace | PersistentVolumeClaim Namespace.
storage_class | StorageClass used to provision the PersistentVolume.
volume        | Volume handle used by the backend storage system.
+

Backend Info

+ + + + + + + + + + + + + + + + + +
Metric                     | Type  | Description                                                                | Value
hpestoragecsi_backend_info | Gauge | Indicates a storage system for which the HPE CSI driver is a provisioner. | 1
+

This metric includes the following labels.

+ + + + + + + + + + + + + +
Label   | Description
backend | Backend hostname or IP address as defined in the Secret.
+

Deployment

+

The exporter may be installed either via Helm or through YAML manifests with the object definitions. It's recommended to use Helm as it's more convenient to manage the configuration of the deployment.

+
+

Note

+

It's recommended to add a "cluster" target label to the deployment. The label is used in the provided Grafana dashboards.

+
+

Helm

+

The Helm chart is available on Artifact Hub. Instructions on how to manage and install the chart are available within the chart documentation.

+ +
+

Note

+

It's highly recommended to install the CSI Info Metrics Provider with Helm.

+
+

Rancher

+

Since Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 it's possible to install the HPE CSI Info Metrics Provider through the Apps interface in Rancher to use with Rancher Monitoring. Please see the Rancher partner page for more information.

+

Advanced Install

+

Before beginning an advanced install, determine how Prometheus will be deployed on the Kubernetes cluster as it will dictate how the scrape target will be configured with either a Service annotation or a ServiceMonitor CRD.

+

Start by downloading the manifest, which needs to be modified before applying to the cluster.

+

Version 1.0.3

+

Supports HPE CSI Driver for Kubernetes 2.0.0 and later.

+

wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics.yaml
+

+

Optional ServiceMonitor definition:

+

wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics-service-monitor.yaml
+

+

Configuring Advanced Install

+

Update the main container parameters and optionally add service labels and annotations.

+

In the "hpe-csi-info-metrics" Deployment at .spec.template.spec.containers[0].args in "hpe-csi-info-metrics.yaml":

+

          args:
+            - "--telemetry.addr=:9099"
+            - "--telemetry.path=/metrics"
+            # IMPORTANT: Uncomment this argument to confirm your
+            # acceptance of the HPE End User License Agreement at
+            # https://www.hpe.com/us/en/software/licensing.html
+            #- "--accept-eula"
+

+

Remove the # in front of --accept-eula to accept the HPE license restrictions.

+

In the "hpe-csi-info-metrics-service" Service:

+

metadata:
+  name: hpe-csi-info-metrics-service
+  namespace: hpe-storage
+  labels:
+    app: hpe-csi-info-metrics
+    # Optionally add labels, for example to be included in Prometheus
+    # metrics via a targetLabels setting in a ServiceMonitor spec
+    #cluster: my-cluster
+  # Optionally add annotations, for example to configure it as a
+  # scrape target when using the Prometheus Helm chart's default
+  # configuration.
+  #annotations:
+  #  "prometheus.io/scrape": "true"
+

+
    +
  • Uncomment and apply any custom labels. It's recommended to add a "cluster" label in order to use the provided Grafana dashboards.
  • +
  • If Prometheus has been deployed without the Operator, uncomment the annotation.
  • +
+

Apply the manifest:

+

kubectl apply -f hpe-csi-info-metrics.yaml
+

+

Optionally, if using the Prometheus Operator, add any additional labels in "hpe-csi-info-metrics-service-monitor.yaml":

+

  # Corresponding labels on the CSI Info Metrics service are added to
+  # the scraped metrics
+  #targetLabels:
+  #  - cluster
+

+

Apply the manifest:

+

kubectl apply -f hpe-csi-info-metrics-service-monitor.yaml
+

+
+

Pro Tip!

+

Avoid hand editing manifests by using the Helm chart.

+
+

Grafana Dashboards

+

Example Grafana dashboards, provided as is, are hosted on grafana.com.

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/csi_driver/monitor.html b/csi_driver/monitor.html new file mode 100644 index 00000000..6e486eee --- /dev/null +++ b/csi_driver/monitor.html @@ -0,0 +1,322 @@ + + + + + + + + + + + + + + + + + + Pod Monitor - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • HPE CSI DRIVER FOR KUBERNETES »
  • +
  • Pod Monitor
  • +
  • +
  • +
+
+
+
+
+ +

Introduction

+

The HPE CSI Driver for Kubernetes includes a Kubernetes Pod Monitor. Specifically, it looks for Pods with the label monitored-by: hpe-csi that have the NodeLost status set on them. This usually occurs if a node becomes unresponsive or partitioned due to a network outage. The Pod Monitor will delete the affected Pod and associated HPE CSI Driver VolumeAttachment to allow Kubernetes to reschedule the workload on a healthy node.

+ +

The Pod Monitor is mandatory and automatically applied for the RWX server Deployment managed by the HPE CSI Driver. It may be used for any Pods on the Kubernetes cluster to perform a more graceful automatic recovery rather than performing a manual intervention to resurrect stuck Pods.

+

CSI Driver Parameters

+

The Pod Monitor is part of the "hpe-csi-controller" Deployment served by the "hpe-csi-driver" container. It's enabled by default and the Pod Monitor interval is set to 30 seconds.

+

Edit the CSI driver deployment to change the interval or disable the Pod Monitor.

+

kubectl edit -n hpe-storage deploy/hpe-csi-controller
+

+

The parameters that control the "hpe-csi-driver" are the following:

+

        - --pod-monitor
+        - --pod-monitor-interval=30
+

+

Pod Inclusion

+

Enable the Pod Monitor for a single replica Deployment by labeling the Pod (assumes an existing PVC name "my-pvc" exists).

+

apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-app
+  labels:
+    app: my-app
+spec:
+  replicas: 1
+  strategy:
+    type: Recreate
+  selector:
+    matchLabels:
+      app: my-app
+  template:
+    metadata:
+      labels:
+        monitored-by: hpe-csi
+        app: my-app
+    spec:
+      containers:
+      - image: busybox
+        name: busybox
+        command:
+          - "sleep"
+          - "4800"
+        volumeMounts:
+        - mountPath: /data
+          name: my-vol
+      volumes:
+      - name: my-vol
+        persistentVolumeClaim:
+          claimName: my-pvc
+

+
+

Danger

+

It's imperative that failure scenarios that are being mitigated for the application are properly tested before being put into production. It's up to the CSP to fence the PersistentVolume attached to an isolated node when a new "NodePublish" request comes in. Node isolation is the most dangerous scenario as the workload continues to run on the node when disconnected from the outside world. Simply shut down the kubelet to test this scenario and ensure the block device becomes inaccessible to the isolated node.

+
+

Limitations

+
    +
  • Kubernetes provides automatic recovery for your applications, not high availability. Expect applications to take minutes (up to 8 minutes with the default tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable) to fully recover during a node failure or network partition using the Pod Monitor for Pods with PersistentVolumeClaims.
  • +
  • HPE CSI Driver 2.3.0 to 2.4.1 are ineffective on StatefulSets due to an upstream API update that did not take the force flag into account.
  • +
  • Using the Pod Monitor on a workload controller besides a Deployment configured with .spec.strategy.type "Recreate" or a StatefulSet is unsupported. Using other settings and controllers may have undesired side effects, such as "multi-attach" errors for PersistentVolumeClaims, and may delay recovery.
  • +
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/csi_driver/operations.html b/csi_driver/operations.html new file mode 100644 index 00000000..92fb7db3 --- /dev/null +++ b/csi_driver/operations.html @@ -0,0 +1,890 @@ + + + + + + + + + + + + + + + + + + Auxiliary Operations - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • HPE CSI DRIVER FOR KUBERNETES »
  • +
  • Auxiliary Operations
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

The documentation in this section illustrates officially HPE supported procedures to perform maintenance tasks on the CSI driver outside the scope of deploying and uninstalling the driver.

+ +

Migrate Encrypted Volumes

+

For persistent volumes created with v2.1.1 or below using volume encryption, the CSI driver uses LUKS2 (Wikipedia: Linux Unified Key Setup) and can't expand the PersistentVolumeClaim. With v2.2.0 and above, LUKS1 is used and the CSI driver is capable of expanding the PVC.

+

This procedure migrates (copies) data from LUKS2 to LUKS1 PVCs to allow expansion of the volume.

+
+

Note

+

It's not a limitation of LUKS2 that prevents expansion but rather how the CSI driver interacts with the host.

+
+

Assumptions

+

These are the assumptions made throughout this procedure.

+
    +
  • Data to be migrated has a good backup to restore to, not just a snapshot.
  • +
  • HPE CSI Driver for Kubernetes v2.3.0 or later installed.
  • +
  • Worker nodes with access to the Quay registry and SCOD.
  • +
  • Access to the commands kubectl, curl, jq and yq.
  • +
  • Cluster privileges to manipulate PersistentVolumes.
  • +
  • None of the commands executed should return errors or have non-zero exit codes.
  • +
  • Only ReadWriteOnce PVCs are covered.
  • +
  • No custom PVC annotations.
  • +
+
+

Tip

+

There are many different ways to copy PVCs. These steps outline and use one particular method developed and tested by HPE; similar workflows may be applied with other tools and procedures.

+
+

Prepare the Workload and Persistent Volume Claims

+

First, identify the PersistentVolume to migrate from and set shell variables.

+

export OLD_SRC_PVC=<insert your existing PVC name here>
+export OLD_SRC_PV=$(kubectl get pvc -o json | \
+       jq -r ".items[] | \
+        select(.metadata.name | \
+        test(\"${OLD_SRC_PVC}\"))".spec.volumeName)
+

+
+

Important

+

Ensure these shell variables are set at all times.

+
+

In order to copy data out of a PVC, the running workload needs to be disassociated from the PVC. It's not possible to scale the replicas to zero, the exception being ReadWriteMany PVCs, which could lead to data inconsistency problems. These procedures assume application consistency by having the workload shut down.

+

It's out of scope for this procedure to demonstrate how to shut down a particular workload. Ensure there are no volumeattachments associated with the PersistentVolume.

+

kubectl get volumeattachment -o json | \
+ jq -r ".items[] | \
+  select(.spec.source.persistentVolumeName | \
+  test(\"${OLD_SRC_PV}\"))".spec.source
+

+
+

Tip

+

For large volumeMode: Filesystem PVCs where copying data may take days, it's recommended to use the Optional Workflow with Filesystem Persistent Volume Claims that utilizes the PVC dataSource capability.

+
+

Create a new Persistent Volume Claim and Update Retain Policies

+

Create a new PVC named "new-pvc" with enough space to host the data from the old source PVC.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: new-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+  volumeMode: Filesystem
+

+
+

Important

+

If the source PVC is a raw block volume, ensure volumeMode: Block is set on the new PVC.

+
+

Edit and set the shell variables for the newly created PVC.

+

export NEW_DST_PVC_SIZE=32Gi
+export NEW_DST_PVC_VOLMODE=Filesystem
+export NEW_DST_PVC=new-pvc
+export NEW_DST_PV=$(kubectl get pvc -o json | \
+       jq -r ".items[] | \
+       select(.metadata.name | \
+       test(\"${NEW_DST_PVC}\"))".spec.volumeName)
+

+
+

Hint

+

The PVC name "new-pvc" is a placeholder name. When the procedure is done, the PVC will have its original name restored.

+
+
Important Validation Steps
+

At this point, there should be six shell variables declared. Example:

+

$ env | grep _PV
+NEW_DST_PVC_SIZE=32Gi
+NEW_DST_PVC=new-pvc
+OLD_SRC_PVC=old-pvc <-- This should be the original name of the PVC
+NEW_DST_PVC_VOLMODE=Filesystem
+NEW_DST_PV=pvc-ad7a05a9-c410-4c63-b997-51fb9fc473bf
+OLD_SRC_PV=pvc-ca7c2f64-641d-4265-90f8-4aed888bd2c5
+

+

Regardless of the retainPolicy set in the StorageClass, ensure the persistentVolumeReclaimPolicy is set to "Retain" for both PVs.

+

kubectl patch pv/${OLD_SRC_PV} pv/${NEW_DST_PV} \
+ -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+

+
+

Data Loss Warning

+

It's EXTREMELY important that no errors are returned from the above command. Ignoring errors here WILL lead to data loss.

+
+

Validate the "persistentVolumeReclaimPolicy".

+

kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \
+ jq -r ".items[] | \
+  select(.metadata.name)".spec.persistentVolumeReclaimPolicy
+

+
+

Important

+

The above command should output nothing but two lines with the word "Retain" on them.

+
+

Copy Persistent Volume Claim and Reset

+

In this phase, the data will be copied from the original PVC to the new PVC with a Job submitted to the cluster. Different tools are used to perform the copy operation; ensure you pick the Job matching the correct volumeMode.

+

PVCs with volumeMode: Filesystem

+

curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-file.yaml | \
+  yq "( select(.spec.template.spec.volumes[] | \
+    select(.name == \"src-pv\") | \
+    .persistentVolumeClaim.claimName = \"${OLD_SRC_PVC}\")
+    " | kubectl apply -f-
+

+

Wait for the Job to complete.

+

kubectl get job.batch/pvc-copy-file -w
+

+

Once the Job has completed, validate exit status and log files.

+

kubectl get job.batch/pvc-copy-file -o jsonpath='{.status.succeeded}'
+kubectl logs job.batch/pvc-copy-file
+

+

Delete the Job from the cluster.

+

kubectl delete job.batch/pvc-copy-file
+

+

Proceed to restart the workload.

+

PVCs with volumeMode: Block

+

curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-block.yaml | \
+  yq "( select(.spec.template.spec.volumes[] | \
+    select(.name == \"src-pv\") | \
+    .persistentVolumeClaim.claimName = \"${OLD_SRC_PVC}\")
+    " | kubectl apply -f-
+

+

Wait for the Job to complete.

+

kubectl get job.batch/pvc-copy-block -w
+

+
+

Hint

+

Data is copied block for block, verbatim, regardless of how much application data is stored in the block devices.

+
+

Once the Job has completed, validate exit status and log files.

+

kubectl get job.batch/pvc-copy-block -o jsonpath='{.status.succeeded}'
+kubectl logs job.batch/pvc-copy-block
+

+

Delete the Job from the cluster.

+

kubectl delete job.batch/pvc-copy-block
+

+

Proceed to restart the workload.

+

Restart the Workload

+

This step requires both the old source PVC and the new destination PVC to be deleted. Once again, ensure the correct persistentVolumeReclaimPolicy is set on the PVs.

+

kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \
+ jq -r ".items[] | \
+  select(.metadata.name)".spec.persistentVolumeReclaimPolicy
+

+
+

Important

+

The above command should output nothing but two lines with the word "Retain" on them; if not, revisit Important Validation Steps to apply the policy and ensure environment variables are set correctly.

+
+

Delete the PVCs.

+

kubectl delete pvc/${OLD_SRC_PVC} pvc/${NEW_DST_PVC}
+

+

Next, allow the new PV to be reclaimed.

+

kubectl patch pv ${NEW_DST_PV} -p '{"spec":{"claimRef": null }}'
+

+

Next, create a PVC with the old source name and ensure it matches the size of the new destination PVC.

+

curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy.yaml | \
+  yq ".spec.volumeName = \"${NEW_DST_PV}\" | \
+    .metadata.name = \"${OLD_SRC_PVC}\" | \
+    .spec.volumeMode = \"${NEW_DST_PVC_VOLMODE}\" | \
+    .spec.resources.requests.storage = \"${NEW_DST_PVC_SIZE}\" \
+    " | kubectl apply -f-
+

+

Verify the new PVC is "Bound" to the correct PV.

+

kubectl get pvc/${OLD_SRC_PVC} -o json | \
+  jq -r ". | \
+    select(.spec.volumeName == \"${NEW_DST_PV}\").metadata.name"
+

+

If the command is successful, it should output your original PVC name.

+

At this point the original workload should be deployed, verified and resumed.

+

Optionally, the old source PV may be removed.

+

kubectl delete pv/${OLD_SRC_PV}
+

+

Optional Workflow with Filesystem Persistent Volume Claims

+

If there's a lot of content (millions of files, terabytes of data) that needs to be transferred in a volumeMode: Filesystem PVC, it's recommended to transfer content incrementally. This is achieved by substituting the "old-pvc" with a dataSource clone of the running workload and performing the copy from the clone onto the "new-pvc".

+

After the first transfer completes, the copy job may be recreated as many times as needed with a fresh clone of "old-pvc" until the downtime window has shrunk to an acceptable duration. For the final transfer, the actual source PVC will be used instead of the clone.

+

This is an example PVC.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: clone-of-pvc
+spec:
+  dataSource:
+    name: this-is-the-current-prod-pvc
+    kind: PersistentVolumeClaim
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+

+
+

Tip

+

The capacity of the dataSource clone must match the original PVC.

+
+

Enabling and setting up the CSI snapshotter and related CRDs is not necessary but it's recommended to be familiar with using CSI snapshots.

+

Upgrade NFS Servers

+

In the event the CSI driver contains updates to the NFS Server Provisioner, any running NFS server needs to be updated manually.

+

Upgrade to v2.5.0

+

Any prior deployed NFS servers may be upgraded to v2.5.0.

+

Upgrade to v2.4.2

+

No changes to NFS Server Provisioner image between v2.4.1 and v2.4.2.

+

Upgrade to v2.4.1

+

Any prior deployed NFS servers may be upgraded to v2.4.1.

+
+

Important

+

From v2.4.0 onwards the NFS servers are deployed with default resource limits, and in v2.5.0 resource requests were added. Those won't be applied to running NFS servers, only new ones.

+
+

Assumptions

+
    +
  • HPE CSI Driver or Operator v2.4.1 installed.
  • +
  • All running NFS servers are running in the "hpe-nfs" Namespace.
  • +
  • Worker nodes with access to the Quay registry and SCOD.
  • +
  • Access to the commands kubectl, yq and curl.
  • +
  • Cluster privileges to manipulate resources in the "hpe-nfs" Namespace.
  • +
  • None of the commands executed should return errors or have non-zero exit codes.
  • +
+
+

Seealso

+

If NFS Deployments are scattered across Namespaces, use the Validation steps to find where they reside.

+
+

Patch Running NFS Servers

+

When patching the NFS Deployments, the Pods will restart and cause a pause in I/O for the NFS clients with active mounts. The clients will recover gracefully once the NFS Pod is running again.

+

Patch all NFS Deployments with the following.

+

curl -s https://scod.hpedev.io/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml | \
+  kubectl patch -n hpe-nfs \
+  $(kubectl get deploy -n hpe-nfs -o name) \
+  --patch-file=/dev/stdin
+

+
+

Tip

+

If it's desired to patch one NFS Deployment at a time, replace the shell substitution with a Deployment name.

+
+

Validation

+

This command will list all "hpe-nfs" Deployments across the entire cluster. Each Deployment should be using v3.0.5 of the "nfs-provisioner" image after the upgrade is complete.

+

kubectl get deploy -A -o yaml | \
+  yq -r '.items[] | [] + { "Namespace": select(.spec.template.spec.containers[].name == "hpe-nfs").metadata.namespace, "Deployment": select(.spec.template.spec.containers[].name == "hpe-nfs").metadata.name, "Image": select(.spec.template.spec.containers[].name == "hpe-nfs").spec.template.spec.containers[].image }'
+

+
+

Note

+

The above line is very long.

+
+

Manual Node Configuration

+

With the release of HPE CSI Driver v2.4.0 it's possible to completely disable the node conformance and node configuration performed by the CSI node driver at startup. This transfers the responsibility from the HPE CSI Driver to the Kubernetes cluster administrator to ensure worker nodes boot with a supported configuration.

+
+

Important

+

This feature is mainly for users who require 100% control of the worker nodes.

+
+

Stages of Initialization

+

There are two stages of initialization the administrator can control through parameters in the Helm chart.

+

disableNodeConformance

+

The node conformance runs with the entrypoint of the node driver container. The conformance inserts and runs a systemd service on the node that installs all required packages to allow the node to attach block storage devices and mount NFS exports. It starts all the required services and configures an important udev rule on the worker node.

+

This flag was intended to allow administrators to run the CSI driver on nodes with an unsupported or unconfigured package manager.

+

If node conformance needs to be disabled for any reason, these packages and services need to be installed and running prior to installing the HPE CSI Driver:

+
    +
  • iSCSI (not necessary when using FC)
  • +
  • Multipath
  • +
  • XFS programs/utilities
  • +
  • NFSv4 client
  • +
+

Package names and services vary greatly between different Linux distributions and it's the system administrator's duty to ensure these are available to the HPE CSI Driver.

+

disableNodeConfiguration

+

When disabling node configuration, the CSI node driver will not touch the node at all. Besides indirectly disabling node conformance, all attempts to write configuration files or manipulate services during runtime are disabled.

+
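A minimal sketch of a Helm values file using these two parameters is shown below. The exact value names and structure should be verified against the chart's values.yaml on Artifact Hub before use.

# values.yaml (sketch; parameter names taken from this section)
disableNodeConformance: true     # skip package installation and service conformance at node driver startup
disableNodeConfiguration: true   # do not write configuration files or manipulate services at runtime
# Install with something like:
#   helm install hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage -f values.yaml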

Mandatory Configuration

+

These steps are REQUIRED for disabling either node configuration or conformance.

+

On each current and future worker node in the cluster:

+

# Don't let udev automatically scan targets(all luns) on Unit Attention.
+# This will prevent udev scanning devices which we are attempting to remove.
+
+if [ -f /lib/udev/rules.d/90-scsi-ua.rules ]; then
+    sed -i 's/^[^#]*scan-scsi-target/#&/' /lib/udev/rules.d/90-scsi-ua.rules
+    udevadm control --reload-rules
+fi
+

+

iSCSI Configuration

+

Skip this step if only Fibre Channel is being used. This step is only required when node configuration is disabled.

+

iscsid.conf

+

This example is taken from a Rocky Linux 9.2 node with the HPE parameters applied. Certain parameters may differ for other distributions of either iSCSI or the host OS.

+
+

Note

+

The location of this file varies between Linux and iSCSI distributions.

+
+

Ensure iscsid is stopped.

+

systemctl stop iscsid
+

+

Download: /etc/iscsi/iscsid.conf

+

iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
+node.startup = manual
+node.leading_login = No
+node.session.timeo.replacement_timeout = 10
+node.conn[0].timeo.login_timeout = 15
+node.conn[0].timeo.logout_timeout = 15
+node.conn[0].timeo.noop_out_interval = 5
+node.conn[0].timeo.noop_out_timeout = 10
+node.session.err_timeo.abort_timeout = 15
+node.session.err_timeo.lu_reset_timeout = 30
+node.session.err_timeo.tgt_reset_timeout = 30
+node.session.initial_login_retry_max = 8
+node.session.cmds_max = 512
+node.session.queue_depth = 256
+node.session.xmit_thread_priority = -20
+node.session.iscsi.InitialR2T = No
+node.session.iscsi.ImmediateData = Yes
+node.session.iscsi.FirstBurstLength = 262144
+node.session.iscsi.MaxBurstLength = 16776192
+node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
+node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
+discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
+node.conn[0].iscsi.HeaderDigest = None
+node.session.nr_sessions = 1
+node.session.reopen_max = 0
+node.session.iscsi.FastAbort = Yes
+node.session.scan = auto
+

+
+

Pro tip!

+

When nodes are provisioned from some sort of templating system with iSCSI pre-installed, it's notoriously common that nodes are provisioned with identical IQNs. This will lead to device attachment problems that aren't obvious to the user. Make sure each node has a unique IQN.

+
+

Ensure iscsid is running and enabled:

+

systemctl enable --now iscsid
+

+
+

Seealso

+

Some Linux distributions require the iscsi_tcp kernel module to be loaded. Where kernel modules are loaded varies between Linux distributions.

+
+

Multipath Configuration

+

This step is only required when node configuration is disabled.

+

multipath.conf

+

The defaults section of the configuration file is merely a preference; make sure to leave the device and blacklist stanzas intact when potentially adding more entries from foreign devices.

+
+

Note

+

The location of this file varies between Linux and iSCSI distributions.

+
+

Ensure multipathd is stopped.

+

systemctl stop multipathd
+

+

Download: /etc/multipath.conf

+

defaults {
+    user_friendly_names yes
+    find_multipaths     no
+    uxsock_timeout      10000
+}
+blacklist {
+    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
+    devnode "^hd[a-z]"
+    device {
+        product ".*"
+        vendor  ".*"
+    }
+}
+blacklist_exceptions {
+    property "(ID_WWN|SCSI_IDENT_.*|ID_SERIAL)"
+    device {
+        vendor  "Nimble"
+        product "Server"
+    }
+    device {
+        product "VV"
+        vendor  "3PARdata"
+    }
+    device {
+        vendor  "TrueNAS"
+        product "iSCSI Disk"
+    }
+    device {
+        vendor  "FreeNAS"
+        product "iSCSI Disk"
+    }
+}
+devices {
+    device {
+        product              "Server"
+        rr_min_io_rq         1
+        dev_loss_tmo         infinity
+        path_checker         tur
+        rr_weight            uniform
+        no_path_retry        30
+        path_selector        "service-time 0"
+        failback             immediate
+        fast_io_fail_tmo     5
+        vendor               "Nimble"
+        hardware_handler     "1 alua"
+        path_grouping_policy group_by_prio
+        prio                 alua
+    }
+    device {
+        path_grouping_policy group_by_prio
+        path_checker         tur
+        rr_weight            "uniform"
+        prio                 alua
+        failback             immediate
+        hardware_handler     "1 alua"
+        no_path_retry        18
+        fast_io_fail_tmo     10
+        path_selector        "round-robin 0"
+        vendor               "3PARdata"
+        dev_loss_tmo         infinity
+        detect_prio          yes
+        features             "0"
+        rr_min_io_rq         1
+        product              "VV"
+    }
+    device {
+        path_selector        "queue-length 0"
+        rr_weight            priorities
+        uid_attribute        ID_SERIAL
+        vendor               "TrueNAS"
+        product              "iSCSI Disk"
+        path_grouping_policy group_by_prio
+    }
+    device {
+        path_selector        "queue-length 0"
+        hardware_handler     "1 alua"
+        rr_weight            priorities
+        uid_attribute        ID_SERIAL
+        vendor               "FreeNAS"
+        product              "iSCSI Disk"
+        path_grouping_policy group_by_prio
+    }
+}
+

+

Ensure multipathd is running and enabled:

+

systemctl enable --now multipathd
+

+

Important Considerations

+

While disabling both the conformance and configuration parameters lends itself to more predictable behaviour when deploying nodes from templates with less runtime configuration, it's still not a complete solution for having immutable nodes. The CSI node driver creates a unique identity for the node and stores it in /etc/hpe-storage/node.gob. This file must persist across reboots and redeployments of the node OS image. Immutable Linux distributions such as CoreOS persist the /etc directory; some don't.

+ + +

Expose NFS Services Outside of the Kubernetes Cluster

+

In certain situations it's practical to expose the NFS exports outside the Kubernetes cluster to allow external applications to access data as part of an ETL (Extract, Transform, Load) pipeline or similar.

+

Since this is an untested feature with questionable security standards, HPE does not recommend using this facility in production at this time. Reach out to your HPE account representative if this is a critical feature for your workloads.

+
+

Danger

+

The exports on the NFS servers do not have any network Access Control Lists (ACLs) and run without root squash. Anyone with an NFS client that can reach the load balancer IP address has full access to the filesystem.

+
+

From ClusterIP to LoadBalancer

+

The NFS server Service must be transformed into a "LoadBalancer".

+

In this example we'll assume a "RWX" PersistentVolumeClaim named "my-pvc-1" and NFS resources deployed in the default Namespace, "hpe-nfs".

+

Retrieve NFS UUID

+

export UUID=$(kubectl get pvc my-pvc-1 -o jsonpath='{.spec.volumeName}{"\n"}' | awk -Fpvc- '{print $2}')
+

+

Patch the NFS Service:

+

kubectl patch -n hpe-nfs svc/hpe-nfs-${UUID} -p '{"spec":{"type": "LoadBalancer"}}'
+

+

The Service will be assigned an external IP address by the load balancer deployed in the cluster. If there is no load balancer deployed, a MetalLB example is provided below.
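To watch for the address being assigned, something along these lines can be used, assuming the UUID variable from the previous step is still set in the shell:

kubectl get svc -n hpe-nfs hpe-nfs-${UUID} --watch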

+

MetalLB Example

+

Deploying MetalLB is outside the scope of this document. In this example, MetalLB was deployed on OpenShift 4.16 (Kubernetes v1.29) using the Operator provided by Red Hat in the "metallb-system" Namespace.

+

Determine the IP address range that will be assigned to the load balancers. In this example, 192.168.1.40 to 192.168.1.60 is being used. Note that the worker nodes in this cluster already have reachable IP addresses in the 192.168.1.0/24 network, which is a requirement.

+

Create the MetalLB instances, IP address pool and Layer 2 advertisement.

+

---
+apiVersion: metallb.io/v1beta1
+kind: MetalLB
+metadata:
+  name: metallb
+  namespace: metallb-system
+
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  namespace: metallb-system
+  name: hpe-nfs-servers
+spec:
+  protocol: layer2
+  addresses:
+  - 192.168.1.40-192.168.1.60
+
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: l2advertisement
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+   - hpe-nfs-servers
+

+

Shortly, the NFS Service patched in the previous steps should have an external IP address assigned.

+

NAME           TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S) 
+hpe-nfs-UUID   LoadBalancer   172.30.217.203   192.168.1.40    <long list of ports>
+

+

Mount the NFS Server from an NFS Client

+

Mounting the NFS export externally is now possible.

+

As root:

+

mount -t nfs4 192.168.1.40:/export /mnt
+

+
+

Note

+

If the NFS server is rescheduled in the Kubernetes cluster, the load balancer IP address follows, and the client will recover and resume IO after a few minutes.
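For clients that should mount the export persistently across reboots, an /etc/fstab entry is a common approach. This is only a sketch; the IP address and export path are taken from the example above and the mount options are generic NFSv4 defaults rather than HPE-prescribed values.

# /etc/fstab (illustrative)
192.168.1.40:/export  /mnt  nfs4  defaults,_netdev  0  0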

+
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/csi_driver/standalone_nfs.html b/csi_driver/standalone_nfs.html new file mode 100644 index 00000000..da6407f2 --- /dev/null +++ b/csi_driver/standalone_nfs.html @@ -0,0 +1,417 @@ + + + + + + + + + + + + + + + + + + Standalone NFS Server - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
+
+
+
+
+ +

Standalone NFS Server

+

In certain situations it's desirable to run the NFS Server Provisioner image without the dual PersistentVolumeClaim (PVC) semantic, in a more static fashion, on top of a PVC provisioned by a non-HPE CSI Driver StorageClass.

+
+

Notice

+

Since HPE CSI Driver for Kubernetes v2.4.1, this functionality is built into the CSI driver. See Using a Foreign StorageClass to learn how to use it.

+
+ +

Limitations

+
    +
  • The standalone NFS server is not part of the HPE CSI Driver and should be considered a standalone Kubernetes application altogether. The standalone NFS servers and the HPE CSI Driver NFS Server Provisioner servers may co-exist on the same cluster and Namespace without risk of conflict, but doing so is not recommended.
  • +
  • The Pod Monitor, which normally monitors Pod status for the "NodeLost" condition, is not included with the standalone NFS server; recovery is at the mercy of the underlying storage platform and driver.
  • +
  • Support is limited on the standalone NFS server and only available to select users.
  • +
+

Prerequisites

+

It's assumed during the creation steps that a Kubernetes cluster is available with enough permissions to deploy privileged Pods with SYS_ADMIN and DAC_READ_SEARCH capabilities. All steps are run in a terminal with kubectl and git in the path.

+
    +
  • A default StorageClass declared on the cluster
  • +
  • Worker nodes that will serve the NFS exports must be labeled with csi.hpe.com/hpe-nfs: "true"
  • +
  • kubectl and Kubernetes v1.21 or newer
  • +
+

Create a Workspace

+

NFS server configurations are managed with the kustomize templating system. Clone this repository to get started and change working directory.

+

git clone https://github.com/hpe-storage/scod
+cd scod/docs/csi_driver/examples/standalone_nfs
+

+

In the current directory, various manifests and configuration directives exist to deploy and manage NFS servers.

+

Run tree . in the current directory:

+

.
+├── base
+│   ├── configmap.yaml
+│   ├── deployment.yaml
+│   ├── environment.properties
+│   ├── kustomization.yaml
+│   ├── pvc.yaml
+│   ├── service.yaml
+│   └── values.yaml
+└── overlays
+    └── example
+        ├── deployment.yaml
+        ├── environment.properties
+        └── kustomization.yaml
+
+4 directories, 10 files
+

+
+

Important

+

The current directory is now the "home" for the remainder of this guide.

+
+

Create an NFS Server

+

Copy the "example" overlay into a new directory. In the examples "my-server" is used.

+

cp -a overlays/example overlays/my-server
+

+

Edit both "environment.properties" and "kustomization.yaml" in the newly created overlay. Also pay attention to if the remote Pods mounting the NFS export are running as a non-root user, if that's the case, the group ID is needed of those Pods (customizable per NFS server).

+

environment.properties

+

# This is the domain associated with worker node (not inter-cluster DNS)
+CLUSTER_NODE_DOMAIN_NAME=my-domain.example.com
+
+# The size of the backend RWO claim
+PERSISTENCE_SIZE=16Gi
+
+# Default resource limits for the NFS server
+NFS_SERVER_CPU_LIMIT=1
+NFS_SERVER_MEMORY_LIMIT=2Gi
+

+

The "CLUSTER_NODE_DOMAIN_NAME" variable refers to the DNS domain name that the worker node is resolvable in, not the Kubernetes cluster DNS.

+

The "PERSISTENCE_SIZE" is the backend PVC size expressed in the same format accepted by a PVC.

+

Configuring resource limits is optional but recommended for high performance workloads.

+

kustomization.yaml

+

Change the resource prefix in "kustomization.yaml" either with an editor or sed:

+

sed -i"" 's/example-/my-server-/g' overlays/my-server/kustomization.yaml
+

+
+

Seealso

+

If the NFS server needs to be deployed in a different Namespace than the current, edit and uncomment the "namespace" parameter in overlays/my-server/kustomization.yaml.
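A rough sketch of what the edited overlay could look like after the prefix has been changed and the Namespace uncommented. The actual file cloned from the repository may contain additional directives (for example config map generators); "my-namespace" is a placeholder.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: my-server-
namespace: my-namespace
resources:
- ../../base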

+
+

Change the default fsGroup

+

The default "fsGroup" is mapped to "nobody" (gid=65534) which allows remote Pods run as the root user to write in the NFS export. This may not be desirable as best practices dictate that Pods should run with a user id larger than 99.

+

To allow user Pods to write in the export, edit overlays/my-server/deployment.yaml and change the "fsGroup" to the corresponding gid running in the remote Pod.

+

apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: hpe-nfs
+spec:
+  template:
+    spec:
+      securityContext:
+        fsGroup: 65534
+        fsGroupChangePolicy: OnRootMismatch
+

+

Deploy the NFS server by issuing kubectl apply -k overlays/my-server:

+

configmap/my-server-hpe-nfs-conf created
+configmap/my-server-local-conf-97898bftbh created
+service/my-server-hpe-nfs created
+persistentvolumeclaim/my-server-hpe-nfs created
+deployment.apps/my-server-hpe-nfs created
+

+

Inspect the resources with kubectl get -k overlays/my-server:

+

NAME                                        DATA   AGE
+configmap/my-server-hpe-nfs-conf            1      59s
+configmap/my-server-local-conf-97898bftbh   2      59s
+
+NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                               AGE
+service/my-server-hpe-nfs   ClusterIP   10.100.200.11   <none>        49000/TCP,2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,111/TCP,111/UDP,662/TCP,662/UDP,875/TCP,875/UDP   59s
+
+NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+persistentvolumeclaim/my-server-hpe-nfs   Bound    pvc-ae943116-d0af-4696-8b1b-1dcf4316bdc2   18Gi       RWO            vsphere-sc     58s
+
+NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
+deployment.apps/my-server-hpe-nfs   1/1     1            1           59s
+

+

Make a note of the IP address assigned to "service/my-server-hpe-nfs"; that is the IP address needed to mount the NFS export.

+
+

Tip

+

If the Kubernetes cluster DNS service is resolvable from the worker node host OS, it's possible to use the cluster DNS name to mount the Service; in this example that would be "my-server-hpe-nfs.default.svc.cluster.local".

+
+

Mounting the NFS Server

+

There are two ways to mount the NFS server.

+
    +
  1. Inline declaration of where to find the NFS server and NFS Export
  2. +
  3. Statically creating a PersistentVolume with the NFS server details and mount options and manually claiming the PV with a PVC using the .spec.volumeName parameter
  4. +
+

Inline Declaration

+

This is the most elegant solution as it does not require any intermediary PVC or PV and directly refers to the NFS server within a workload stanza.

+

This is an example from a StatefulSet workload controller having multiple replicas.

+

...
+spec:
+  replicas: 3
+  template:
+    ...
+    spec:
+      containers:
+        volumeMounts:
+        - name: vol
+          mountPath: /vol
+      ...
+      volumes:
+      - name: vol
+        nfs:
+          server: 10.100.200.11
+          path: /export
+

+
+

Important

+

Replace .spec.template.spec.volumes[].nfs.server with the actual Service IP address, not the address from the example.

+
+

Static Provisioning

+

Refer to the official Kubernetes documentation for the built-in NFS client on how to perform static provisioning of NFS PVs and PVCs.
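For convenience, a minimal sketch of what such a statically provisioned pair could look like, assuming the Service IP address from the example above; names, size and mount options are placeholders rather than recommendations.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-pv
spec:
  capacity:
    storage: 16Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.100.200.11
    path: /export
  mountOptions:
  - nfsvers=4.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 16Gi
  # Empty storageClassName prevents dynamic provisioning; volumeName binds to the PV above
  storageClassName: ""
  volumeName: my-nfs-pv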

+

Expand PVC

+

If the StorageClass and underlying CSI driver supports volume expansion, simply edit overlays/my-server/environment.properties with the new (larger) size and issue kubectl apply -k overlays/my-server to expand the volume.

+

Deleting the NFS Server

+

Ensure no workloads have active mounts against the NFS server Service. If there are, those Pods will be stuck indefinitely.

+

Run kubectl delete -k overlays/my-server:

+

configmap "my-server-hpe-nfs-conf" deleted
+configmap "my-server-local-conf-97898bftbh" deleted
+service "my-server-hpe-nfs" deleted
+persistentvolumeclaim "my-server-hpe-nfs" deleted
+deployment.apps "my-server-hpe-nfs" deleted
+

+
+

Caution

+

Unless the StorageClass "reclaimPolicy" is set to "Retain", the underlying PV will be deleted from the cluster and data will need to be restored from backups if needed.

+
+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/csi_driver/using.html b/csi_driver/using.html new file mode 100644 index 00000000..8d4f875c --- /dev/null +++ b/csi_driver/using.html @@ -0,0 +1,1421 @@ + + + + + + + + + + + + + + + + + + Using - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
+
+
+
+
+ +

Overview

+

At this point the CSI driver and CSP should be installed and configured.

+
+

Important

+

Most examples below assume there's a Secret named "hpe-backend" in the "hpe-storage" Namespace. Learn how to add Secrets in the Deployment section.

+
+ +
+

Tip

+

If you're familiar with the basic concepts of persistent storage on Kubernetes and are looking for an overview of example YAML declarations for different object types supported by the HPE CSI driver, visit the source code repo on GitHub.

+
+

PVC Access Modes

+

The HPE CSI Driver for Kubernetes is primarily a ReadWriteOnce (RWO) CSI implementation for block based storage. The CSI driver also supports ReadWriteMany (RWX) and ReadOnlyMany (ROX) using a NFS Server Provisioner. It works by transparently deploying a NFS server for each PersistentVolumeClaim (PVC) against a StorageClass where it's enabled, which in turn is backed by a traditional RWO claim. Most of the examples featured on SCOD are illustrated as RWO using block based storage, but many of the examples apply to the majority of use cases.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Access Mode | Abbreviation | Use Case
ReadWriteOnce | RWO | For high performance Pods where access to the PVC is exclusive to one host at a time. May use either block based storage or the NFS Server Provisioner where connectivity to the data fabric is limited to a few worker nodes in the Kubernetes cluster.
ReadWriteOncePod | RWOP | Exclusive access by a single Pod. Not currently supported by the HPE CSI Driver.
ReadWriteMany | RWX | For shared filesystems where multiple Pods in the same Namespace need simultaneous access to a PVC across multiple nodes.
ReadOnlyMany | ROX | Read-only representation of RWX.
+
+

ReadWriteOnce and access by multiple Pods

+

Pods that require access to the same "ReadWriteOnce" (RWO) PVC need to reside on the same node and in the same Namespace, by using node selectors or affinity scheduling rules applied when deployed. If not configured correctly, the Pod will fail to start and will throw a "Multi-Attach" error in the event log if the PVC is already attached to a Pod that has been scheduled on a different node within the cluster. A minimal co-scheduling sketch is shown below.
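A minimal sketch of co-scheduling with pod affinity, assuming a first Pod labelled app: my-app already uses the RWO PVC on some node; the Pod name, label and claim name are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: my-second-pod
spec:
  # Schedule this Pod onto the node that already runs a Pod labelled app: my-app
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-shared-rwo-pvc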

+
+

The NFS Server Provisioner is not enabled by the default StorageClass and needs a custom StorageClass. The following sections are tailored to help understand the NFS Server Provisioner capabilities.

+ +

Enabling CSI Snapshots

+

Support for VolumeSnapshotClasses and VolumeSnapshots is available from Kubernetes 1.17+. The snapshot CRDs and the common snapshot controller need to be installed manually. As per Kubernetes TAG Storage, these should not be installed as part of a CSI driver and should be deployed by the Kubernetes cluster vendor or user.

+

Ensure the snapshot CRDs and common snapshot controller haven't been installed already.

+

kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
+  volumesnapshotcontents.snapshot.storage.k8s.io \
+  volumesnapshotclasses.snapshot.storage.k8s.io
+

+

Vendors may package, name and deploy the common snapshot controller using their own naming conventions. Run the command below and look for workload names that contain "snapshot".

+

kubectl get sts,deploy -A
+

+

If no prior CRDs or controllers exist, install the snapshot CRDs and common snapshot controller (once per Kubernetes cluster, independent of any CSI drivers).

+
# Kubernetes 1.27-1.30
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v8.0.1 -b hpe-csi-driver-v2.5.0
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.26-1.29
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.2
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.26-1.29
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.1
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.25-1.28
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v6.2.2 -b hpe-csi-driver-v2.4.0
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
# Kubernetes 1.23-1.26
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v5.0.1 -b hpe-csi-driver-v2.3.0
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+
+
+

Tip

+

The provisioning section contains examples on how to create VolumeSnapshotClass and VolumeSnapshot objects.

+
+

Base StorageClass Parameters

+

Each CSP has its own set of unique parameters to control the provisioning behavior. These examples serve as a base StorageClass example for each version of Kubernetes. See the respective CSP for more elaborate examples.

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+  name: hpe-standard
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "Volume created by the HPE CSI Driver for Kubernetes"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+

+
+

Important

+

Replace "hpe-backend" with a Secret relevant to the backend being referenced.

+
+

Common HPE CSI Driver StorageClass parameters across CSPs.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | String | Description
accessProtocol | Text | The access protocol to use when accessing the persistent volume ("fc" or "iscsi"). Default: "iscsi"
chapSecretName | Text | Name of Secret to use for iSCSI CHAP.
chapSecretNamespace | Text | Namespace of Secret to use for iSCSI CHAP.
description 1 | Text | Text to be added to the volume PV metadata on the backend CSP. Default: ""
csi.storage.k8s.io/fstype | Text | Filesystem to format new volumes with. XFS is preferred; ext3, ext4 and btrfs are supported. Defaults to "ext4" if omitted.
fsOwner | userId:groupId | The user id and group id that should own the root directory of the filesystem.
fsMode | Octal digits | 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem.
fsCreateOptions | Text | A string to be passed to the mkfs command. These flags are opaque to CSI and are therefore not validated. To protect the node, only the following characters are allowed: [a-zA-Z0-9=, \-].
fsRepair | Boolean | When set to "true", if a mount fails and filesystem corruption is detected, this parameter will control if an actual repair will be attempted. Default: "false".

Note: fsRepair is unable to detect or remedy corrupted filesystems that are already mounted. Data loss may occur during the attempt to repair the filesystem.
nfsResources | Boolean | When set to "true", requests against the StorageClass will create resources for the NFS Server Provisioner (Deployment, RWO PVC and Service). Required parameter for ReadWriteMany and ReadOnlyMany accessModes. Default: "false"
nfsForeignStorageClass | Text | Provision NFS servers on PVCs from a different StorageClass. See Using a Foreign StorageClass.
nfsNamespace | Text | Resources are by default created in the "hpe-nfs" Namespace. If CSI VolumeSnapshotClass and dataSource functionality is required on the requesting claim, requesting and backing PVC need to exist in the requesting Namespace. A value of "csi.storage.k8s.io/pvc/namespace" will provision resources in the requesting PVC Namespace.
nfsNodeSelector | Text | Customize the nodeSelector label value for the NFS Pod. The default behavior is to omit the nodeSelector.
nfsMountOptions | Text | Customize NFS mount options for the Pods to the server Deployment. Uses mount command defaults from the node.
nfsProvisionerImage | Text | Customize provisioner image for the server Deployment. Default: Official build from "hpestorage/nfs-provisioner" repo
nfsResourceRequestsCpuM | Text | Specify CPU requests for the server Deployment in milli CPU. Default: "500m". Example: "4000m"
nfsResourceRequestsMemoryMi | Text | Specify memory requests (in megabytes) for the server Deployment. Default: "512Mi". Example: "4096Mi".
nfsResourceLimitsCpuM | Text | Specify CPU limits for the server Deployment in milli CPU. Default: "1000m". Example: "4000m"
nfsResourceLimitsMemoryMi | Text | Specify memory limits (in megabytes) for the server Deployment. Default: "2048Mi". Example: "500Mi". Recommended minimum: "2048Mi".
hostEncryption | Boolean | Direct the CSI driver to invoke Linux Unified Key Setup (LUKS) via the dm-crypt kernel module. Default: "false". See Volume encryption to learn more.
hostEncryptionSecretName | Text | Name of the Secret to use for the volume encryption. Mandatory if "hostEncryption" is enabled. Default: ""
hostEncryptionSecretNamespace | Text | Namespace where to find "hostEncryptionSecretName". Default: ""
+

1 = Parameter is mutable using the CSI Volume Mutator.

+
+

Note

+

All common HPE CSI Driver parameters are optional.

+
+

Enabling iSCSI CHAP

+

Familiarize yourself with the iSCSI CHAP Considerations before proceeding. This section describes how to enable iSCSI CHAP with HPE CSI Driver 2.5.0 and later.

+

Create an iSCSI CHAP Secret. The referenced CHAP account does not need to exist on the storage backend; it will be created by the CSP if it doesn't exist.

+

apiVersion: v1
+kind: Secret
+metadata:
+  name: my-chap-secret
+  namespace: hpe-storage
+stringData:
+  # Up to 64 characters including \-:., must start with an alpha-numeric character.
+  chapUser: "my-chap-user"
+  # Between 12 to 16 alpha-numeric characters.
+  chapPassword: "my-chap-password"
+

+

Once the Secret has been created, there are two methods available to use it depending on the situation, cluster-wide or per StorageClass.

+

Cluster-wide iSCSI CHAP Credentials

+

The cluster-wide iSCSI CHAP credentials will be used by all iSCSI-based PersistentVolumes regardless of backend and StorageClass. The CHAP Secret is simply referenced during install of the HPE CSI Driver for Kubernetes Helm Chart. The Secret and Namespace need to exist prior to install.

+

Example:

+

helm install my-hpe-csi-driver -n hpe-storage \
+  hpe-storage/hpe-csi-driver \
+  --set iscsi.chapSecretName=my-chap-secret
+

+
+

Important

+

Once a PersistentVolume has been provisioned with cluster-wide iSCSI CHAP credentials, it's not possible to switch over to per StorageClass iSCSI CHAP credentials.

If CSI driver 2.4.2 or earlier has been used, cluster-wide iSCSI CHAP credentials are the only way to provide credentials for volumes provisioned with those versions.

+
+

Per StorageClass iSCSI CHAP Credentials

+

The CHAP Secret may be referenced in a StorageClass.

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+  name: hpe-standard
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "Volume created by the HPE CSI Driver for Kubernetes"
+  chapSecretNamespace: hpe-storage
+  chapSecretName: my-chap-secret
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+

+
+

Warning

+

The iSCSI CHAP credentials are in reality per iSCSI Target. Do NOT create multiple StorageClasses referencing different CHAP Secrets with different credentials for the same backend. It will result in a data outage with conflicting sessions.

Ensure the same Secret is referenced in all StorageClasses using a particular backend.

+
+

Provisioning Concepts

+

These instructions are provided as an example on how to use the HPE CSI Driver with a CSP supported by HPE.

+ +
+

New to Kubernetes?

+

There's a basic tutorial of how dynamic provisioning of persistent storage on Kubernetes works in the Video Gallery.

+
+

Create a PersistentVolumeClaim from a StorageClass

+

The below YAML declarations are meant to be created with kubectl create. Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this:

+

kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+

+

To get started, create a StorageClass API object referencing the CSI driver Secret relevant to the backend.

+

These examples are for Kubernetes 1.15+

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-scod
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "Volume created by the HPE CSI Driver for Kubernetes"
+  accessProtocol: iscsi
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+

+

Create a PersistentVolumeClaim. This object declaration ensures a PersistentVolume is created and provisioned on your behalf; make sure to reference the correct .spec.storageClassName:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-first-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+  storageClassName: hpe-scod
+

+
+

Note

+

In most environments, there is a default StorageClass declared on the cluster. In such a scenario, the .spec.storageClassName can be omitted. The default StorageClass is controlled by an annotation: .metadata.annotations.storageclass.kubernetes.io/is-default-class set to either "true" or "false".
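As a quick illustration, the annotation can be flipped with kubectl patch; "hpe-scod" is the StorageClass from the example above.

kubectl patch storageclass hpe-scod \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'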

+
+

After the PersistentVolumeClaim has been declared, check that a new PersistentVolume is created based on your claim:

+

kubectl get pv
+NAME              CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM                STORAGECLASS AGE
+pvc-13336da3-7... 32Gi     RWO          Delete         Bound  default/my-first-pvc hpe-scod     3s
+

+

The above output means that the HPE CSI Driver successfully provisioned a new volume. The volume is not attached to any node yet. It will only be attached to a node if a scheduled workload requests the PersistentVolumeClaim. Now, let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted according to the specification.

+

kind: Pod
+apiVersion: v1
+metadata:
+  name: my-pod
+spec:
+  containers:
+    - name: pod-datelog-1
+      image: nginx
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: export1
+          mountPath: /data
+    - name: pod-datelog-2
+      image: debian
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: export1
+          mountPath: /data
+  volumes:
+    - name: export1
+      persistentVolumeClaim:
+        claimName: my-first-pvc
+

+

Check if the Pod is running successfully.

+

kubectl get pod my-pod
+NAME        READY   STATUS    RESTARTS   AGE
+my-pod      2/2     Running   0          2m29s
+

+
+

Tip

+

A simple Pod does not provide any automatic recovery if the node the Pod is scheduled on crashes or becomes unresponsive. Please see the official Kubernetes documentation for different workload types that provide automatic recovery. A shortlist of recommended workload types that are suitable for persistent storage is available in this blog post and best practices are outlined in this blog post.

+
+

Ephemeral Inline Volumes

+

It's possible to declare a volume "inline" in a Pod specification. The volume is ephemeral and only persists as long as the Pod is running. If the Pod gets rescheduled, deleted or upgraded, the volume is deleted and a new volume is provisioned when the Pod is restarted.

+

Ephemeral inline volumes are not associated with a StorageClass, hence a Secret needs to be provided inline with the volume.

+
+

Warning

+

Allowing user Pods to access the CSP Secret gives them the same privileges on the backend system as the HPE CSI Driver.

+
+

There are two ways to declare the Secret with ephemeral inline volumes, either the Secret is in the same Namespace as the workload being declared or it resides in a foreign Namespace.

+

Local Secret:

+

apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod-inline-mount-1
+spec:
+  containers:
+    - name: pod-datelog-1
+      image: nginx
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: my-volume-1
+          mountPath: /data
+  volumes:
+    - name: my-volume-1
+      csi:
+       driver: csi.hpe.com
+       nodePublishSecretRef:
+         name: hpe-backend
+       fsType: ext3
+       volumeAttributes:
+         csi.storage.k8s.io/ephemeral: "true"
+         accessProtocol: "iscsi"
+         size: "5Gi"
+

+

Foreign Secret:

+

apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod-inline-mount-2
+spec:
+  containers:
+    - name: pod-datelog-1
+      image: nginx
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: my-volume-1
+          mountPath: /data
+  volumes:
+    - name: my-volume-1
+      csi:
+       driver: csi.hpe.com
+       fsType: ext3
+       volumeAttributes:
+         csi.storage.k8s.io/ephemeral: "true"
+         inline-volume-secret-name: hpe-backend
+         inline-volume-secret-namespace: hpe-storage
+         accessProtocol: "iscsi"
+         size: "7Gi"
+

+

The parameters used in the examples are the bare minimum required parameters. Any parameters supported by the HPE CSI Driver and backend CSP may be used for ephemeral inline volumes. See the base StorageClass parameters or the respective CSP being used.

+
+

Seealso

+

For more elaborate use cases around ephemeral inline volumes, check out the tutorial on HPE Developer: Using Ephemeral Inline Volumes on Kubernetes

+
+

Raw Block Volumes

+

The default volumeMode for a PersistentVolumeClaim is Filesystem. If a raw block volume is desired, volumeMode needs to be set to Block. No filesystem will be created. Example:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc-block
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+  storageClassName: hpe-scod
+  volumeMode: Block
+

+
+

Note

+

The accessModes may be set to ReadWriteOnce, ReadWriteMany or ReadOnlyMany. It's expected that the application handles read/write IO, volume locking and access in the event of concurrent block access from multiple nodes. Consult the Alletra 6000 CSP documentation if using ReadWriteMany raw block volumes with FC on Nimble, Alletra 5000 or 6000.

+
+

Mapping the device in a Pod specification is slightly different than using regular filesystems as a volumeDevices section is added instead of a volumeMounts stanza:

+

apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod-block
+spec:
+  containers:
+    - name: my-null-pod
+      image: fedora:31
+      command: ["/bin/sh", "-c"]
+      args: [ "tail -f /dev/null" ]
+      volumeDevices:
+        - name: data
+          devicePath: /dev/xvda
+  volumes:
+    - name: data
+      persistentVolumeClaim:
+        claimName: my-pvc-block
+

+
+

Seealso

+

There's an in-depth tutorial available on HPE Developer that covers raw block volumes: Using Raw Block Volumes on Kubernetes

+
+

Using CSI Snapshots

+

CSI introduces snapshots as native objects in Kubernetes that allow end-users to provision VolumeSnapshot objects from an existing PersistentVolumeClaim. New PVCs may then be created using the snapshot as a source.

+
+

Tip

+

Ensure CSI snapshots are enabled. +
There's a tutorial in the Video Gallery on how to use CSI snapshots and clones.

+
+

Start by creating a VolumeSnapshotClass referencing the Secret and defining additional snapshot parameters.

+

apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: hpe-snapshot
+  annotations:
+    snapshot.storage.kubernetes.io/is-default-class: "true"
+driver: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+  description: "Snapshot created by the HPE CSI Driver"
+  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
+  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
+  csi.storage.k8s.io/snapshotter-list-secret-name: hpe-backend
+  csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage
+

+
+

Note

+

Container Storage Providers may have optional parameters to the VolumeSnapshotClass.

+
+

Create a VolumeSnapshot. This will create a new snapshot of the volume.

+

apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+  name: my-snapshot
+spec:
+  source:
+    persistentVolumeClaimName: my-pvc
+

+
+

Tip

+

If a specific VolumeSnapshotClass is desired, use .spec.volumeSnapshotClassName to call it out.
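For illustration, a VolumeSnapshot referencing the "hpe-snapshot" class created earlier could look like this; the snapshot and PVC names are from the examples above.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot-explicit-class
spec:
  volumeSnapshotClassName: hpe-snapshot
  source:
    persistentVolumeClaimName: my-pvc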

+
+

Check that a new VolumeSnapshot is created based on your claim:

+

kubectl describe volumesnapshot my-snapshot
+Name:         my-snapshot
+Namespace:    default
+...
+Status:
+  Creation Time:  2019-05-22T15:51:28Z
+  Ready:          true
+  Restore Size:   32Gi
+

+

It's now possible to create a new PersistentVolumeClaim from the VolumeSnapshot.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc-from-snapshot
+spec:
+  dataSource:
+    name: my-snapshot
+    kind: VolumeSnapshot
+    apiGroup: snapshot.storage.k8s.io
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+

+
+

Important

+

The size in .spec.resources.requests.storage must match the .spec.dataSource size.

+
+

The .spec.dataSource attribute may also clone a PersistentVolumeClaim directly, without creating a VolumeSnapshot.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc-from-pvc
+spec:
+  dataSource:
+    name: my-pvc
+    kind: PersistentVolumeClaim
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+

+

Again, the size in .spec.resources.requests.storage must match the source PersistentVolumeClaim. This can get tricky from an automation perspective if volume expansion has been used on the source volume. It's recommended to inspect the source PersistentVolumeClaim or VolumeSnapshot size prior to creating a clone, as shown below.
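A couple of one-liners that could be used for that inspection; object names are from the examples above and the jsonpath expressions reference standard Kubernetes fields.

# Requested size of the source PVC
kubectl get pvc my-pvc -o jsonpath='{.spec.resources.requests.storage}{"\n"}'
# Restore size of the source VolumeSnapshot
kubectl get volumesnapshot my-snapshot -o jsonpath='{.status.restoreSize}{"\n"}'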

+
+

Learn more

+

For a more comprehensive tutorial on how to use snapshots and clones with CSI on Kubernetes 1.17, see HPE CSI Driver for Kubernetes: Snapshots, Clones and Volume Expansion on HPE Developer.

+
+

Volume Groups

+

PersistentVolumeClaims created in a particular Namespace from the same storage backend may be grouped together in a VolumeGroup. A VolumeGroup is what may be known as a "consistency group" in other storage infrastructure systems. This allows certain attributes to be managed on an abstract group, and those attributes then apply to all member volumes in the group instead of being managed on each volume individually. Examples include creating snapshots with referential integrity between volumes, or setting a performance attribute that is accounted for on the logical group rather than the individual volume.

+
+

Tip

+

A tutorial on how to use VolumeGroups and SnapshotGroups is available in the Video Gallery.

+
+

Before grouping PersistentVolumeClaims there needs to be a VolumeGroupClass created. It needs to reference a Secret that corresponds to the same backend the PersistentVolumeClaims were created on. A VolumeGroupClass is a cluster resource that needs administrative privileges to create.

+

apiVersion: storage.hpe.com/v1
+kind: VolumeGroupClass
+metadata:
+  name: my-volume-group-class
+provisioner: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+  description: "HPE CSI Driver for Kubernetes Volume Group"
+  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
+  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
+

+
+

Note

+

The VolumeGroupClass .parameters may contain CSP specific parameters. Check the documentation of the Container Storage Provider being used.

+
+

Once the VolumeGroupClass is in place, users may create VolumeGroups. VolumeGroups are, just like PersistentVolumeClaims, part of a Namespace, and both resources need to be in the same Namespace for the grouping to be successful.

+

apiVersion: storage.hpe.com/v1
+kind: VolumeGroup
+metadata:
+  name: my-volume-group
+spec:
+  volumeGroupClassName: my-volume-group-class
+

+

Depending on the CSP being used, the VolumeGroup may reference an object that corresponds to the Kubernetes API object. It's not until users annotate their PersistentVolumeClaims that the VolumeGroup gets populated.

+

Adding a PersistentVolumeClaim to a VolumeGroup:

+

kubectl annotate pvc/my-pvc csi.hpe.com/volume-group=my-volume-group
+

+

Removing a PersistentVolumeClaim from a VolumeGroup:

+

kubectl annotate pvc/my-pvc csi.hpe.com/volume-group-
+

+
+

Tip

+

While adding the PersistentVolumeClaim to the VolumeGroup is instant, removal requires one reconciliation loop and might not immediately be reflected on the VolumeGroup object.
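To verify whether a claim currently carries the grouping annotation, inspecting the PVC is a simple check; the annotation key is the one used in the commands above.

kubectl get pvc/my-pvc -o yaml | grep csi.hpe.com/volume-group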

+
+

Snapshot Groups

+

Being able to create snapshots of the VolumeGroup requires the CSI external-snapshotter to be installed and also requires a VolumeSnapshotClass configured using the same storage backend as the VolumeGroup. Once those pieces are in place, a SnapshotGroupClass needs to be created. SnapshotGroupClasses are cluster objects created by an administrator.

+

apiVersion: storage.hpe.com/v1
+kind: SnapshotGroupClass
+metadata:
+  name: my-snapshot-group-class
+snapshotter: csi.hpe.com
+deletionPolicy: Delete
+parameters:
+  csi.hpe.com/snapshot-group-snapshotter-secret-name: hpe-backend
+  csi.hpe.com/snapshot-group-snapshotter-secret-namespace: hpe-storage
+

+

Creating a SnapshotGroup is later performed using the VolumeGroup as a source while referencing a SnapshotGroupClass and a VolumeSnapshotClass.

+

apiVersion: storage.hpe.com/v1
+kind: SnapshotGroup
+metadata:
+  name: my-snapshot-group-1
+spec:
+  source:
+    kind: VolumeGroup
+    apiGroup: storage.hpe.com
+    name: my-volume-group
+  snapshotGroupClassName: my-snapshot-group-class
+  volumeSnapshotClassName: hpe-snapshot
+

+

Once the SnapshotGroup has been successfully created, the individual VolumeSnapshots are now available in the Namespace.

+

List VolumeSnapshots:

+

kubectl get volumesnapshots
+

+

If no VolumeSnapshots are being enumerated, check the diagnostics on how to check the component logs and such.

+
+

New feature!

+

Volume Groups and Snapshot Groups were introduced in HPE CSI Driver for Kubernetes 1.4.0.

+
+

Expanding PVCs

+

To perform expansion operations on Kubernetes 1.14+, you must enhance your StorageClass with the .allowVolumeExpansion: true key. Please see base StorageClass parameters for additional information.

+

Then, a volume provisioned by a StorageClass with expansion attributes may have its PersistentVolumeClaim expanded by altering the .spec.resources.requests.storage key of the PersistentVolumeClaim.

+

This may be done by the kubectl patch command.

+

kubectl patch pvc/my-pvc --patch '{"spec": {"resources": {"requests": {"storage": "64Gi"}}}}'
+persistentvolumeclaim/my-pvc patched
+

+

The new PersistentVolumeClaim size may be observed with kubectl get pvc/my-pvc after a few moments.

+

Using PVC Overrides

+

The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim. Define the parameters allowed to be overridden in the StorageClass by setting the allowOverrides parameter:

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-scod-override
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  description: "Volume provisioned by the HPE CSI Driver"
+  accessProtocol: iscsi
+  allowOverrides: description,accessProtocol
+

+

The end-user may now control those parameters (the StorageClass provides the default values).

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc-override
+  annotations:
+    csi.hpe.com/description: "This is my custom description"
+    csi.hpe.com/accessProtocol: fc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Gi
+  storageClassName: hpe-scod-override
+

+

Using Volume Mutations

+

The HPE CSI Driver (version 1.3.0 and later) allows the CSP backend volume to be mutated by annotating the PersistentVolumeClaim. Define the parameters allowed to be mutated in the StorageClass by setting the allowMutations parameter.

+
+

Tip

+

There's a tutorial available on YouTube accessible through the Video Gallery on how to use volume mutations to adapt stateful workloads with the HPE CSI Driver.

+
+
+

Important

+

In order to mutate a StorageClass parameter it needs to have a default value set in the StorageClass. In the example below we'll allow mutating "description". If the parameter "description" wasn't set when the PersistentVolume was provisioned, no subsequent mutations are allowed. The CSP may set defaults for certain parameters during provisioning; if those are mutable, the mutation will be performed.

+
+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-scod-mutation
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  description: "Volume provisioned by the HPE CSI Driver"
+  allowMutations: description
+

+
+

Note

+

The allowMutations parameter is a comma-separated list of values defined by each CSP's parameters, except the description parameter, which is common across all CSPs. See the documentation for each CSP on what parameters are mutable.

+
+

The end-user may now control those parameters by editing or patching the PersistentVolumeClaim.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc-mutation
+  annotations:
+    csi.hpe.com/description: "My description needs to change"
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Gi
+  storageClassName: hpe-scod-mutation
+

+
+

Good to know

+

As the .spec.csi.volumeAttributes on the PersistentVolume are immutable, the mutations performed on the backend volume are also annotated on the PersistentVolume object.

+
+

Using the NFS Server Provisioner

+

Enabling the NFS Server Provisioner to allow "ReadWriteMany" and "ReadOnlyMany" access mode for a PVC is straightforward. Create a new StorageClass and set .parameters.nfsResources to "true". Any subsequent claim to the StorageClass will create a NFS server Deployment on the cluster with the associated objects running on top of a "ReadWriteOnce" PVC.

+

Any "RWO" claim made against the StorageClass will also create a NFS server Deployment. This allows diverse connectivity options among the Kubernetes worker nodes as the HPE CSI Driver will look for nodes labelled csi.hpe.com/hpe-nfs=true (or using a custom value specified in .parameters.nfsNodeSelector) before submitting the workload for scheduling. This allows dedicated NFS worker nodes without user workloads using taints and tolerations. The NFS server Pod is armed with a csi.hpe.com/hpe-nfs toleration. It's required to taint dedicated NFS worker nodes if they truly need to be dedicated.

+

By default, the NFS Server Provisioner deploys resources in the "hpe-nfs" Namespace. This makes it easy to manage and diagnose. However, to use CSI data management capabilities (VolumeSnapshots and .spec.dataSource) on the PVCs, the NFS resources need to be deployed in the same Namespace as the "RWX"/"ROX" requesting PVC. This is controlled by the nfsNamespace StorageClass parameter. See base StorageClass parameters for more information.

+
+

Tip

+

A comprehensive tutorial is available on HPE Developer on how to get started with the NFS Server Provisioner and the HPE CSI Driver for Kubernetes. There's also a brief tutorial available in the Video Gallery.

+
+

Example StorageClass with "nfsResources" enabled. No CSP specific parameters for clarity.

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-standard-file
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "NFS backend volume created by the HPE CSI Driver for Kubernetes"
+  csi.storage.k8s.io/fstype: ext4
+  nfsResources: "true"
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+

+
+

Note

+

Using XFS may result in stale NFS handles during node failures and outages. Always use ext4 for NFS PVCs. While "allowVolumeExpansion" isn't supported on the NFS PVC, the backend "RWO" PVC supports it.

+
+

Example use of accessModes:

+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-rwo-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 32Gi
+  storageClassName: hpe-nfs
+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-rwx-pvc
+spec:
+  accessModes:
+  - ReadWriteMany
+  resources:
+    requests:
+      storage: 32Gi
+  storageClassName: hpe-nfs
+
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-rox-pvc
+spec:
+  accessModes:
+  - ReadOnlyMany
+  resources:
+    requests:
+      storage: 32Gi
+  storageClassName: hpe-nfs
+
+

In the case of declaring a "ROX" PVC, the requesting Pod specification needs to request the PVC as read-only. Example:

+

apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-rox
+spec:
+  containers:
+  - image: busybox
+    name: busybox
+    command:
+      - "sleep"
+      - "300"
+    volumeMounts:
+    - mountPath: /data
+      name: my-vol
+      readOnly: true
+  volumes:
+  - name: my-vol
+    persistentVolumeClaim:
+      claimName: my-rox-pvc
+      readOnly: true
+

+

Requesting an empty read-only volume might not seem practical. The primary use case is to source existing datasets into immutable applications, using either a backend CSP cloning capability or CSI data management feature such as snapshots or existing PVCs.

+

Using a Foreign StorageClass

+

Since HPE CSI Driver for Kubernetes version 2.4.1 it's possible to provision NFS servers on top of non-HPE CSI Driver StorageClasses. The most prominent use case for this functionality is to coexist with the vSphere CSI Driver (VMware vSphere Container Storage Plug-in) in FC environments and provide "RWX" PVCs.

+
Example StorageClass using a foreign StorageClass
+

The HPE CSI Driver only manages the NFS server Deployment, Service and PVC. There must be an existing StorageClass capable of provisioning "RWO" filesystem PVCs.

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-nfs-servers
+provisioner: csi.hpe.com
+parameters:
+  nfsResources: "true"
+  nfsForeignStorageClass: "my-foreign-storageclass-name"
+reclaimPolicy: Delete
+allowVolumeExpansion: false
+

+

Next, provision "RWO" or "RWX" claims from the "hpe-nfs-servers" StorageClass. An NFS server will be provisioned on a "RWO" PVC from the StorageClass "my-foreign-storageclass-name".

+
+

Note

+

Only StorageClasses that use HPE storage proxied by partner CSI drivers are supported by HPE.

+
+

Limitations and Considerations for the NFS Server Provisioner

+

These are some common issues and gotchas that are useful to know about when planning to use the NFS Server Provisioner.

+
    +
  • The current tested and supported limit for the NFS Server Provisioner is 32 NFS servers per Kubernetes worker node.
  • +
  • The two StorageClass parameters "nfsResourceLimitsCpuM" and "nfsResourceLimitsMemoryMi" control how much CPU and memory it may consume. Tests show that the NFS server consumes about 150MiB at instantiation and 2GiB is the recommended minimum for most workloads. The NFS server Pod is by default limited to 2GiB of memory and 1000 milli CPU.
  • +
  • The NFS PVC can NOT be expanded. If more capacity is needed, expand the "ReadWriteOnce" PVC backing the NFS Server Provisioner. This will result in inaccurate space reporting.
  • +
  • Since the NFS Server Provisioner deploys a number of different resources on the hosting cluster per PVC, provisioning times may differ greatly between clusters. On an idle cluster with the NFS Server Provisioner image cached, less than 30 seconds is the most common sighting, but provisioning may exceed 30 seconds, which may trigger warnings on the requesting PVC. This is normal behavior.
  • +
  • The HPE CSI Driver includes a Pod Monitor to delete Pods that have become unavailable due to the Pod status changing to NodeLost or the node the Pod runs on becoming unreachable. By default the Pod Monitor only watches the NFS Server Provisioner Deployments, but it may be used for any Deployment. See Pod Monitor on how to use it, especially the limitations.
  • +
  • Certain CNIs may have issues gracefully restoring access from the NFS clients to the NFS export. Flannel has exhibited this problem and the most consistent performance has been observed with Calico.
  • +
  • The Volume Mutation feature does not work on the NFS PVC. If changes are needed, perform the change on the backing "ReadWriteOnce" PVC.
  • +
  • As outlined in Using the NFS Server Provisioner, CSI snapshots and cloning of NFS PVCs requires the CSI snapshot and NFS server to reside in the same Namespace. This also applies when using third-party backup software such as Kasten K10. Use the "nfsNamespace" StorageClass parameter to control where to provision resources.
  • +
  • VolumeGroups and SnapshotGroups are only supported on the backing "ReadWriteOnce" PVC. The "volume-group" annotation may be set at the initial creation of the NFS PVC but will have an adverse effect on logging, as the Volume Group Provisioner tries to add the NFS PVC to the backend consistency group indefinitely.
  • +
  • The NFS servers deployed by the HPE CSI Driver are not managed during CSI driver upgrades. Manual upgrade is required.
  • +
  • Using the same network interface for NFS and block IO has shown suboptimal performance. Use FC for the block storage for the best performance.
  • +
  • A single NFS server instance is capable of 100GigE wirespeed with large sequential workloads and up to 200,000 IOPS with small IO using bare-metal nodes and multiple clients.
  • +
  • Using ext4 as the backing filesystem has shown better performance with simultaneous writers to the same file.
  • +
  • Additional configuration and considerations may be required when using the NFS Server Provisioner with Red Hat OpenShift. See NFS Server Provisioner Considerations for OpenShift.
  • +
  • XFS has proven troublesome to use as a backend "RWO" volume filesystem, leaving stale NFS handles for clients. Use ext4 as the "csi.storage.k8s.io/fstype" StorageClass parameter for best results.
  • +
  • The NFS servers provide a "ClusterIP" Service. It is possible to expose the NFS servers outside the cluster for external NFS clients. Understand the scope and limitations in Auxiliary Operations.
  • +
+

See diagnosing NFS Server Provisioner issues for further details.

+

Using Volume Encryption

+

From version 2.0.0 onwards, the CSI driver supports host-based volume encryption for any of the CSPs supported by the CSI driver.

+

Host-based volume encryption is controlled by StorageClass parameters configured by the Kubernetes administrator and may be configured to be overridden by Kubernetes users. In the below example, a single Secret is used to encrypt and decrypt all volumes provisioned by the StorageClass.

+

First, create a Secret, in this example we'll use the "hpe-storage" Namespace.

+

apiVersion: v1
+kind: Secret
+metadata:
+  name: my-passphrase
+  namespace: hpe-storage
+stringData:
+  hostEncryptionPassphrase: "HPE CSI Driver for Kubernetes 2.0.0 Rocks!"
+

+
+

Tip

+

The "hostEncryptionPassphrase" can be up to 512 characters.

+
+

Next, incorporate the Secret into a StorageClass.

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-encrypted
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  description: "Volume provisioned by the HPE CSI Driver"
+  hostEncryption: "true"
+  hostEncryptionSecretName: my-passphrase
+  hostEncryptionSecretNamespace: hpe-storage
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+

+

Next, create a PersistentVolumeClaim that uses the "hpe-encrypted" StorageClass:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-encrypted-pvc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Gi
+  storageClassName: hpe-encrypted
+

+

Attach a basic Pod to verify functionality.

+

kind: Pod
+apiVersion: v1
+metadata:
+  name: my-pod
+spec:
+  containers:
+    - name: pod-datelog-1
+      image: nginx
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: export1
+          mountPath: /data
+    - name: pod-datelog-2
+      image: debian
+      command: ["/bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: export1
+          mountPath: /data
+  volumes:
+    - name: export1
+      persistentVolumeClaim:
+        claimName: my-encrypted-pvc
+

+

Once the Pod comes up, verify that the volume is encrypted.

+

$ kubectl exec -it my-pod -c pod-datelog-1 -- df -h /data
+Filesystem              Size  Used Avail Use% Mounted on
+/dev/mapper/enc-mpatha  100G   33M  100G   1% /data
+

+

Host-based volume encryption is in effect if the "enc" prefix is seen on the multipath device name.
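A further, purely illustrative check can be made from the worker node hosting the Pod, assuming shell access to that node. A dm-crypt mapping (TYPE "crypt") stacked on top of the multipath device indicates the volume is encrypted; the names and sizes below follow the df output above, but the exact layout will vary.

$ lsblk -o NAME,TYPE,SIZE /dev/mapper/mpatha
NAME          TYPE   SIZE
mpatha        mpath  100G
└─enc-mpatha  crypt  100G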

+
+
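As mentioned earlier, the Kubernetes administrator may allow users to override the encryption Secret per claim. The following is a hedged sketch: it assumes the "allowOverrides" parameter and the "csi.hpe.com/<parameter>" annotation convention from the common StorageClass parameter documentation apply to the host encryption keys. Add allowOverrides: hostEncryptionSecretName,hostEncryptionSecretNamespace to the "hpe-encrypted" StorageClass above, then annotate the PersistentVolumeClaim. The Secret and claim names below are hypothetical.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-byok-pvc
  annotations:
    # Hypothetical overrides; only honored for parameters listed in allowOverrides
    csi.hpe.com/hostEncryptionSecretName: my-other-passphrase
    csi.hpe.com/hostEncryptionSecretNamespace: hpe-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-encrypted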

Seealso

+

For an in-depth tutorial and more advanced use cases for host-based volume encryption, check out this blog post on HPE Developer: Host-based Volume Encryption with HPE CSI Driver for Kubernetes

+
+

Topology and volumeBindingMode

+

With CSI driver v2.5.0 and newer, basic CSI topology information can be associated with a single backend from a StorageClass. For backwards compatibility, only volumeBindingMode: WaitForFirstConsumer requires topology labels assigned to compute nodes. Using the default volumeBindingMode of Immediate will preserve the behavior prior to v2.5.0.

+
+

Tip

+

The "csi-provisioner" is deployed with --feature-gates Topology=true and --immediate-topology=false. It's impact on volume provisioning and accessibility can be found here.

+
+

Assume a simple use case where only a handful of nodes in a Kubernetes cluster have Fibre Channel adapters installed. Workloads with persistent storage requirements from a particular StorageClass should be deployed onto those nodes only.

+

Label Compute Nodes

+

Nodes with the label csi.hpe.com/zone are considered during topology accessibility assessments. Assume three nodes in the cluster have FC adapters.

+

kubectl label node/my-node{1..3} csi.hpe.com/zone=fc --overwrite
+
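To confirm the label has been applied, list the labeled nodes with plain kubectl:

kubectl get nodes -l csi.hpe.com/zone=fc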

+

If the CSI driver is already installed on the cluster, the CSI node driver needs to be restarted for the node labels to propagate.

+

kubectl rollout restart -n hpe-storage ds/hpe-csi-node
+

+

Create StorageClass with Topology Information

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+  name: hpe-standard-fc
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  description: "Volume created by the HPE CSI Driver for Kubernetes"
+  accessProtocol: fc
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+- matchLabelExpressions:
+  - key: csi.hpe.com/zone
+    values:
+    - fc
+

+

Any workload provisioning PVCs from the above StorageClass will now be scheduled on nodes labeled csi.hpe.com/zone=fc.
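For illustration, a minimal sketch of such a workload follows; the claim and Pod names are hypothetical. Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the volume is not created until the Pod has been scheduled onto one of the nodes labeled csi.hpe.com/zone=fc.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-fc-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: hpe-standard-fc
---
apiVersion: v1
kind: Pod
metadata:
  name: my-fc-pod
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-fc-pvc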

+
+

Note

+

The allowedTopologies key may be omitted if there's only a single topology applied to a subset of nodes. The nodes always need to be labeled when using volumeBindingMode: WaitForFirstConsumer. If all nodes have access to a backend, set volumeBindingMode: Immediate and omit allowedTopologies.

+
+

Static Provisioning

+

How to map an existing backend volume to a PersistentVolume differs between the CSP implementations.

+ +

Further Reading

+

The official Kubernetes documentation contains comprehensive documentation on how to mark up PersistentVolumeClaim and StorageClass API objects to tweak certain behaviors.

+

Each CSP has a set of unique StorageClass parameters that may be tweaked to accommodate a wide variety of use cases. Please see the documentation of the respective CSP for more details.

+ +
+
+ +
.btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content .btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini .headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content 
.wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content .wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint .admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint .wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content 
.wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content .wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content .wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso .admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content 
.wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content .wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso .wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container 
li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 
0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 .3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves 
input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group .wy-form-halves input[type=url],.wy-control-group .wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border 
.3s linear}input[type=datetime-local]{padding:.34375em .625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid #ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch 
span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form 
input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive 
table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs li{display:inline-block}.wy-breadcrumbs li.wy-breadcrumbs-aside{float:right}.wy-breadcrumbs li a{display:inline-block;padding:5px}.wy-breadcrumbs li a:first-child{padding-left:0}.rst-content .wy-breadcrumbs li tt,.wy-breadcrumbs li .rst-content tt,.wy-breadcrumbs li code{padding:5px;border:none;background:none}.rst-content .wy-breadcrumbs li tt.literal,.wy-breadcrumbs li .rst-content tt.literal,.wy-breadcrumbs li code.literal{color:#404040}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical 
li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote 
tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.field-list>dt:after,html.writer-html5 .rst-content dl.footnote>dt:after{content:":"}html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.footnote>dt>span.brackets{margin-right:.5rem}html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{font-style:italic}html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.footnote>dd p,html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content .wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{font-size:inherit;line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content 
tt{font-weight:700;color:#404040}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.field-list)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) dl:not(.field-list)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.field-list)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) dl:not(.field-list)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) 
.optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel{border:1px solid #7fbbe3;background:#e7f2fa;font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) 
format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} diff --git a/css/theme_extra.css b/css/theme_extra.css new file mode 100644 index 00000000..9f4b063c --- /dev/null +++ b/css/theme_extra.css @@ -0,0 +1,191 @@ +/* + * Wrap inline code samples otherwise they shoot of the side and + * can't be read at all. + * + * https://github.com/mkdocs/mkdocs/issues/313 + * https://github.com/mkdocs/mkdocs/issues/233 + * https://github.com/mkdocs/mkdocs/issues/834 + */ +.rst-content code { + white-space: pre-wrap; + word-wrap: break-word; + padding: 2px 5px; +} + +/** + * Make code blocks display as blocks and give them the appropriate + * font size and padding. + * + * https://github.com/mkdocs/mkdocs/issues/855 + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/issues/233 + */ +.rst-content pre code { + white-space: pre; + word-wrap: normal; + display: block; + padding: 12px; + font-size: 12px; +} + +/** + * Fix code colors + * + * https://github.com/mkdocs/mkdocs/issues/2027 + */ +.rst-content code { + color: #E74C3C; +} + +.rst-content pre code { + color: #000; + background: #f8f8f8; +} + +/* + * Fix link colors when the link text is inline code. + * + * https://github.com/mkdocs/mkdocs/issues/718 + */ +a code { + color: #2980B9; +} +a:hover code { + color: #3091d1; +} +a:visited code { + color: #9B59B6; +} + +/* + * The CSS classes from highlight.js seem to clash with the + * ReadTheDocs theme causing some code to be incorrectly made + * bold and italic. + * + * https://github.com/mkdocs/mkdocs/issues/411 + */ +pre .cs, pre .c { + font-weight: inherit; + font-style: inherit; +} + +/* + * Fix some issues with the theme and non-highlighted code + * samples. Without and highlighting styles attached the + * formatting is broken. + * + * https://github.com/mkdocs/mkdocs/issues/319 + */ +.rst-content .no-highlight { + display: block; + padding: 0.5em; + color: #333; +} + + +/* + * Additions specific to the search functionality provided by MkDocs + */ + +.search-results { + margin-top: 23px; +} + +.search-results article { + border-top: 1px solid #E1E4E5; + padding-top: 24px; +} + +.search-results article:first-child { + border-top: none; +} + +form .search-query { + width: 100%; + border-radius: 50px; + padding: 6px 12px; /* csslint allow: box-model */ + border-color: #D1D4D5; +} + +/* + * Improve inline code blocks within admonitions. + * + * https://github.com/mkdocs/mkdocs/issues/656 + */ + .rst-content .admonition code { + color: #404040; + border: 1px solid #c7c9cb; + border: 1px solid rgba(0, 0, 0, 0.2); + background: #f8fbfd; + background: rgba(255, 255, 255, 0.7); +} + +/* + * Account for wide tables which go off the side. + * Override borders to avoid weirdness on narrow tables. + * + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/pull/1034 + */ +.rst-content .section .docutils { + width: 100%; + overflow: auto; + display: block; + border: none; +} + +td, th { + border: 1px solid #e1e4e5 !important; /* csslint allow: important */ + border-collapse: collapse; +} + +/* + * Without the following amendments, the navigation in the theme will be + * slightly cut off. 
This is due to the fact that the .wy-nav-side has a + * padding-bottom of 2em, which must not necessarily align with the font-size of + * 90 % on the .rst-current-version container, combined with the padding of 12px + * above and below. These amendments fix this in two steps: First, make sure the + * .rst-current-version container has a fixed height of 40px, achieved using + * line-height, and then applying a padding-bottom of 40px to this container. In + * a second step, the items within that container are re-aligned using flexbox. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ + .wy-nav-side { + padding-bottom: 40px; +} + +/* + * The second step of above amendment: Here we make sure the items are aligned + * correctly within the .rst-current-version container. Using flexbox, we + * achieve it in such a way that it will look like the following: + * + * [No repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * [With repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-versions .rst-current-version { + padding: 0 12px; + display: flex; + font-size: initial; + justify-content: space-between; + align-items: center; + line-height: 40px; +} + +/* + * Please note that this amendment also involves removing certain inline-styles + * from the file ./mkdocs/themes/readthedocs/versions.html. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-current-version span { + flex: 1; + text-align: center; +} diff --git a/docker_volume_plugins/hpe_cloud_volumes.html b/docker_volume_plugins/hpe_cloud_volumes.html new file mode 100644 index 00000000..41aabe7e --- /dev/null +++ b/docker_volume_plugins/hpe_cloud_volumes.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/docker_volume_plugins/hpe_cloud_volumes/index.html b/docker_volume_plugins/hpe_cloud_volumes/index.html new file mode 100644 index 00000000..faab564b --- /dev/null +++ b/docker_volume_plugins/hpe_cloud_volumes/index.html @@ -0,0 +1,580 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Index
  • +
  • +
  • +
+
+
+
+
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Introduction

+

This is the documentation for HPE Cloud Volumes Plugin for Docker. It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes.

+ +

Requirements

+
    +
  • Docker Engine 17.09 or greater
  • +
  • If using Docker Enterprise Edition 2.x, the plugin is only supported in swarm mode
  • +
  • Recent Red Hat, Debian or Ubuntu-based Linux distribution
  • +
  • US regions only
  • +
+ + + + + + + + + + + + + +
Plugin | Release Notes
3.1.0 | v3.1.0
+
+

Note

+

Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation.

+
+

Limitations

+

HPE Cloud Volumes provides a Docker certified plugin delivered through the Docker Store. Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying the plugin.

+

The managed plugin does NOT provide:

+
    +
  • Support for Docker's release of Kubernetes in Docker Enterprise Edition 2.x
  • +
  • Support for Windows Containers
  • +
+

The managed plugin does provide a simple way to manage HPE Cloud Volumes integration on your Docker instances using Docker's interface to install and manage the plugin.

+

Installation

+

Plugin privileges

+

In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly.

+

Plugin "cvblock" is requesting the following privileges:
+ - network: [host]
+ - mount: [/dev]
+ - mount: [/run/lock]
+ - mount: [/sys]
+ - mount: [/etc]
+ - mount: [/var/lib]
+ - mount: [/var/run/docker.sock]
+ - mount: [/sbin/iscsiadm]
+ - mount: [/lib/modules]
+ - mount: [/usr/lib64]
+ - allow-all-devices: [true]
+ - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD]
+

+

Host configuration and installation

+

Setting up the plugin varies between Linux distributions.

+

These procedures require root privileges on the cloud instance.

+

Red Hat 7.5+, CentOS 7.5+:

+

yum install -y iscsi-initiator-utils device-mapper-multipath
+docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0
+docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret>
+docker plugin enable cvblock
+systemctl daemon-reload
+systemctl enable iscsid multipathd
+systemctl start iscsid multipathd
+

+

Ubuntu 16.04 LTS and Ubuntu 18.04 LTS:

+

apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0
+docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret> glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable cvblock
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+

+

Debian 9.x (stable):

+

apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0
+docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret> iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable cvblock
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+

+

Making changes

+

The docker plugin set command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable command. For example:

+

docker plugin disable cvblock
+

+
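As a sketch of the complete change cycle, assuming the plugin was installed under the cvblock alias as shown above, LOG_LEVEL (one of the parameters listed below) could be changed like this:

docker plugin disable cvblock
docker plugin set cvblock LOG_LEVEL=trace
docker plugin enable cvblock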

The following parameters can be set on the plugin:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Description | Default
PROVIDER_IP | HPE Cloud Volumes portal | ""
PROVIDER_USERNAME | HPE Cloud Volumes username | ""
PROVIDER_PASSWORD | HPE Cloud Volumes password | ""
PROVIDER_REMOVE | Unassociate Plugin from HPE Cloud Volumes | false
LOG_LEVEL | Log level of the plugin (info, debug, or trace) | debug
SCOPE | Scope of the plugin (global or local) | global
+

In the event of reassociating the plugin with a different HPE Cloud Volumes portal, certain procedures need to be followed:

+

Disable the plugin

+

docker plugin disable cvblock
+

+

Set new parameters

+

docker plugin set cvblock PROVIDER_REMOVE=true
+

+

Enable the plugin

+

docker plugin enable cvblock
+

+

Disable the plugin

+

docker plugin disable cvblock
+

+

The plugin is now ready for re-configuration

+

docker plugin set cvblock PROVIDER_IP=< New portal address > PROVIDER_USERNAME=<access_key> PROVIDER_PASSWORD=<access_secret> PROVIDER_REMOVE=false
+

+
+

Note

+

The PROVIDER_REMOVE=false parameter must be set if the plugin has ever been unassociated from an HPE Cloud Volumes portal.

+
+

Configuration files and options

+

The configuration directory for the plugin is /etc/hpe-storage on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json file contains three sections: global, defaults, and overrides. The global options are plugin runtime parameters and don't have any end-user configurable keys at this time.

+

The defaults map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option.

+

The overrides map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored.

+
+

Note

+

defaults and overrides are dynamically read during runtime while global changes require a plugin restart.

+
+

Example config file in /etc/hpe-storage/volume-driver.json:

+

{
+  "global": {
+            "snapPrefix": "BaseFor",
+            "initiators": ["eth0"],
+            "automatedConnection": true,
+            "existingCloudSubnet": "10.1.0.0/24",
+            "region": "us-east-1",
+            "privateCloud": "vpc-data",
+            "cloudComputeProvider": "Amazon AWS"
+  },
+  "defaults": {
+            "limitIOPS": 1000,
+            "fsOwner": "0:0",
+            "fsMode": "600",
+            "description": "Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin",
+            "perfPolicy": "Other",
+            "protectionTemplate": "twicedaily:4",
+            "encryption": true,
+            "volumeType": "PF",
+            "destroyOnRm": true
+  },
+  "overrides": {
+  }
+}
+
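With a configuration like the one above, a user could still override keys from the defaults section on a per-volume basis, while keys under overrides would be enforced regardless. A hedged sketch (the volume name and values are illustrative):

docker volume create -d cvblock -o limitIOPS=2000 -o description="CI scratch space" --name ci-vol1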

+

For an exhaustive list of options use the help option from the docker CLI:

+

$ docker volume create -d cvblock -o help
+

+

Node fencing

+

If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported.

+

Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node.

+

During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume.

+

The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts.

+

The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O.

+

We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely.

+

The following kernel parameters control the system behavior when a hung task is detected:

+

# Reboot this many seconds after a panic
+kernel.panic = 5
+
+# Consider a hung task reason enough to panic
+kernel.hung_task_panic = 1
+
+# Wait this many seconds before declaring a task hung
+kernel.hung_task_timeout_secs = 150
+

+

Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf file and reboot the system.

+
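A minimal sketch of writing that file as root follows; rebooting as recommended above gives a clean state, although sysctl --system would also reload the values immediately:

cat > /etc/sysctl.d/99-hung_task_timeout.conf <<'EOF'
kernel.panic = 5
kernel.hung_task_panic = 1
kernel.hung_task_timeout_secs = 150
EOF
sysctl --system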
+

Important

+

Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic.

+
+

Usage

+

These are some basic examples on how to use the HPE Cloud Volumes Plugin for Docker.

+

Create a Docker Volume

+

Using docker volume create.

+
+

Note

+

The plugin applies a set of default options when you create new volumes unless you override them using the volume create -o key=value option flags.

+
+

Create a Docker volume with a custom description:

+

docker volume create -d cvblock -o description="My volume description" --name myvol1 
+

+

(Optional) Inspect the new volume:

+

docker volume inspect myvol1
+

+

(Optional) Attach the volume to an interactive container.

+

docker run -it --rm -v myvol1:/data bash
+

+

The volume is mounted inside the container on /data.

+

Clone a Docker Volume

+

Use the docker volume create command with the cloneOf option to clone a Docker volume to a new Docker volume.

+

Clone the Docker volume named myvol1 to a new Docker volume named myvol1-clone.

+

docker volume create -d cvblock -o cloneOf=myvol1 --name=myvol1-clone
+

+

(Optional) Select a snapshot on which to base the clone.

+

docker volume create -d cvblock -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone
+

+

Provisioning Docker Volumes

+

There are several ways to provision a Docker volume depending on what tools are used:

+
    +
  • Docker Engine (CLI)
  • +
  • Docker Compose file with either Docker UCP or Docker Engine
  • +
+

The Docker Volume plugin leverages the existing Docker CLI and APIs; therefore, all native Docker tools may be used to provision a volume.

+
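For instance, a minimal Docker Compose sketch could declare a volume backed by the plugin. The compose file contents, image, and volume name below are illustrative and assume the cvblock alias:

cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  app:
    image: busybox
    command: sleep infinity
    volumes:
      - myvol1:/data
volumes:
  myvol1:
    driver: cvblock
    driver_opts:
      description: "Volume provisioned through Docker Compose"
EOF
docker-compose up -d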
+

Note

+

The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB.

+
+

Config file volume-driver.json, which is stored at /etc/hpe-storage/volume-driver.json:

+

{
+    "global":   {},
+    "defaults": {
+                 "sizeInGiB":"10",
+                 "limitIOPS":"-1",
+                 "limitMBPS":"-1",
+                 "perfPolicy": "DockerDefault"
+                },
+    "overrides":{}
+}
+

+
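For example, the 10GiB default above could be overridden for a single volume with the sizeInGiB option (a sketch; the volume name is illustrative):

docker volume create -d cvblock -o sizeInGiB=50 --name myvol-large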

Import a Volume to Docker

+

Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to the HPE Cloud Volumes documentation. Use the create command with the importVol option to import an HPE Cloud Volume to Docker and name it.

+

Import the HPE Cloud Volume named mycloudvol as a Docker volume named myvol3-imported.

+

docker volume create -d cvblock -o importVol=mycloudvol --name=myvol3-imported
+

+

Import a volume snapshot to Docker

+

Use the create command with the importVolAsClone option to import a HPE Cloud Volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Cloud Volume using the snapshot option.

+

Import the HPE Cloud Volumes snapshot mysnap1 on the volume mycloudvol as a Docker volume named myvol4-clone.

+

docker volume create -d cvblock -o importVolAsClone=mycloudvol -o snapshot=mysnap1 --name=myvol4-clone
+

+
+

Note

+

If no snapshot is specified, the latest snapshot on the volume is imported.

+
+

Restore an offline Docker Volume with specified snapshot

+

It's important that the volume to be restored is in an offline state on the array.

+

If the volume snapshot is not specified, the last volume snapshot is used.

+

docker volume create -d cvblock -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored
+

+

List volumes

+

List Docker volumes.

+

docker volume ls
+DRIVER                     VOLUME NAME
+cvblock:latest              myvol1
+cvblock:latest              myvol1-clone
+

+

Remove a Docker Volume

+

When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished.

+
+

Note

+

To delete volumes from the HPE Cloud Volumes portal using the remove command, the volume should have been created with a -o destroyOnRm flag.

+
+

Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin.

+
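As a sketch, a volume intended to be fully destroyed on removal could be created up front with that flag (the volume name is illustrative):

docker volume create -d cvblock -o destroyOnRm --name scratchvol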

Remove the volume named myvol1.

+

docker volume rm myvol1
+

+

Uninstall

+

The plugin can be removed using the docker plugin rm command. This command will not remove the configuration directory (/etc/hpe-storage/).

+

docker plugin rm cvblock
+

+

Troubleshooting

+

The config directory is at /etc/hpe-storage/. When a plugin is installed and enabled, the HPE Cloud Volumes certificates are created in the config directory.

+

ls -l /etc/hpe-storage/
+total 16
+-r-------- 1 root root 1159 Aug  2 00:20 container_provider_host.cert
+-r-------- 1 root root 1671 Aug  2 00:20 container_provider_host.key
+-r-------- 1 root root 1521 Aug  2 00:20 container_provider_server.cert
+

+

Additionally, there is a config file volume-driver.json present at the same location. This file can be edited to set default parameters for creating Docker volumes.

+

Log file location

+

The docker plugin logs are located at /var/log/hpe-docker-plugin.log

+ +
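To follow the plugin log while reproducing an issue, something like this can be used:

tail -f /var/log/hpe-docker-plugin.log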
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/docker_volume_plugins/hpe_nimble_storage.html b/docker_volume_plugins/hpe_nimble_storage.html new file mode 100644 index 00000000..b7aa7a5e --- /dev/null +++ b/docker_volume_plugins/hpe_nimble_storage.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/docker_volume_plugins/hpe_nimble_storage/index.html b/docker_volume_plugins/hpe_nimble_storage/index.html new file mode 100644 index 00000000..54d018ce --- /dev/null +++ b/docker_volume_plugins/hpe_nimble_storage/index.html @@ -0,0 +1,734 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Index
  • +
  • +
  • +
+
+
+
+
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Introduction

+

This is the documentation for HPE Nimble Storage Volume Plugin for Docker. It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes.

+ +

Requirements

+
    +
  • Docker Engine 17.09 or greater
  • +
  • If using Docker Enterprise Edition 2.x, the plugin is only supported in swarm mode
  • +
  • Recent Red Hat, Debian or Ubuntu-based Linux distribution
  • +
  • NimbleOS 5.0.8/5.1.3 or greater
  • +
+ + + + + + + + + + + + + + + + + + + + +
Plugin | HPE Nimble Storage Version | Release Notes
3.0.0 | 5.0.8.x and 5.1.3.x onwards | v3.0.0
3.1.0 | 5.0.8.x and 5.1.3.x onwards | v3.1.0
+
+

Note

+

Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation.

+
+

Limitations

+

HPE Nimble Storage provides a Docker certified plugin delivered through the Docker Store. HPE Nimble Storage also provides a Docker Volume plugin for Windows Containers; it's available on HPE InfoSight along with its documentation. Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins.

+

The managed plugin does NOT provide:

+
    +
  • Support for Docker's release of Kubernetes in Docker Enterprise Edition 2.x
  • +
  • Support for older versions of NimbleOS (all versions below 5.x)
  • +
  • Support for Windows Containers
  • +
+

The managed plugin does provide a simple way to manage HPE Nimble Storage on your Docker hosts using Docker's interface to install and manage the plugin.

+

Installation

+

Plugin privileges

+

In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly.

+

Plugin "nimble" is requesting the following privileges:
+ - network: [host]
+ - mount: [/dev]
+ - mount: [/run/lock]
+ - mount: [/sys]
+ - mount: [/etc]
+ - mount: [/var/lib]
+ - mount: [/var/run/docker.sock]
+ - mount: [/sbin/iscsiadm]
+ - mount: [/lib/modules]
+ - mount: [/usr/lib64]
+ - allow-all-devices: [true]
+ - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD]
+

+

Host configuration and installation

+

Setting up the plugin varies between Linux distributions. The following workflows have been tested using a Nimble iSCSI group array at 192.168.171.74 with PROVIDER_USERNAME admin and PROVIDER_PASSWORD admin:

+

These procedures require root privileges.

+

Red Hat 7.5+, CentOS 7.5+:

+

yum install -y iscsi-initiator-utils device-mapper-multipath
+docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin
+docker plugin enable nimble
+systemctl daemon-reload
+systemctl enable iscsid multipathd
+systemctl start iscsid multipathd
+

+

Ubuntu 16.04 LTS and Ubuntu 18.04 LTS:

+

apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable nimble
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+

+

Debian 9.x (stable):

+

apt-get install -y open-iscsi multipath-tools xfsprogs
+modprobe xfs
+sed -i"" -e "\$axfs" /etc/modules
+docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable nimble
+systemctl daemon-reload
+systemctl restart open-iscsi multipath-tools
+

+

NOTE: To use the plugin in Fibre Channel environments, use the PROTOCOL=FC environment variable.

+
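A sketch of switching an already installed plugin from iSCSI to Fibre Channel; as described under Making changes below, the plugin must be disabled before its settings can be changed:

docker plugin disable nimble
docker plugin set nimble PROTOCOL=FC
docker plugin enable nimble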

Making changes

+

The docker plugin set command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable command. For example:

+

docker plugin disable nimble
+

+

The following parameters can be set on the plugin:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Description | Default
PROVIDER_IP | HPE Nimble Storage array IP | ""
PROVIDER_USERNAME | HPE Nimble Storage array username | ""
PROVIDER_PASSWORD | HPE Nimble Storage array password | ""
PROVIDER_REMOVE | Unassociate Plugin from HPE Nimble Storage array | false
LOG_LEVEL | Log level of the plugin (info, debug, or trace) | debug
SCOPE | Scope of the plugin (global or local) | global
PROTOCOL | SCSI protocol supported by the plugin (iscsi or fc) | iscsi
+

Security considerations

+

The HPE Nimble Storage credentials are visible to any user who can execute docker plugin inspect nimble. To limit credential visibility, the variables should be unset after certificates have been generated. The following set of steps can be used to accomplish this:

+
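For example, the environment settings, including the credentials, can be read back like this (assuming the standard docker plugin inspect output layout):

docker plugin inspect -f '{{ .Settings.Env }}' nimble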

Add the credentials

+

docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin
+

+

Start the plugin

+

docker plugin enable nimble
+

+

Stop the plugin

+

docker plugin disable nimble
+

+

Remove the credentials

+

docker plugin set nimble PROVIDER_USERNAME="true" PROVIDER_PASSWORD="true"
+

+

Start the plugin

+

docker plugin enable nimble
+

+
+

Note

+

Certificates are stored in /etc/hpe-storage/ on the host and will be preserved across plugin updates.

+
+

In the event of reassociating the plugin with a different HPE Nimble Storage group, certain procedures need to be followed:

+

Disable the plugin

+

docker plugin disable nimble
+

+

Set new parameters

+

docker plugin set nimble PROVIDER_REMOVE=true
+

+

Enable the plugin

+

docker plugin enable nimble
+

+

Disable the plugin

+

docker plugin disable nimble
+

+

The plugin is now ready for re-configuration

+

docker plugin set nimble PROVIDER_IP=< New IP address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false
+

+

Note: The PROVIDER_REMOVE=false parameter must be set if the plugin has ever been unassociated from an HPE Nimble Storage group.

+

Configuration files and options

+

The configuration directory for the plugin is /etc/hpe-storage on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json file contains three sections: global, defaults, and overrides. The global options are plugin runtime parameters and don't have any end-user configurable keys at this time.

+

The defaults map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option.

+

The overrides map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored.

+

These maps should be discussed with the HPE Nimble Storage administrator. A common pattern is that a default protection template is selected for all volumes to fulfill a certain data protection policy enforced by the business it's serving. Another useful option is to override the volume placement options to allow a single HPE Nimble Storage array to provide multi-tenancy for Docker environments.

+

Note: defaults and overrides are dynamically read during runtime while global changes require a plugin restart.

+

Below is an example /etc/hpe-storage/volume-driver.json outlining the above use cases:

+

{
+  "global": {
+    "nameSuffix": ".docker"
+  },
+  "defaults": {
+    "description": "Volume provisioned by Docker",
+    "protectionTemplate": "Retain-90Daily"
+  },
+  "overrides": {
+    "folder": "docker-prod"
+  }
+}
+
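With that example configuration, the description default can be overridden per volume while the folder placement is always enforced. A hedged sketch (the volume name and description are illustrative):

docker volume create -d nimble -o description="Volume for app1 logs" --name applogs
# "description" comes from the defaults map and may be overridden per volume;
# "folder" comes from the overrides map and is silently forced to "docker-prod"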

+

For an exhaustive list of options use the help option from the docker CLI:

+

$ docker volume create -d nimble -o help
+Nimble Storage Docker Volume Driver: Create Help
+Create or Clone a Nimble Storage backed Docker Volume or Import an existing
+Nimble Volume or Clone of a Snapshot into Docker.
+
+Universal options:
+  -o mountConflictDelay=X X is the number of seconds to delay a mount request
+                           when there is a conflict (default is 0)
+
+Create options:
+  -o sizeInGiB=X          X is the size of volume specified in GiB
+  -o size=X               X is the size of volume specified in GiB (short form
+                          of sizeInGiB)
+  -o fsOwner=X            X is the user id and group id that should own the
+                           root directory of the filesystem, in the form of
+                           [userId:groupId]
+  -o fsMode=X             X is 1 to 4 octal digits that represent the file
+                           mode to be applied to the root directory of the
+                           filesystem
+  -o description=X        X is the text to be added to volume description
+                          (optional)
+  -o perfPolicy=X         X is the name of the performance policy (optional)
+                          Performance Policies: Exchange 2003 data store,
+                          Exchange log, Exchange 2007 data store,
+                          SQL Server, SharePoint,
+                          Exchange 2010 data store, SQL Server Logs,
+                          SQL Server 2012, Oracle OLTP,
+                          Windows File Server, Other Workloads,
+                          DockerDefault, General, MariaDB,
+                          Veeam Backup Repository,
+                          Backup Repository
+
+  -o pool=X               X is the name of pool in which to place the volume
+                          Needed with -o folder (optional)
+  -o folder=X             X is the name of folder in which to place the volume
+                          Needed with -o pool (optional).
+  -o encryption           indicates that the volume should be encrypted
+                          (optional, dedupe and encryption are mutually
+                          exclusive)
+  -o thick                indicates that the volume should be thick provisioned
+                          (optional, dedupe and thick are mutually exclusive)
+  -o dedupe               indicates that the volume should be deduplicated
+  -o limitIOPS=X          X is the IOPS limit of the volume. IOPS limit should
+                          be in range [256, 4294967294] or -1 for unlimited.
+  -o limitMBPS=X          X is the MB/s throughput limit for this volume. If
+                          both limitIOPS and limitMBPS are specified, limitMBPS
+                          must not be hit before limitIOPS
+  -o destroyOnRm          indicates that the Nimble volume (including
+                          snapshots) backing this volume should be destroyed
+                          when this volume is deleted
+  -o syncOnUnmount        only valid with "protectionTemplate", if the
+                          protectionTemplate includes a replica destination,
+                          unmount calls will snapshot and transfer the last
+                          delta to the destination. (optional)
+ -o protectionTemplate=X  X is the name of the protection template (optional)
+                          Protection Templates: General, Retain-90Daily,
+                          Retain-30Daily,
+                          Retain-48Hourly-30Daily-52Weekly
+
+Clone options:
+  -o cloneOf=X            X is the name of Docker Volume to create a clone of
+  -o snapshot=X           X is the name of the snapshot to base the clone on
+                          (optional, if missing, a new snapshot is created)
+  -o createSnapshot       indicates that a new snapshot of the volume should be
+                          taken and used for the clone (optional)
+  -o destroyOnRm          indicates that the Nimble volume (including
+                          snapshots) backing this volume should be destroyed
+                          when this volume is deleted
+  -o destroyOnDetach      indicates that the Nimble volume (including
+                          snapshots) backing this volume should be destroyed
+                          when this volume is unmounted or detached
+
+Import Volume options:
+  -o importVol=X          X is the name of the Nimble Volume to import
+  -o pool=X               X is the name of the pool in which the volume to be
+                          imported resides (optional)
+  -o folder=X             X is the name of the folder in which the volume to be
+                          imported resides (optional)
+  -o forceImport          forces the import of the volume.  Note that
+                          overwrites application metadata (optional)
+  -o restore              restores the volume to the last snapshot taken on the
+                          volume (optional)
+  -o snapshot=X           X is the name of the snapshot which the volume will
+                          be restored to, only used with -o restore (optional)
+  -o takeover             indicates the current group will takeover the
+                          ownership of the Nimble volume and volume collection
+                          (optional)
+  -o reverseRepl          reverses the replication direction so that writes to
+                          the Nimble volume are replicated back to the group
+                          where it was replicated from (optional)
+
+Import Clone of Snapshot options:
+  -o importVolAsClone=X   X is the name of the Nimble Volume and Nimble
+                          Snapshot to clone and import
+  -o snapshot=X           X is the name of the Nimble snapshot to clone and
+                          import (optional, if missing, will use the most
+                          recent snapshot)
+  -o createSnapshot       indicates that a new snapshot of the volume should be
+                          taken and used for the clone (optional)
+  -o pool=X               X is the name of the pool in which the volume to be
+                          imported resides (optional)
+  -o folder=X             X is the name of the folder in which the volume to be
+                          imported resides (optional)
+  -o destroyOnRm          indicates that the Nimble volume (including
+                          snapshots) backing this volume should be destroyed
+                          when this volume is deleted
+  -o destroyOnDetach      indicates that the Nimble volume (including
+                          snapshots) backing this volume should be destroyed
+                          when this volume is unmounted or detached
+

+

Node fencing

+

If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported.

+

Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node.

+

During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume.

+

The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts.

+

The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O.

+

We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this provides a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely.

+

The following kernel parameters control the system behavior when a hung task is detected:

+

# Reset after these many seconds after a panic
+kernel.panic = 5
+
+# I do consider hung tasks reason enough to panic
+kernel.hung_task_panic = 1
+
+# To not panic in vain, I'll wait these many seconds before I declare a hung task
+kernel.hung_task_timeout_secs = 150
+

+

Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf file and reboot the system.
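As a minimal sketch (run as root; the values mirror the parameters above), the file can be created and the settings loaded without waiting for the reboot:

# Persist the hung task parameters
cat <<EOF > /etc/sysctl.d/99-hung_task_timeout.conf
kernel.panic = 5
kernel.hung_task_panic = 1
kernel.hung_task_timeout_secs = 150
EOF
# Load all sysctl configuration files, including the one above
sysctl --system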

+
+

Important

+

Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic.

+
+

Usage

+

These are some basic examples on how to use the HPE Nimble Storage Volume Plugin for Docker.

+

Create a Docker Volume

+

Using docker volume create.

+
+

Note

+

The plugin applies a set of default options when you create new volumes unless you override them using the volume create -o key=value option flags.

+
+

Create a Docker volume with a custom description:

+

docker volume create -d nimble -o description="My volume description" --name myvol1 
+

+

(Optional) Inspect the new volume:

+

docker volume inspect myvol1
+

+

(Optional) Attach the volume to an interactive container.

+

docker run -it --rm -v myvol1:/data bash
+

+

The volume is mounted inside the container on /data.
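To verify, a quick check from inside the interactive container shows the filesystem mounted on /data (output varies by environment):

# Run inside the container started above
df -h /data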

+

Clone a Docker Volume

+

Use the docker volume create command with the cloneOf option to clone a Docker volume to a new Docker volume.

+

Clone the Docker volume named myvol1 to a new Docker volume named myvol1-clone.

+

docker volume create -d nimble -o cloneOf=myvol1 --name=myvol1-clone
+

+

(Optional) Select a snapshot on which to base the clone.

+

docker volume create -d nimble -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone
+

+

Provisioning Docker Volumes

+

There are several ways to provision a Docker volume depending on what tools are used:

+
    +
  • Docker Engine (CLI)
  • +
  • Docker Compose file with either Docker UCP or Docker Engine
  • +
+

The Docker Volume plugin leverages the existing Docker CLI and APIs; therefore, all native Docker tools may be used to provision a volume (see the Docker Compose sketch below).
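As a hedged sketch of the Docker Compose approach (the service image, volume name and options are illustrative examples, not prescribed values), a Compose file can declare a volume backed by the plugin:

# docker-compose.yml using the nimble volume driver (names and options are examples)
cat <<EOF > docker-compose.yml
version: "3.7"
services:
  web:
    image: nginx
    volumes:
      - myvol5:/usr/share/nginx/html
volumes:
  myvol5:
    driver: nimble
    driver_opts:
      sizeInGiB: "20"
      description: "Volume provisioned by Docker Compose"
EOF
docker-compose up -d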

+
+

Note

+

The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB.

+
+

Config file volume-driver.json, which is stored at /etc/hpe-storage/volume-driver.json:

+

{
+    "global":   {},
+    "defaults": {
+                 "sizeInGiB":"10",
+                 "limitIOPS":"-1",
+                 "limitMBPS":"-1",
+                 "perfPolicy": "DockerDefault"
+                },
+    "overrides":{}
+}
+

+

Import a volume to Docker

+

Before you begin, take the volume you want to import offline. For information about how to take a volume offline, refer to either the CLI Administration Guide or the GUI Administration Guide on HPE InfoSight. Use the create command with the importVol option to import an HPE Nimble Storage volume to Docker and name it.

+

Import the HPE Nimble Storage volume named mynimblevol as a Docker volume named myvol3-imported.

+

docker volume create -d nimble -o importVol=mynimblevol --name=myvol3-imported
+

+

Import a volume snapshot to Docker

+

Use the create command with the importVolAsClone option to import an HPE Nimble Storage volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Nimble Storage volume using the snapshot option.

+

Import the HPE Nimble Storage snapshot mysnap1 on the volume mynimblevol as a Docker volume named myvol4-clone.

+

docker volume create -d nimble -o importVolAsClone=mynimblevol -o snapshot=mysnap1 --name=myvol4-clone
+

+
+

Note

+

If no snapshot is specified, the latest snapshot on the volume is imported.

+
+

Restore an offline Docker Volume with specified snapshot

+

It's important that the volume to be restored is in an offline state on the array.

+

If the volume snapshot is not specified, the last volume snapshot is used.

+

docker volume create -d nimble -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored
+

+

List volumes

+

List Docker volumes.

+

docker volume ls
+DRIVER                     VOLUME NAME
+nimble:latest              myvol1
+nimble:latest              myvol1-clone
+

+

Remove a Docker Volume

+

When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished.

+
+

Note

+

To delete volumes from the HPE Nimble Storage array using the remove command, the volume must have been created with the -o destroyOnRm flag.

+
+

Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin.
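As a hypothetical illustration (the volume name is an example), a volume created with the -o destroyOnRm flag is permanently deleted from the array when removed from Docker:

docker volume create -d nimble -o destroyOnRm --name myvol6-scratch
# Removing the Docker volume also destroys the backing volume and its snapshots on the array
docker volume rm myvol6-scratch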

+

Remove the volume named myvol1.

+

docker volume rm myvol1
+

+

Uninstall

+

The plugin can be removed using the docker plugin rm command. This command will not remove the configuration directory (/etc/hpe-storage/).

+

docker plugin rm nimble
+

+
+

Important

+

If this is the last plugin referencing the Nimble group and you want to completely remove the configuration directory, follow the steps below:

+
+

docker plugin set nimble PROVIDER_REMOVE=true
+docker plugin enable nimble
+docker plugin rm nimble
+

+

Troubleshooting

+

The config directory is at /etc/hpe-storage/. When a plugin is installed and enabled, the Nimble Group certificates are created in the config directory.

+

ls -l /etc/hpe-storage/
+total 16
+-r-------- 1 root root 1159 Aug  2 00:20 container_provider_host.cert
+-r-------- 1 root root 1671 Aug  2 00:20 container_provider_host.key
+-r-------- 1 root root 1521 Aug  2 00:20 container_provider_server.cert
+

+

Additionally, there is a config file volume-driver.json present at the same location. This file can be edited to set default parameters for creating Docker volumes.

+

Log file location

+

The docker plugin logs are located at /var/log/hpe-docker-plugin.log
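To follow the log while reproducing an issue:

tail -f /var/log/hpe-docker-plugin.log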

+

Upgrade from older plugins

+

When upgrading from plugin version 2.5.1 or older, please follow the steps below.

+

Ubuntu 16.04 LTS and Ubuntu 18.04 LTS:

+

docker plugin disable nimble:latest -f
+docker plugin upgrade --grant-all-permissions  nimble store/hpestorage/nimble:3.0.0 --skip-remote-check
+docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu
+docker plugin enable nimble:latest
+

+

Red Hat 7.5+, CentOS 7.5+, Oracle Enterprise Linux 7.5+ and Fedora 28+:

+

docker plugin disable nimble:latest -f
+docker plugin upgrade --grant-all-permissions  nimble store/hpestorage/nimble:3.0.0 --skip-remote-check
+docker plugin enable nimble:latest
+

+
+

Important

+

In Swarm Mode, drain any running containers from the node where the plugin is being upgraded.

+
+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/ezmeral/img/hpecp-old.png b/ezmeral/img/hpecp-old.png new file mode 100644 index 00000000..62bfab59 Binary files /dev/null and b/ezmeral/img/hpecp-old.png differ diff --git a/ezmeral/install.html b/ezmeral/install.html new file mode 100644 index 00000000..c9a68c7c --- /dev/null +++ b/ezmeral/install.html @@ -0,0 +1,307 @@ + + + + + + + + + + + + + + + + + + Install HPE CSI Driver - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • HPE EZMERAL RUNTIME ENTERPRISE »
  • +
  • Install HPE CSI Driver
  • +
  • +
  • +
+
+
+
+
+ +

Introduction

+

HPE Ezmeral Runtime Enterprise deploys and manages open source upstream Kubernetes clusters through its management console. It's also capable of importing foreign Kubernetes clusters. This guide describes the necessary steps to perform a successful deployment of the HPE CSI Driver for Kubernetes on HPE Ezmeral Runtime Enterprise managed clusters.

+

Prerequisites

+

It's up to the HPE Ezmeral Runtime Enterprise administrator who deploys Kubernetes clusters to ensure that the particular version of the CSI driver (e.g. v2.0.0) is supported with the following components.

+
    +
  • HPE Ezmeral Runtime Enterprise worker node host operating system
  • +
  • HPE Ezmeral Runtime Enterprise deployed Kubernetes cluster version
  • +
+

Examine the table found in the Compatibility and Support section of the CSI driver overview. Particular Container Storage Providers may have additional prerequisites.

+

Version 5.4.0 and later

+

In Ezmeral 5.4.0 and later, an exception has been added to the "hpe-storage" Namespace. Proceed to Installation and disregard any steps outlined in this guide.

+
+

Note

+

If the HPE CSI Driver built-in NFS Server Provisioner will be used, an exception needs to be granted to the "hpe-nfs" Namespace.

Run:
kubectl patch --type json -p '[{"op": "add", "path": "/spec/match/excludedNamespaces/-", "value": "hpe-nfs"}]' k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container

+
+

Version 5.3.0

+

The CSI driver needs privileged access to the worker nodes to attach and detach storage devices. By default, an admission controller prevents all user-deployed workloads from accessing the host filesystem. An exception needs to be created for the "hpe-storage" Namespace.

+

As a Kubernetes cluster admin, run the following.

+

kubectl create ns hpe-storage
+kubectl patch --type json -p '[{"op":"add","path":"/spec/unrestrictedFsMountNamespaces/-","value":"hpe-storage"}]' hpecpconfigs/hpecp-global-config -n hpecp
+

+
+

Caution

+

In theory you may use any Namespace name desired. This might change in a future release and it's encouraged to use "hpe-storage" for compatibility with upcoming releases of HPE Ezmeral Runtime Enterprise.

+
+

If this configuration change is not performed, the following events will be seen on the CSI controller ReplicaSet or CSI node DaemonSet when trying to schedule Pods.

+

Events:
+  Type     Reason        Age                    From                   Message
+  ----     ------        ----                   ----                   -------
+  Warning  FailedCreate  2m4s (x17 over 7m32s)  replicaset-controller  Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.
+

+

Version 5.2.0 or earlier

+

Early versions of HPE Ezmeral Runtime Enterprise (HPE Container Platform, HPE Ezmeral Container Platform) contained a checkbox to deploy the HPE CSI Driver for Kubernetes. This method is not supported. Make sure clusters are deployed without the checkbox ticked.

+

+

Continue with Installation.

+

Installation

+

Any method to install the HPE CSI Driver for Kubernetes on an HPE Ezmeral Runtime Enterprise managed Kubernetes cluster is supported. Helm is strongly recommended. Make sure to deploy the CSI driver to the "hpe-storage" Namespace for future compatibility.

+ +
+

Important

+

In some deployments of Ezmeral the kubelet root has been relocated. In those circumstances you'll see errors similar to: Error: command mount failed with rc=32 err=mount: /dev/mapper/mpathh is already mounted or /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid busy /dev/mapper/mpathh is already mounted on /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid. In this case it's recommended to install the CSI driver using Helm with the --set kubeletRootDir=/var/lib/docker/kubelet parameter.
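As a hedged sketch (the chart repository URL, chart name and release name are assumptions based on the HPE co-deployments repository; verify against the current Helm chart documentation), an install honoring the relocated kubelet root might look like:

helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
helm repo update
# Release and Namespace names are examples; kubeletRootDir matches the relocated path above
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \
  --namespace hpe-storage --create-namespace \
  --set kubeletRootDir=/var/lib/docker/kubelet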

+
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/ezmeral_container_platform/install.html b/ezmeral_container_platform/install.html new file mode 100644 index 00000000..f7ef41af --- /dev/null +++ b/ezmeral_container_platform/install.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/flexvolume_driver/container_provider/index.html b/flexvolume_driver/container_provider/index.html new file mode 100644 index 00000000..3ac80c8b --- /dev/null +++ b/flexvolume_driver/container_provider/index.html @@ -0,0 +1,1192 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Index
  • +
  • +
  • +
+
+
+
+
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Overview

+

The HPE Volume Driver for Kubernetes FlexVolume Plugin leverages HPE Nimble Storage or HPE Cloud Volumes to provide scalable and persistent storage for stateful applications.

+
+

Important

+

When using HPE Nimble Storage with Kubernetes 1.13 and newer, please use the HPE CSI Driver for Kubernetes.

+
+

Source code and developer documentation is available in the hpe-storage/flexvolume-driver GitHub repo.

+ +

Platform requirements

+

The FlexVolume driver supports multiple backends that are based on a "container provider" architecture. Currently, Nimble and Cloud Volumes are supported.

+

HPE Nimble Storage Platform Requirements

+ + + + + + + + + + + + + + + + + + + + + + + +
DriverHPE Nimble Storage VersionRelease NotesBlog
v3.0.05.0.8.x and 5.1.3.x onwardsv3.0.0HPE Storage Tech Insiders
v3.1.05.0.8.x and 5.1.3.x onwardsv3.1.0
+
    +
  • OpenShift Container Platform 3.9, 3.10 and 3.11.
  • +
  • Kubernetes 1.10 and above.
  • +
  • Redhat/CentOS 7.5+
  • +
  • Ubuntu 16.04/18.04 LTS
  • +
+

Note: Synchronous replication (Peer Persistence) is not supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin.

+

HPE Cloud Volumes Platform Requirements

+ + + + + + + + + + + + + + + +
DriverRelease NotesBlog
v3.1.0v3.1.0Using HPE Cloud Volumes with Amazon EKS
+
    +
  • Amazon EKS 1.12/1.13
  • +
  • Microsoft Azure AKS 1.12/1.13
  • +
  • US regions only
  • +
+
+

Important

+

HPE Cloud Volumes was introduced in HPE CSI Driver for Kubernetes v1.5.0. Make sure to check if your cloud is supported by the CSI driver first.

+
+

Deploying to Kubernetes

+

The recommended way to deploy and manage the HPE Volume Driver for Kubernetes FlexVolume Plugin is to use Helm. Please see the co-deployments repository for further information.

+

Use the following steps for a manual installation.

+

Step 1: Create a secret

+

HPE Nimble Storage

+

Replace the password string (YWRtaW4=) below with a base64 encoded version of your password and replace the backend with your array IP address and save it as hpe-secret.yaml.

+

apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-secret
+  namespace: kube-system
+stringData:
+  backend: 192.168.1.1
+  username: admin
+  protocol: "iscsi"
+data:
+  # echo -n "admin" | base64
+  password: YWRtaW4=
+

+

HPE Cloud Volumes

+

Replace the username and password strings (YWRtaW4=) with a base64 encoded version of your HPE Cloud Volumes "access_key" and "access_secret". Also, replace the backend with HPE Cloud Volumes portal fully qualified domain name (FQDN) and save it as hpe-secret.yaml.

+

apiVersion: v1
+kind: Secret
+metadata:
+  name: hpe-secret
+  namespace: kube-system
+stringData:
+  backend: cloudvolumes.hpe.com
+  protocol: "iscsi"
+  serviceName: cv-cp-svc
+  servicePort: "8080"
+data:
+  # echo -n "<my very confidential access key>" | base64
+  username: YWRtaW4=
+  # echo -n "<my very confidential secret key>" | base64
+  password: YWRtaW4=
+

+

Create the secret:

+

kubectl create -f hpe-secret.yaml
+secret "hpe-secret" created
+

+

You should now see the HPE secret in the kube-system namespace.

+

kubectl get secret/hpe-secret -n kube-system
+NAME                  TYPE                                  DATA      AGE
+hpe-secret            Opaque                                5         3s
+

+

Step 2. Create a ConfigMap

+

The ConfigMap is used to set and tweak defaults for both the FlexVolume driver and Dynamic Provisioner.

+

HPE Nimble Storage

+

Edit the below default parameters as required for FlexVolume driver and save it as hpe-config.yaml.

+

kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: hpe-config
+  namespace: kube-system
+data:
+  volume-driver.json: |-
+    {
+      "global":   {},
+      "defaults": {
+                 "limitIOPS":"-1",
+                 "limitMBPS":"-1",
+                 "perfPolicy": "Other"
+                },
+      "overrides":{}
+    }
+

+
+

Tip

+

Please see Advanced for more volume-driver.json configuration options.

+
+

HPE Cloud Volumes

+

Edit the below parameters as required with your public cloud info and save it as hpe-config.yaml.

+

kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: hpe-config
+  namespace: kube-system
+data:
+  volume-driver.json: |-
+    {
+      "global": {
+                "snapPrefix": "BaseFor",
+                "initiators": ["eth0"],
+                "automatedConnection": true,
+                "existingCloudSubnet": "10.1.0.0/24",
+                "region": "us-east-1",
+                "privateCloud": "vpc-data",
+                "cloudComputeProvider": "Amazon AWS"
+      },
+      "defaults": {
+                "limitIOPS": 1000,
+                "fsOwner": "0:0",
+                "fsMode": "600",
+                "description": "Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin",
+                "perfPolicy": "Other",
+                "protectionTemplate": "twicedaily:4",
+                "encryption": true,
+                "volumeType": "PF",
+                "destroyOnRm": true
+      },
+      "overrides": {
+      }
+    }
+

+

Create the ConfigMap:

+

kubectl create -f hpe-config.yaml
+configmap/hpe-config created
+

+

Step 3. Deploy the FlexVolume driver and dynamic provisioner

+

Deploy the driver as a DaemonSet and the dynamic provisioner as a Deployment.

+

HPE Nimble Storage

+

Version 3.0.0:

+

kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.0.0.yaml
+

+

Version 3.1.0:

+

kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.1.0.yaml
+

+

HPE Cloud Volumes

+

Container-Provider Service:

+

kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-cp-v3.1.0.yaml
+

+

The FlexVolume driver has different declarations depending on the Kubernetes distribution.

+

Amazon EKS:

+

kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-aws-flexvolume-driver-v3.1.0.yaml
+

+

Microsoft Azure AKS:

+

kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-azure-flexvolume-driver-v3.1.0.yaml
+

+

Generic:

+

kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-flexvolume-driver-v3.1.0.yaml
+

+
+

Note

+

The declarations for HPE Volume Driver for Kubernetes FlexVolume Plugin can be found in the co-deployments repository.

+
+

Check to see all hpe-flexvolume-driver Pods (one per compute node) and the hpe-dynamic-provisioner Pod are running.

+

kubectl get pods -n kube-system
+NAME                                            READY   STATUS    RESTARTS   AGE
+hpe-flexvolume-driver-2rdt4                     1/1     Running   0          45s
+hpe-flexvolume-driver-md562                     1/1     Running   0          44s
+hpe-flexvolume-driver-x4k96                     1/1     Running   0          44s
+hpe-dynamic-provisioner-59f9d495d4-hxh29        1/1     Running   0          24s
+

+

For HPE Cloud Volumes, check that hpe-cv-cp pod is running as well.

+

kubectl get pods -n kube-system -l=app=cv-cp
+NAME                                READY   STATUS    RESTARTS   AGE
+hpe-cv-cp-2rdt4                     1/1     Running   0          45s
+

+

Using

+

Get started using the FlexVolume driver by setting up StorageClass and PVC API objects. See Using for examples.

+

These instructions are provided as an example of how to use the HPE Volume Driver for Kubernetes FlexVolume Plugin with an HPE Nimble Storage array.

+

The below YAML declarations are meant to be created with kubectl create. Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this:

+

kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+

+
+

Tip

+

Some of the examples supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin are available for HPE Nimble Storage or HPE Cloud Volumes in the GitHub repo.

+
+

To get started, create a StorageClass API object referencing the hpe-secret and defining additional (optional) StorageClass parameters:

+

Sample StorageClass

+

Sample storage classes can be found for HPE Nimble Storage and HPE Cloud Volumes.

+
+

Hint

+

See StorageClass parameters for HPE Nimble Storage and HPE Cloud Volumes for a comprehensive overview.

+
+

Test and verify volume provisioning

+

Create a StorageClass with volume parameters as required.

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: sc-nimble
+provisioner: hpe.com/nimble
+parameters:
+  description: "Volume from HPE FlexVolume driver"
+  perfPolicy: "Other Workloads"
+  limitIOPS: "76800"
+

+

Create a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-nimble
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  storageClassName: sc-nimble
+

+

Check that a new PersistentVolume is created based on your claim:

+

kubectl get pv
+NAME                                            CAPACITY     ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
+sc-nimble-13336da3-7ca3-11e9-826c-00505693581f  10Gi         RWO            Delete           Bound    default/pvc-nimble  sc-nimble               3s
+

+

The above output means that the FlexVolume driver successfully provisioned a new volume and bound the requesting PVC to a new PV. The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container:

+

kind: Pod
+apiVersion: v1
+metadata:
+  name: pod-nimble
+spec:
+  containers:
+    - name: pod-nimble-con-1
+      image: nginx
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: export1
+          mountPath: /data
+    - name: pod-nimble-cont-2
+      image: debian
+      command: ["bin/sh"]
+      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
+      volumeMounts:
+        - name: export1
+          mountPath: /data
+  volumes:
+    - name: export1
+      persistentVolumeClaim:
+        claimName: pvc-nimble
+

+

Check if the pod is running successfully:

+

kubectl get pod pod-nimble
+NAME         READY   STATUS    RESTARTS   AGE
+pod-nimble   2/2     Running   0          2m29s
+

+

Use case specific examples

+

These StorageClass examples illustrate combinations of options when provisioning volumes.

+

Data protection

+

This StorageClass creates thinly provisioned volumes with deduplication turned on. It will also apply the Performance Policy "SQL Server" along with a Protection Template. The Protection Template needs to be defined on the array.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+    name: oltp-prod
+provisioner: hpe.com/nimble
+parameters:
+    thick: "false"
+    dedupe: "true"
+    perfPolicy: "SQL Server"
+    protectionTemplate: "Retain-48Hourly-30Daily-52Weekly"
+

+

Clone and throttle for devs

+

This StorageClass will create clones of a "production" volume and throttle the performance of each clone to 1000 IOPS. When the PVC is deleted, it will be permanently deleted from the backend array.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: oltp-dev-clone-of-prod
+provisioner: hpe.com/nimble
+parameters:
+  limitIOPS: "1000"
+  cloneOf: "oltp-prod-1adee106-110b-11e8-ac84-00505696c45f"
+  destroyOnRm: "true"
+

+

Clone a non-containerized volume

+

This StorageClass will clone a standard backend volume (without container metadata on it) from a particular pool on the backend.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: import-clone-legacy-prod
+provisioner: hpe.com/nimble
+parameters:
+  pool: "flash"
+  importVolAsClone: "production-db-vol"
+  destroyOnRm: "true"
+

+

Import (cutover) a volume

+

This StorageClass will import an existing Nimble volume to Kubernetes. The source volume needs to be offline for the import to succeed.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: import-clone-legacy-prod
+provisioner: hpe.com/nimble
+parameters:
+  pool: "flash"
+  importVol: "production-db-vol"
+

+

Using overrides

+

The HPE Dynamic Provisioner for Kubernetes understands a set of annotation keys a user can set on a PVC. If the corresponding keys exist in the list of the allowOverrides key in the StorageClass, the end user can tweak certain aspects of the provisioning workflow. This opens up very advanced data services.

+

StorageClass object:

+

apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: my-sc
+provisioner: hpe.com/nimble
+parameters:
+  description: "Volume provisioned by StorageClass my-sc"
+  dedupe: "false"
+  destroyOnRm: "true"
+  perfPolicy: "Windows File Server"
+  folder: "myfolder"
+  allowOverrides: snapshot,limitIOPS,perfPolicy
+

+

PersistentVolumeClaim object:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+ annotations:
+    hpe.com/description: "This is my custom description"
+    hpe.com/limitIOPS: "8000"
+    hpe.com/perfPolicy: "SQL Server"
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  storageClassName: my-sc
+

+

This will create a PV of 8000 IOPS with the Performance Policy of "SQL Server" and a custom volume description.

+

Creating clones of PVCs

+

Using a StorageClass to clone a PV is practical when there's a need to clone across namespaces (for example from prod to test or stage). If a user wants to clone any arbitrary volume, it becomes a bit tedious to create a StorageClass for each clone. The annotation hpe.com/cloneOfPVC allows a user to clone any PVC within a namespace.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc-clone
+ annotations:
+    hpe.com/cloneOfPVC: my-pvc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  storageClassName: my-sc
+

+

StorageClass parameters

+

This section highlights all the available StorageClass parameters that are supported.

+

HPE Nimble Storage StorageClass parameters

+

A StorageClass is used to provision or clone an HPE Nimble Storage-backed persistent volume. It can also be used to import an existing HPE Nimble Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.

+

A sample StorageClass is provided.

+
+

Note

+

These are optional parameters.

+
+

Common parameters for Provisioning and Cloning

+

These parameters are mutable between a parent volume and creating a clone from a snapshot.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
nameSuffixTextSuffix to append to Nimble volumes. Defaults to .docker
destroyOnRmBooleanIndicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted.
limitIOPSIntegerThe IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default).
limitMBPSIntegerThe MB/s throughput limit for the volume.
descriptionTextText to be added to the volume's description on the Nimble array.
perfPolicyTextThe name of the performance policy to assign to the volume. Default example performance policies include "Backup Repository", "Exchange 2003 data store", "Exchange 2007 data store", "Exchange 2010 data store", "Exchange log", "Oracle OLTP", "Other Workloads", "SharePoint", "SQL Server", "SQL Server 2012", "SQL Server Logs".
protectionTemplateTextThe name of the protection template to assign to the volume. Default examples of protection templates include "Retain-30Daily", "Retain-48Hourly-30Daily-52Weekly", and "Retain-90Daily".
folderTextThe name of the Nimble folder in which to place the volume.
thickBooleanIndicates that the volume should be thick provisioned.
dedupeEnabledBooleanIndicates that the volume should enable deduplication.
syncOnUnmountBooleanIndicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node.
+
+

Note

+

Performance Policies, Folders and Protection Templates are Nimble specific constructs that can be created on the Nimble array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight.

+
+

Provisioning parameters

+

These parameters are immutable for clones once a volume has been created.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
fsOwneruserId:groupIdThe user id and group id that should own the root directory of the filesystem.
fsModeOctal digits1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem.
encryptionBooleanIndicates that the volume should be encrypted.
poolTextThe name of the pool in which to place the volume.
+

Cloning parameters

+

Cloning supports two modes of cloning. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Nimble volume name to clone and import to Kubernetes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
cloneOfTextThe name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive.
importVolAsCloneTextThe name of the Nimble volume to clone and import. importVolAsClone and cloneOf are mutually exclusive.
snapshotTextThe name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created.
createSnapshotBooleanIndicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created.
snapshotPrefixTextA prefix to add to the beginning of the snapshot name.
+

Import parameters

+

Importing volumes to Kubernetes requires the source Nimble volume to be offline. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
importVolTextThe name of the Nimble volume to import.
snapshotTextThe name of the Nimble snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored.
restoreBooleanRestores the volume to the last snapshot taken on the volume.
takeoverBooleanIndicates the current group will takeover ownership of the Nimble volume and volume collection. This should be performed against a downstream replica.
reverseReplBooleanReverses the replication direction so that writes to the Nimble volume are replicated back to the group where it was replicated from.
forceImportBooleanForces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead.
+
+

Note

+

HPE Nimble Docker Volume workflows work with a 1:1 mapping between volume and volume collection.

+
+

HPE Cloud Volumes StorageClass parameters

+

A StorageClass is used to provision or clone an HPE Cloud Volumes-backed persistent volume. It can also be used to import an existing HPE Cloud Volumes volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.

+

A sample StorageClass is provided.

+
+

Note

+

These are optional parameters.

+
+

Common parameters for Provisioning and Cloning

+

These parameters are mutable between a parent volume and creating a clone from a snapshot.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
nameSuffixTextSuffix to append to Cloud Volumes.
destroyOnRmBooleanIndicates the backing Cloud volume (including snapshots) should be destroyed when the PVC is deleted.
limitIOPSIntegerThe IOPS limit of the volume. The IOPS limit should be in the range 300 to 50000.
perfPolicyTextThe name of the performance policy to assign to the volume. Default example performance policies include "Other, Exchange, Oracle, SharePoint, SQL, Windows File Server".
protectionTemplateTextThe name of the protection template to assign to the volume. Default examples of protection templates include "daily:3, daily:7, daily:14, hourly:6, hourly:12, hourly:24, twicedaily:4, twicedaily:8, twicedaily:14, weekly:2, weekly:4, weekly:8, monthly:3, monthly:6, monthly:12 or none".
volumeTypeTextCloud Volume type. Supported types are PF and GPF.
+

Provisioning parameters

+

These parameters are immutable for clones once a volume has been created.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
fsOwneruserId:groupIdThe user id and group id that should own the root directory of the filesystem.
fsModeOctal digits1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem.
encryptionBooleanIndicates that the volume should be encrypted.
+

Cloning parameters

+

Cloning supports two modes of cloning. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Cloud volume name to clone and import to Kubernetes.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
cloneOfTextThe name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive.
importVolAsCloneTextThe name of the Cloud Volume volume to clone and import. importVolAsClone and cloneOf are mutually exclusive.
snapshotTextThe name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created.
createSnapshotBooleanIndicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created.
snapshotPrefixTextA prefix to add to the beginning of the snapshot name.
replStoreTextReplication store name. Should be used with importVolAsClone parameter to clone a replica volume
+

Import parameters

+

Importing volumes to Kubernetes requires that the source Cloud volume is not attached to any nodes. All previous Access Control Records will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin.

+ + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
importVolTextThe name of the Cloud volume to import.
forceImportBooleanForces the import of a volume that is provisioned by another K8s cluster but not attached to any nodes.
+

Diagnostics

+

This section outlines a few troubleshooting steps for the HPE Volume Driver for Kubernetes FlexVolume Plugin. This product is supported by HPE; please consult with your support organization (Nimble, Cloud Volumes, etc.) prior to attempting any configuration changes.

+

Troubleshooting FlexVolume driver

+

The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach (MUAD) operations as workloads request storage resources. The binary relies on communicating with a socket on the host, where the volume plugin responsible for the MUAD operations performs control-plane or data-plane operations against the backend system hosting the actual volumes.

+

Locations

+

The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking.

+

The name and the location of the binary varies based on Kubernetes distribution (the default 'exec' path) and what backend driver is being used. In a typical scenario, using Nimble, this is expected:

+
    +
  • Binary: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble
  • +
  • Config file: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble.json
  • +
+

Override defaults

+

By default, it contains only the path to the socket file for the volume plugin:

+

{
+    "dockerVolumePluginSocketPath": "/etc/hpe-storage/nimble.sock"
+}
+

+

Valid options for the FlexVolume driver can be inspected by executing the binary on the host with the config argument:

+

/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble config
+Error processing option 'logFilePath' - key:logFilePath not found
+Error processing option 'logDebug' - key:logDebug not found
+Error processing option 'supportsCapabilities' - key:supportsCapabilities not found
+Error processing option 'stripK8sFromOptions' - key:stripK8sFromOptions not found
+Error processing option 'createVolumes' - key:createVolumes not found
+Error processing option 'listOfStorageResourceOptions' - key:listOfStorageResourceOptions not found
+Error processing option 'factorForConversion' - key:factorForConversion not found
+Error processing option 'enable1.6' - key:enable1.6 not found
+
+Driver=nimble Version=v2.5.1-50fbff2aa14a693a9a18adafb834da33b9e7cc89
+Current Config:
+  dockerVolumePluginSocketPath = /etc/hpe-storage/nimble.sock
+           stripK8sFromOptions = true
+                   logFilePath = /var/log/dory.log
+                      logDebug = false
+                 createVolumes = false
+                     enable1.6 = false
+           factorForConversion = 1073741824
+  listOfStorageResourceOptions = [size sizeInGiB]
+          supportsCapabilities = true
+

+

An example tweak could be to enable debug logging and enable support for Kubernetes 1.6 (which we don't officially support). The config file would then end up like this:

+

{
+    "dockerVolumePluginSocketPath": "/etc/hpe-storage/nimble.sock",
+    "logDebug": true,
+    "enable1.6": true
+}
+

+

Execute the binary again (nimble config) to ensure the parameters and config file get parsed correctly. Since the config file is read on each FlexVolume operation, no restart of anything is needed.

+

See Advanced for more parameters for the driver.json file.

+

Connectivity

+

To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request:

+

/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble mount no/op '{"name":"myvol1"}'
+

+

If the FlexVolume driver can successfully communicate with the volume plugin socket:

+

{"status":"Failure","message":"configured to NOT create volumes"}
+

+

In the case of any other output, check if the backend volume plugin is alive with curl:

+

curl --unix-socket /etc/hpe-storage/nimble.sock -d '{}' http://localhost/VolumeDriver.Capabilities
+

+

It should output:

+

{"capabilities":{"scope":"global"},"Err":""}
+

+

FlexVolume and dynamic provisioner driver logs

+

The HPE Volume Driver for Kubernetes FlexVolume Plugin logs data to the standard output stream. If the logs need to be retained long term, use a standard logging solution. Some of the logs on the host are persisted and follow standard logrotate policies.

+

FlexVolume driver logs:

+

kubectl logs -f daemonset.apps/hpe-flexvolume-driver -n kube-system
+

+

The logs are persisted at /var/log/hpe-docker-plugin.log and /var/log/dory.log

+

Dynamic Provisioner logs:

+

kubectl logs -f  deployment.apps/hpe-dynamic-provisioner -n kube-system
+

+

The logs are persisted at /var/log/hpe-dynamic-provisioner.log

+

Log Collector

+

The log collector script hpe-logcollector.sh can be used to collect diagnostic logs using kubectl.

+

Download the script as follows:

+

curl -O https://raw.githubusercontent.com/hpe-storage/flexvolume-driver/master/hpe-logcollector.sh
+chmod 555 hpe-logcollector.sh
+

+

Usage:

+

./hpe-logcollector.sh -h
+Diagnostic Script to collect HPE Storage logs using kubectl
+
+Usage:
+     hpe-logcollector.sh [-h|--help][--node-name NODE_NAME][-n|--namespace NAMESPACE][-a|--all]
+Where
+-h|--help                  Print the Usage text
+--node-name NODE_NAME      where NODE_NAME is kubernetes Node Name needed to collect the
+                           hpe diagnostic logs of the Node
+-n|--namespace NAMESPACE   where NAMESPACE is namespace of the pod deployment. default is kube-system
+-a|--all                   collect diagnostic logs of all the nodes.If
+                           nothing is specified logs would be collected
+                           from all the nodes
+

+

Advanced Configuration

+

This section describes some of the advanced configuration steps available to tweak behavior of the HPE Volume Driver for Kubernetes FlexVolume Plugin.

+

Set defaults at the compute node level

+

During normal operations, defaults are set in either the ConfigMap or in a StorageClass itself. The picking order is:

+
    +
  • StorageClass
  • +
  • ConfigMap
  • +
  • driver.json
  • +
+

Please see Diagnostics to locate the driver for your particular environment. Add this object to the configuration file, nimble.json, for example:

+

{
+    "defaultOptions": [{"option1": "value1"}, {"option2": "value2"}]
+}
+

+

Where option1 and option2 are valid backend volume plugin create options.
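For example, a hypothetical nimble.json enforcing a performance policy and folder placement at the node level (option values are examples taken from the plugin's create options) could be written like this:

# Rewrite the FlexVolume driver config with node-level defaults (illustrative values)
cat <<EOF > /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble.json
{
    "dockerVolumePluginSocketPath": "/etc/hpe-storage/nimble.sock",
    "defaultOptions": [{"perfPolicy": "Other Workloads"}, {"folder": "docker-prod"}]
}
EOF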

+
+

Note

+

It's highly recommended to control defaults with StorageClass API objects or the ConfigMap.

+
+

Global options

+

Each driver supports setting certain "global" options in the ConfigMap. Some options are common, some are driver specific.

+

Common

+ + + + + + + + + + + + + + + + + + + + +
ParameterStringDescription
volumeDirTextRoot directory on the host to mount the volumes. This parameter needs correlation with the podsmountdir path in the volumeMounts stanzas of the deployment.
logDebugBooleanTurn on debug logging, set to false by default.
+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/flexvolume_driver/dory/img/dory.png b/flexvolume_driver/dory/img/dory.png new file mode 100644 index 00000000..a4b19d11 Binary files /dev/null and b/flexvolume_driver/dory/img/dory.png differ diff --git a/flexvolume_driver/dory/index.html b/flexvolume_driver/dory/index.html new file mode 100644 index 00000000..093234c2 --- /dev/null +++ b/flexvolume_driver/dory/index.html @@ -0,0 +1,251 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Index
  • +
  • +
  • +
+
+
+
+
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Introduction

+

The Open Source project Dory was designed in 2017 to transition Docker Volume plugins to be used with Kubernetes. Dory is the shim between the FlexVolume exec calls and the Docker Volume API.

+

Dory

+

The main repository is not currently maintained and the most up-to-date version lives in the HPE Volume Driver for Kubernetes FlexVolume Plugin repository where Dory is packaged as a privileged DaemonSet to support HPE storage products. There may be other forks associated with other Docker Volume plugins out there.

+
+

Why is the driver called Dory?

+

Dory speaks whale!

+
+

Dynamic Provisioning

+

As the FlexVolume Plugin doesn't provide any dynamic provisioning, HPE designed a provisioner, Doryd, to work with Docker Volume plugins and provide a complete solution. It runs as a Deployment and monitors PVC requests.

+

FlexVolume Plugin in Kubernetes

+

According to the Kubernetes SIG storage community, the FlexVolume Plugin interface will continue to be supported.

+

Move to CSI

+

HPE encourages using the CSI drivers for Kubernetes 1.13 and newer where available.

+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/flexvolume_driver/hpe_3par_primera_installer/index.html b/flexvolume_driver/hpe_3par_primera_installer/index.html new file mode 100644 index 00000000..fe976bc4 --- /dev/null +++ b/flexvolume_driver/hpe_3par_primera_installer/index.html @@ -0,0 +1,1071 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Index
  • +
  • +
  • +
+
+
+
+
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Overview

+

The HPE 3PAR and Primera Volume Plug-in for Docker leverages Ansible to deploy the 3PAR/Primera driver for Kubernetes in order to provide scalable and persistent storage for stateful applications.

+
+

Important

+

When using HPE 3PAR/Primera Storage with Kubernetes 1.15 and newer, please use the HPE CSI Driver for Kubernetes.

+
+

Source code is available in the hpe-storage/python-hpedockerplugin GitHub repo.

+ +

Refer to the SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker.

+

Platform requirements

+

The HPE 3PAR/Primera FlexVolume driver supports multiple backends that are based on a "container provider" architecture.

+

HPE 3PAR/Primera Storage Platform Requirements

+

Ensure that you have reviewed the System Requirements.

+ + + + + + + + + + + + + + + +
DriverHPE 3PAR/Primera OS VersionRelease Notes
v3.3.13PAR OS: 3.3.1 MU5+
Primera OS: 4.0+
v3.3.1
+
    +
  • OpenShift Container Platform 3.9, 3.10 and 3.11.
  • +
  • Kubernetes 1.10 and above.
  • +
  • Redhat/CentOS 7.5+
  • +
+

Note: Refer to SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker.

+

Deploying to Kubernetes

+

The recommended way to deploy and manage the HPE 3PAR and Primera Volume Plug-in for Kubernetes is to use Ansible.

+

Use the following steps to configure Ansible to perform the installation.

+

Step 1: Install Ansible

+

Ensure that Ansible (v2.5 to v2.8) is installed. For more information, see Ansible Installation Guide.

+

NOTE: Ansible only needs to be installed on the machine that will be performing the deployment. Ansible does not need to be installed on your Kubernetes cluster.

+

$ pip install ansible
+$ ansible --version
+ansible 2.7.12
+

+
Ansible: Connecting to remote nodes
+

Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.

+
Ansible: Check your SSH connections
+

Confirm that you can connect using SSH to all the nodes in your Kubernetes cluster using the same username. If necessary, add your public SSH key to the authorized_keys file on those systems.
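A minimal sketch (the node addresses are the ones used in the sample hosts file in the next step; adjust the user and inventory path to your environment):

# Distribute your public SSH key to every node in the inventory
for node in 192.168.1.51 192.168.1.52 192.168.1.53; do ssh-copy-id root@$node; done
# Verify Ansible can reach all nodes over SSH
ansible -i python-hpedockerplugin/ansible_3par_docker_plugin/hosts all -m ping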

+

Step 2: Clone the Github repository

+

$ cd ~
+$ git clone https://github.com/hpe-storage/python-hpedockerplugin
+

+

Step 3: Modify the Ansible hosts file

+

Modify the hosts file to define the Kubernetes/OpenShift Master and Worker nodes. Also define where the HPE etcd cluster will be deployed; this can be done within the cluster or on external servers.

+
$ vi python-hpedockerplugin/ansible_3par_docker_plugin/hosts
+
[masters]
+192.168.1.51
+
+[workers]
+192.168.1.52
+192.168.1.53
+
+[etcd]
+192.168.1.51
+192.168.1.52
+192.168.1.53
+
+

Step 4: Create the properties file

+

Create the properties/plugin_configuration_properties.yml based on your HPE 3PAR/Primera Storage array configuration.

+

$ vi python-hpedockerplugin/ansible_3par_docker_plugin/properties/plugin_configuration_properties.yml
+

+

NOTE: Some of the properties are mandatory and must be specified in the properties file while others are optional.

+

INVENTORY:
+  DEFAULT:
+#Mandatory Parameters--------------------------------------------------------------------------------
+
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - iSCSI
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+    hpe3par_ip: <3par_array_IP>
+    hpe3par_username: <3par_user>
+    hpe3par_password: <3par_password>
+    #Specify the 3PAR port - 8080 default
+    hpe3par_port: 8080
+    hpe3par_cpg: <cpg_name>
+
+    # Plugin version - Required only in DEFAULT backend
+    volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+    # Dory installer version - Required for Openshift/Kubernetes setup
+    # Supported versions are dory_installer_v31, dory_installer_v32
+    dory_installer_version: dory_installer_v32
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+    logging: DEBUG
+    hpe3par_snapcpg: FC_r6
+    #hpe3par_iscsi_chap_enabled: True
+    use_multipath: True
+    #enforce_multipath: False
+    #vlan_tag: True
+
+

+

Available Properties Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PropertyMandatoryDefault ValueDescription
hpedockerplugin_driverYesNo default valueISCSI/FC driver (hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver/hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver)
hpe3par_ipYesNo default valueIP address of 3PAR array
hpe3par_usernameYesNo default value3PAR username
hpe3par_passwordYesNo default value3PAR password
hpe3par_portYes80803PAR HTTP_PORT port
hpe3par_cpgYesNo default valuePrimary user CPG
volume_pluginYesNo default valueName of the docker volume image (only required with DEFAULT backend)
encryptor_keyNoNo default valueEncryption key string for 3PAR password
loggingNoINFOLog level
hpe3par_debugNoNo default value3PAR log level
suppress_requests_ssl_warningNoTrueSuppress request SSL warnings
hpe3par_snapcpgNohpe3par_cpgSnapshot CPG
hpe3par_iscsi_chap_enabledNoFalseISCSI chap toggle
hpe3par_iscsi_ipsNoNo default valueComma separated iscsi port IPs (only required if driver is ISCSI based)
use_multipathNoFalseMultipath toggle
enforce_multipathNoFalseForcefully enforce multipath
ssh_hosts_key_fileNo/root/.ssh/known_hostsPath to hosts key file
quorum_witness_ipNoNo default valueQuorum witness IP
mount_prefixNoNo default valueAlternate mount path prefix
hpe3par_iscsi_ipsNoNo default valueComma separated iscsi IPs. If not provided, all iscsi IPs will be read from the array and populated in hpe.conf
vlan_tagNoFalsePopulates the iscsi_ips which are vlan tagged, only applicable if hpe3par_iscsi_ips is not specified
replication_deviceNoNo default valueReplication backend properties
dory_installer_versionNodory_installer_v32Required for Openshift/Kubernetes setup. Dory installer version, supported versions are dory_installer_v31, dory_installer_v32
hpe3par_server_ip_poolYesNo default valueThis parameter is specific to fileshare. It can be specified as a mix of IP ranges and individual IPs delimited by commas. Each range or individual IP must be followed by the corresponding subnet mask delimited by a semi-colon, e.g.: IP-Range:Subnet-Mask,Individual-IP:SubnetMask
hpe3par_default_fpg_sizeNoNo default valueThis parameter is specific to fileshare. Default FPG size; it must be in the range 1TiB to 64TiB. If not specified here, it defaults to 16TiB
+
+

Hint

+

Refer to Replication Support for details on enabling Replication support.

+
+
File Persona Example Configuration
+

#Mandatory Parameters for Filepersona---------------------------------------------------------------
+  DEFAULT_FILE:
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - File driver
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_file.HPE3PARFileDriver
+    hpe3par_ip: 192.168.2.50
+    hpe3par_username: demo_user
+    hpe3par_password: demo_pass
+    hpe3par_cpg: demo_cpg
+    hpe3par_port: 8080
+    hpe3par_server_ip_pool: 192.168.98.3-192.168.98.10:255.255.192.0 
+#Optional Parameters for Filepersona----------------------------------------------------------------
+    hpe3par_default_fpg_size: 16
+

+
Multiple Backend Example Configuration
+

INVENTORY:
+  DEFAULT:
+#Mandatory Parameters-------------------------------------------------------------------------------
+
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - iSCSI
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+    hpe3par_ip: 192.168.1.50
+    hpe3par_username: 3paradm
+    hpe3par_password: 3pardata
+    hpe3par_port: 8080
+    hpe3par_cpg: FC_r6
+
+    # Plugin version - Required only in DEFAULT backend
+    volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+    # Dory installer version - Required for Openshift/Kubernetes setup
+    # Supported versions are dory_installer_v31, dory_installer_v32
+    dory_installer_version: dory_installer_v32
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+    #ssh_hosts_key_file: '/root/.ssh/known_hosts'
+    logging: DEBUG
+    #hpe3par_debug: True
+    #suppress_requests_ssl_warning: True
+    #hpe3par_snapcpg: FC_r6
+    #hpe3par_iscsi_chap_enabled: True
+    #use_multipath: False
+    #enforce_multipath: False
+    #vlan_tag: True
+
+#Additional Backend (Optional)----------------------------------------------------------------------
+
+  3PAR1:
+#Mandatory Parameters-------------------------------------------------------------------------------
+
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - Fibre Channel
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver
+    hpe3par_ip: 192.168.2.50
+    hpe3par_username: 3paradm
+    hpe3par_password: 3pardata
+    hpe3par_port: 8080
+    hpe3par_cpg: FC_r6
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+    #ssh_hosts_key_file: '/root/.ssh/known_hosts'
+    logging: DEBUG
+    #hpe3par_debug: True
+    #suppress_requests_ssl_warning: True
+    hpe3par_snapcpg: FC_r6
+    #use_multipath: False
+    #enforce_multipath: False
+

+

Step 5: Run the Ansible playbook

+

$ cd python-hpedockerplugin/ansible_3par_docker_plugin/
+$ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml
+

+

Step 6: Verify the installation

+
    +
  • Once the playbook has completed successfully, the PLAY RECAP should look like the output below
  • +
+

The installer should not show any failures and the PLAY RECAP should look like below
+
+PLAY RECAP ***********************************************************************
+<Master1-IP>           : ok=85   changed=33   unreachable=0    failed=0
+<Master2-IP>           : ok=76   changed=29   unreachable=0    failed=0
+<Master3-IP>           : ok=76   changed=29   unreachable=0    failed=0
+<Worker1-IP>           : ok=70   changed=27   unreachable=0    failed=0
+<Worker2-IP>           : ok=70   changed=27   unreachable=0    failed=0
+localhost              : ok=9    changed=3    unreachable=0    failed=0
+

+
    +
  • Verify plugin installation on all nodes.
  • +
+

$ docker ps | grep plugin; ssh <Master2-IP> "docker ps | grep plugin";ssh <Master3-IP> "docker ps | grep plugin";ssh <Worker1-IP> "docker ps | grep plugin";ssh <Worker2-IP> "docker ps | grep plugin"
+51b9d4b1d591        hpestorage/legacyvolumeplugin:3.3.1          "/bin/sh -c ./plugin…"   12 minutes ago      Up 12 minutes         plugin_container
+a43f6d8f5080        hpestorage/legacyvolumeplugin:3.3.1          "/bin/sh -c ./plugin…"   12 minutes ago      Up 12 minutes         plugin_container
+a88af9f46a0d        hpestorage/legacyvolumeplugin:3.3.1          "/bin/sh -c ./plugin…"   12 minutes ago      Up 12 minutes         plugin_container
+5b20f16ab3af        hpestorage/legacyvolumeplugin:3.3.1          "/bin/sh -c ./plugin…"   12 minutes ago      Up 12 minutes         plugin_container
+b0813a22cbd8        hpestorage/legacyvolumeplugin:3.3.1          "/bin/sh -c ./plugin…"   12 minutes ago      Up 12 minutes         plugin_container
+

+
    +
  • Verify the HPE FlexVolume driver Pod is running.
  • +
+

kubectl get pods -n kube-system | grep doryd
+NAME                                            READY   STATUS    RESTARTS   AGE
+kube-storage-controller-doryd-7dd487b446-xr6q2  1/1     Running   0          45s
+

+

Using

+

Get started using the FlexVolume driver by setting up StorageClass and PVC API objects. See Using for examples.

+

These instructions are provided as an example of how to use the HPE 3PAR/Primera Volume Plug-in with an HPE 3PAR/Primera Storage array.

+

The below YAML declarations are meant to be created with kubectl create. Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this:

+

kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+

+
+

Tip

+

Some of the examples supported by the HPE 3PAR/Primera FlexVolume driver are available for HPE 3PAR/Primera Storage in the GitHub repo.

+
+

To get started, create a StorageClass API object referencing the hpe-secret and defining additional (optional) StorageClass parameters:

+

Sample StorageClass

+

Sample storage classes can be found for HPE 3PAR/Primera Storage.

+

Test and verify volume provisioning

+

Create a StorageClass with volume parameters as required. Change the CPG per your requirements.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: sc-gold
+provisioner: hpe.com/hpe
+parameters:
+  provisioning: 'full'
+  cpg: 'SSD_r6'
+  fsOwner: '1001:1001'
+

+

Create a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: sc-gold-pvc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 25Gi
+  storageClassName: sc-gold
+

+

Check that a new PersistentVolume is created based on your claim:

+

$ kubectl get pv
+NAME                                              CAPACITY     ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
+sc-gold-pvc-13336da3-7ca3-11e9-826c-00505692581f  25Gi         RWO            Delete           Bound    default/pvc-gold    sc-gold                 3s
+

+

The above output means that the FlexVolume driver successfully provisioned a new volume and bound the requesting PVC to a new PV. The volume is not attached to any node yet. It will only be attached to a node once a workload referencing it is scheduled to that node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container:

+

kind: Pod
+apiVersion: v1
+metadata:
+  name: pod-nginx
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+    ports:
+    - containerPort: 80
+      name: "http-server"
+    volumeMounts:
+    - name: export
+      mountPath: "/usr/share/nginx/html"
+  volumes:
+    - name: export
+      persistentVolumeClaim:
+        claimName: sc-gold-pvc
+

+

Check if the pod is running successfully:

+

$ kubectl get pod pod-nginx
+NAME          READY   STATUS    RESTARTS   AGE
+pod-nginx     1/1     Running   0          2m29s
+

+

Use case specific examples

+

These StorageClass examples help guide combinations of options when provisioning volumes.

+

Snapshot a volume

+

This StorageClass will create a snapshot of a "production" volume.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: sc-gold-snap-mongo
+provisioner: hpe.com/hpe
+parameters:
+  virtualCopyOf: "sc-mongo-10dc1195-779b-11e9-b787-0050569bb07c"
+

+

Clone a volume

+

This StorageClass will create clones of a "production" volume.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: sc-gold-clone
+provisioner: hpe.com/hpe
+parameters:
+  cloneOf: "sc-gold-2a82c9e5-6213-11e9-8d53-0050569bb07c"
+

+

Replicate a containerized volume

+

This StorageClass will add a standard backend volume to a 3PAR Replication Group. If the replicationGroup specified does not exist, the plugin will create one. See Replication Support for more details on configuring replication.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: sc-mongodb-replicated
+provisioner: hpe.com/hpe
+parameters:
+  provisioning: 'full'
+  replicationGroup: 'mongodb-app1'
+

+

Import (cutover) a volume

+

This StorageClass will import an existing 3PAR/Primera volume to Kubernetes. The source volume needs to be offline for the import to succeed.

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: import-clone-legacy-prod
+provisioner: hpe.com/hpe
+parameters:
+  importVol: "production-db-vol"
+

+

Using overrides

+

The HPE Dynamic Provisioner for Kubernetes (doryd) understands a set of annotation keys a user can set on a PVC. If the corresponding keys exist in the list of the allowOverrides key in the StorageClass, the end-user can tweak certain aspects of the provisioning workflow. This opens up very advanced data services.

+

StorageClass object:

+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: sc-gold
+provisioner: hpe.com/hpe
+parameters:
+  provisioning: 'full'
+  cpg: 'SSD_r6'
+  fsOwner: '1001:1001'
+  allowOverrides: provisioning,compression,cpg,fsOwner
+

+

PersistentVolumeClaim object:

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+ annotations:
+    hpe.com/provisioning: "thin"
+    hpe.com/cpg: "FC_r6"
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 25Gi
+  storageClassName: sc-gold
+

+

This will create a PV thinly provisioned using the FC_r6 CPG.

+
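To confirm that the overrides took effect, a quick check could look like the following sketch (my-pvc is the claim created above):

$ kubectl get pvc my-pvc
$ kubectl describe pv $(kubectl get pvc my-pvc -o jsonpath='{.spec.volumeName}')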

Upgrade

+

In order to upgrade the driver, edit the ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml used for the initial deployment and change the hpestorage/legacyvolumeplugin image tag in volume_plugin to the latest image from Docker Hub.

+

For example:

+

    volume_plugin: hpestorage/legacyvolumeplugin:3.3
+
+    Change to:
+    volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+

+

Re-run the installer.

+

$ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml
+

+

Uninstall

+

Run the following to uninstall the FlexVolume driver from the cluster.

+

$ cd ~
+$ cd python-hpedockerplugin/ansible_3par_docker_plugin
+$ ansible-playbook -i hosts uninstall/uninstall_hpe_3par_volume_driver.yml
+

+

StorageClass parameters

+

This section highlights all the StorageClass parameters supported by the HPE 3PAR/Primera FlexVolume driver.

+

HPE 3PAR/Primera Storage StorageClass parameters

+

A StorageClass is used to provision or clone an HPE 3PAR/Primera Storage-backed persistent volume. It can also be used to import an existing HPE 3PAR/Primera Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows.

+

A sample StorageClass is provided.

+
+

Note

+

These are optional parameters.

+
+

Common parameters for Provisioning and Cloning

+

These parameters are mutable between a parent volume and a clone created from a snapshot. An illustrative StorageClass combining several of them follows the table below.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterTypeOptionsExample
sizeInteger-size: "10"
provisioningthin, full, dedupeprovisioning: "thin"
flash-cacheTexttrue, falseflash-cache: "true"
compressionbooleantrue, falsecompression: "true"
MountConflictDelayInteger-MountConflictDelay: "30"
qos-nameTextvvset nameqos-name: ""
replicationGroupText3PAR RCG namereplicationGroup: "Test-RCG"
fsOwneruserId:groupIdThe user id and group id that should own the root directory of the filesystem.
fsModeOctal digits1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem.
+
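As an illustration only (not an official sample), a StorageClass combining several of the common parameters above could be pasted in with kubectl; the CPG, ownership and delay values are placeholders for your environment:

kubectl create -f- <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold-compressed
provisioner: hpe.com/hpe
parameters:
  provisioning: 'thin'
  compression: 'true'
  cpg: 'SSD_r6'
  fsOwner: '1001:1001'
  MountConflictDelay: '30'
EOF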

Cloning/Snapshot parameters

+

Either use cloneOf and reference a PVC in the current namespace, or use virtualCopyOf and reference a 3PAR/Primera volume name to snapshot/clone and import into Kubernetes. A combined example follows the table below.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterTypeOptionsExample
cloneOfTextvolume namecloneOf: "<volume_name>"
virtualCopyOfTextvolume namevirtualCopyOf: "<volume_name>"
expirationHoursIntegeroption of virtualCopyOfexpirationHours: "10"
retentionHoursIntegeroption of virtualCopyOfretentionHours: "10"
+
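As an example only, a snapshot StorageClass that also sets expiration and retention hours might look like this (the source volume name is a placeholder):

kubectl create -f- <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-snap-expiring
provisioner: hpe.com/hpe
parameters:
  virtualCopyOf: "<volume_name>"
  expirationHours: "10"
  retentionHours: "10"
EOF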

Import parameters

+

Importing volumes to Kubernetes requires the source 3PAR/Primera volume to be offline.

+ + + + + + + + + + + + + + + + + +
ParameterTypeDescriptionExample
importVolTextvolume nameimportVol: "<volume_name>"
+

Replication Support

+

The HPE 3PAR/Primera FlexVolume driver supports array-based synchronous and asynchronous replication. In order to enable replication within the FlexVolume driver, the arrays need to be properly zoned, visible to the Kubernetes cluster, and have replication configured. For Peer Persistence, a quorum witness will need to be configured.

+

Once the replication is enabled at the array level, the FlexVolume driver will need to be configured.

+
+

Important

+

Replication support can be enabled during initial deployment through the plugin configuration file. In order to enable replication support post deployment, modify the plugin_configuration_properties.yml used for deployment, add the replication parameter section below, and re-run the Ansible installer.

+
+

Edit the plugin_configuration_properties.yml file and modify the Optional Replication Section.

+

INVENTORY:
+  DEFAULT:
+#Mandatory Parameters-------------------------------------------------------------------------------
+
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - iSCSI
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+    hpe3par_ip: <local_3par_ip>
+    hpe3par_username: <local_3par_user>
+    hpe3par_password: <local_3par_password>
+    hpe3par_port: 8080
+    hpe3par_cpg: FC_r6
+
+    # Plugin version - Required only in DEFAULT backend
+    volume_plugin: hpestorage/legacyvolumeplugin:3.3.1
+    # Dory installer version - Required for Openshift/Kubernetes setup
+    dory_installer_version: dory_installer_v32
+
+#Optional Parameters--------------------------------------------------------------------------------
+
+    logging: DEBUG
+    hpe3par_snapcpg: FC_r6
+    use_multipath: False
+    enforce_multipath: False
+
+#Optional Replication Parameters--------------------------------------------------------------------
+    replication_device:
+      backend_id: remote_3PAR
+      #Quorum Witness required for Peer Persistence only
+      #quorum_witness_ip: <quorum_witness_ip>
+      replication_mode: synchronous
+      cpg_map: "local_CPG:remote_CPG"
+      snap_cpg_map: "local_copy_CPG:remote_copy_CPG"
+      hpe3par_ip: <remote_3par_ip>
+      hpe3par_username: <remote_3par_user>
+      hpe3par_password: <remote_3par_password>
+      hpe3par_port: 8080
+      #vlan_tag: False
+

+

Once the properties file is configured, you can proceed with the standard installation steps.

+

Diagnostics

+

This section outlines a few troubleshooting steps for the HPE 3PAR/Primera FlexVolume driver. This product is supported by HPE; please consult with your support organization prior to attempting any configuration changes.

+

Troubleshooting FlexVolume driver

+

The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach ("MUAD") operations as workloads request storage resources. The binary communicates with a socket on the host, where the volume plugin responsible for the MUAD operations performs control-plane or data-plane operations against the backend system hosting the actual volumes.

+

Locations

+

The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking.

+

The name and location of the binary vary based on the Kubernetes distribution (the default 'exec' path) and which backend driver is being used. In a typical scenario using 3PAR/Primera, the following is expected; both locations can be checked on a node as sketched after the list:

+
    +
  • Binary: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe
  • +
  • Config file: /etc/hpedockerplugin/hpe.conf
  • +
+
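A simple sketch to verify both locations exist on a node:

$ ls -l /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe
$ cat /etc/hpedockerplugin/hpe.conf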

Connectivity

+

To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request:

+

/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe mount no/op '{"name":"myvol1"}'
+

+

If the FlexVolume driver can successfully communicate with the volume plugin socket:

+

{"status":"Failure","message":"configured to NOT create volumes"}
+

+

In the case of any other output, check if the backend volume plugin is alive:

+

$ docker volume create -d hpe -o help=backends
+

+

It should output:

+

=================================
+NAME                     STATUS
+=================================
+DEFAULT                   OK
+

+

ETCD

+

To verify the etcd members on the nodes, run:

+

$ /usr/bin/etcdctl --endpoints http://<Master1-IP>:23790 member list 
+

+

It should output:

+

b70ca254f54dd23: name=<Worker2-IP> peerURLs=http://<Worker2-IP>:23800 clientURLs=http://<Worker2-IP>:23790 isLeader=true
+236bf7d5cc7a32d4: name=<Worker1-IP> peerURLs=http://<Worker1-IP>:23800 clientURLs=http://<Worker1-IP>:23790 isLeader=false
+445e80419ae8729b: name=<Master1-IP> peerURLs=http://<Master1-IP>:23800 clientURLs=http://<Master1-IP>:23790 isLeader=false
+e340a5833e93861e: name=<Master3-IP> peerURLs=http://<Master3-IP>:23800 clientURLs=http://<Master3-IP>:23790 isLeader=false
+f5b5599d719d376e: name=<Master2-IP> peerURLs=http://<Master2-IP>:23800 clientURLs=http://<Master2-IP>:23790 isLeader=false
+

+

HPE 3PAR/Primera FlexVolume and Dynamic Provisioner driver (doryd) logs

+

The HPE 3PAR/Primera FlexVolume driver logs data to the standard output stream. If the logs need to be retained long term, use a standard logging solution. Some of the logs on the host are persisted and follow standard logrotate policies.

+

HPE 3PAR/Primera FlexVolume logs: (per node)

+

$ docker logs -f plugin_container
+

+

Dynamic Provisioner logs:

+

kubectl logs -f kube-storage-controller-doryd -n kube-system
+

+

The logs are persisted at /var/log/hpe-dynamic-provisioner.log

+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/flexvolume_driver/index.html b/flexvolume_driver/index.html new file mode 100644 index 00000000..9b4d7c42 --- /dev/null +++ b/flexvolume_driver/index.html @@ -0,0 +1,242 @@ + + + + + + + + + + + + + + + + + + Index - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Index
  • +
  • +
  • +
+
+
+
+
+ +
+

Expired content

+

The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within.

+
+

Legacy FlexVolume drivers

+ + +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/google9fccf50b5a142848.html b/google9fccf50b5a142848.html new file mode 100644 index 00000000..929a6c5b --- /dev/null +++ b/google9fccf50b5a142848.html @@ -0,0 +1 @@ +google-site-verification: google9fccf50b5a142848.html \ No newline at end of file diff --git a/img/csi-driver-overview.png b/img/csi-driver-overview.png new file mode 100644 index 00000000..63aed579 Binary files /dev/null and b/img/csi-driver-overview.png differ diff --git a/img/favicon.ico b/img/favicon.ico new file mode 100644 index 00000000..c78080c9 Binary files /dev/null and b/img/favicon.ico differ diff --git a/img/hpe-dev-grommet-gremlin-rockin-static.svg b/img/hpe-dev-grommet-gremlin-rockin-static.svg new file mode 100644 index 00000000..468dc5f8 --- /dev/null +++ b/img/hpe-dev-grommet-gremlin-rockin-static.svg @@ -0,0 +1,45 @@ + + + + grommet-gremlin + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/img/hpe-dev-logo-mark-anim.svg b/img/hpe-dev-logo-mark-anim.svg new file mode 100644 index 00000000..6882f931 --- /dev/null +++ b/img/hpe-dev-logo-mark-anim.svg @@ -0,0 +1,51 @@ + + + + + + + + + + + + + + + + diff --git a/img/hpe-social-og-image01.jpg b/img/hpe-social-og-image01.jpg new file mode 100644 index 00000000..d609ffbd Binary files /dev/null and b/img/hpe-social-og-image01.jpg differ diff --git a/index.html b/index.html new file mode 100644 index 00000000..6047e29d --- /dev/null +++ b/index.html @@ -0,0 +1,259 @@ + + + + + + + + + + + + + + + + + + SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Home
  • +
  • +
  • +
+
+
+
+
+ + +

HPE Storage Container Orchestrator Documentation

+

This is an umbrella documentation project for the HPE CSI Driver for Kubernetes and neighboring ecosystems for HPE primary storage including HPE Alletra Storage MP, Alletra 9000, Alletra 5000/6000, Nimble Storage, Primera and 3PAR storage systems. The documentation is tailored for IT Ops, developers and technology partners.

+

HPE CSI Driver for Kubernetes

+

Use the navigation on the left-hand side to explore the different topics. Feel free to contribute to this project but please read the contributing guidelines.

+

Use the navigation to the left. Not sure what you're looking for? → Get started!

+ + +
+

Did you know?

+

SCOD is "docs" in reverse?

+
+
+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + + diff --git a/js/html5shiv.min.js b/js/html5shiv.min.js new file mode 100644 index 00000000..1a01c94b --- /dev/null +++ b/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var k,l,m="3.7.3",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); diff --git a/js/jquery-3.6.0.min.js b/js/jquery-3.6.0.min.js new file mode 100644 index 00000000..c4c6022f --- /dev/null +++ b/js/jquery-3.6.0.min.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return 
H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var 
t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof 
e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return 
this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t + + + + + + + + + + + + + + + + + Overview - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Overview
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

Welcome to the "101" section of SCOD. The goal of this section is to create a learning resource for individuals who want to learn about emerging topics in a cloud native world where containers are the focal point. The content is slightly biased towards storage.

+

Mission Statement

+

We aim to provide a learning resource collection that is generic enough to capture the nuances of the different solutions and paradigms. Hewlett Packard Enterprise products are likely to be referenced in some examples and resources, so we cannot claim vendor neutrality or a completely impartial ("Switzerland") opinion. External resources are the primary learning assets used to frame certain topics.

+

Let's start the learning journey.

+ +

Cloud Native Computing

+

The term "cloud native" stems from a software development model where resources are consumed as services. Compute, network and storage consumed through APIs, CLIs and web administration interfaces. Consumption is often modeled around paying only for what is being used.

+

The applications deployed into Cloud Native Computing environments are often divided into small chunks that are operated independently, referred to as microservices. On the rise is a broader adoption of a concept called serverless, where your application runs only when called and is billed in milliseconds.

+

Many public cloud vendors provide cloud native applications as services on their respective clouds. An example would be consuming a SQL database as a service rather than deploying and managing it yourself.

+

Key Attributes

+

These are some of the key elements of Cloud Native Computing.

+
    +
  • Resources are provisioned through complete self-service.
  • +
  • API first strategies to promote interoperability and collaboration.
  • +
  • Separation of concerns in microservice architectures.
  • +
  • High degree of automation of resource provisioning and deprovisioning.
  • +
  • Modern languages and frameworks.
  • +
  • Infrastructure-as-a-Service (IaaS)
  • +
+

Learning Resources

+

Curated list of learning resources for Cloud Native Computing.

+ +

Practical Exercises

+

How to get hands-on experience of Cloud Native Computing.

+
    +
  • Sign-up on any of the public clouds.
      +
    • Provision an instance and get remote access to the host OS of the instance.
    • +
    • Deploy an "as-a-service" of an application technology you're familiar with.
    • +
    • Connect a client from your instance to your provisioned service.
    • +
    • Deploy either a web server or a Layer-4 load-balancer to give external access to your client application.
    • +
    +
  • +
+

Cloud Native Tooling

+

Tools to interact with infrastructure and applications come in many shapes and forms. A common pattern is to learn by visually creating and deleting resources to understand an end-state. Once a pattern has been established, APIs, 3rd party tools or a custom CLI are used to manage the life-cycle of the deployment in a declarative manner by manipulating RESTful APIs. This is also known as Infrastructure-as-Code.

+

Key Attributes

+

These are some of the key elements of Cloud Native Computing Tooling.

+
    +
  • State stored in a Source Code Control System (SCCS).
  • +
  • Changes made to state are peer reviewed and automatically tested in non-production environments before being merged and deployed.
  • +
  • Industry standard IT automation tools are often used to implement changes. Ansible, Puppet, Salt and Chef are example tools.
  • +
  • Public clouds often provide CLIs to manage resources. These are great to prepare, inspect and test deployments with (see the sketch after this list).
  • +
  • Configuration and deployment files are often written in a human and machine readable format, such as JSON, YAML or TOML.
  • +
+
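As a small sketch (assuming the AWS CLI is installed and configured; any public cloud CLI works similarly), inspecting resources before describing them declaratively might look like this:

$ aws ec2 describe-instances --output table               # inspect what is currently deployed
$ aws ec2 describe-volumes --output json > volumes.json   # capture state for review or version control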

Learning Resources

+

Curated list of learning resources for Cloud Native Computing Tooling.

+ +

Practical Exercises

+

How to get hands-on experience of Cloud Native Computing Tooling.

+
    +
  • Sign-up on AWS. +
  • +
+

Cloud Native Storage

+

Storage for cloud computing comes in many shapes and forms. Compute instances boot off block devices provided by the IaaS through the hypervisor. More devices may be attached for application data to keep the host OS and application separate. Most clouds allow these devices to be snapshotted, cloned and reattached to other instances. These block devices are normally offered with different backend media, such as flash or spinning disks. Depending on the use case and budget, parameters may be tuned to be just right.

+

For unstructured workloads, API-driven object storage is the dominant technology due to the dramatic difference in cost and simplicity versus cloud-provided block storage. An object is uploaded through an API endpoint with HTTP and automatically distributed (highly configurable) to provide high durability. The URL of the object will remain static for the duration of its lifetime. The main inhibitor to object storage adoption is that existing applications relying on POSIX filesystems need to be rewritten.

+

Key Attributes

+

These are some of the key elements of Cloud Native Storage.

+
    +
  • Provisioned and attached via APIs through IaaS if using block storage.
  • +
  • Data and metadata is managed with RESTful APIs if using object. No backend to manage. Consumers use standard URLs to retrieve data.
  • +
  • Highly durable with object storage. Durability equal to a local RAID device for block storage.
  • +
  • Some cloud providers offer Filesystem-as-a-Service, normally standard NFS or CIFS.
  • +
  • Backup and recovery of application data still needs to be managed like traditional storage for block. Object storage offers multi-region, multi-copy persistence.
  • +
+

Learning Resources

+

Curated list of learning resources for Cloud Native Storage.

+ +

Practical Exercises

+

How to get hands-on experience of Cloud Native Storage.

+
    +
  • Setup a S3 compatible object storage server or use a public cloud.
      +
    • Scality has an open source S3 server for non-production use.
    • +
    • Configure s3cmd to upload and retrieve files from a bucket (see the sketch after this list).
    • +
    +
  • +
  • Analyze the cost of 100TB of data for one year on Amazon S3 vs. Azure Managed Disks.
  • +
+
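A minimal s3cmd session might look like the following sketch (the bucket name is a placeholder; credentials and endpoint are set interactively):

$ s3cmd --configure                              # enter access key, secret key and endpoint
$ s3cmd mb s3://my-learning-bucket               # create a bucket
$ s3cmd put myfile.txt s3://my-learning-bucket/  # upload a file
$ s3cmd get s3://my-learning-bucket/myfile.txt restored.txt
$ s3cmd ls s3://my-learning-bucket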

Containers Intro

+

A container is operating system-level virtualization and has been around for quite some time. By definition, containers share the kernel of the host and rely on certain abstractions to be useful. Docker, the company, made the technology approachable and incredibly more convenient than any predecessor. In its simplest form, a container image contains a virtual filesystem that holds only the dependencies the application needs. An example would be to include the Python interpreter if you wrote a program in Python.

+

Containerized applications are primarily designed to run headless. In most cases these applications need to communicate with the outside world or allow inbound traffic, depending on the application. Docker containers should be treated as transient: each instance starts in a known state, and any data stored inside the virtual filesystem should be treated as ephemeral. This makes it extremely easy and convenient to upgrade and roll back a container.

+

If data is required to persist between upgrades and rollbacks of the container, it needs to be stored outside of the container and mapped in from the host operating system.

+

The wide adoption of containers is because they're lightweight, reproducible and run everywhere. Iterations of software delivery lifecycles may be cut from weeks down to seconds with the right processes and tools.

+

Container images are layered, with one layer per change made when the image is built. Each layer has a cryptographic hash and the layer itself can be shared read-only between multiple containers. When a new container is started from an image, the container runtime creates a COW (copy-on-write) filesystem where that particular container's data is stored. This is in turn very effective as you only need one copy of a layer on the host. For example, if a number of applications are based off an Ubuntu base image, the base image only needs to be stored once on the host.
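You can inspect these layers yourself. For example, docker history lists every layer of an image together with the instruction that created it and its size (the image name is only an example):

docker pull nginx
docker history nginx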

+

Key Attributes

+

These are some of the key elements of Containers.

+
    +
  • Runs on modern architectures and operating systems. Not necessarily as a single source image.
  • +
  • Headless services (webservers, databases, etc.) in microservice architectures.
  • +
  • Often orchestrated on compute clusters like Kubernetes, Apache Mesos Marathon or Docker Swarm.
  • +
  • Software vendors often provide official and well tested container images for their applications.
  • +
+

Learning Resources

+

Curated list of learning resources for Containers.

+ +

Practical Exercises

+

How to get hands-on experience of Containers.

+
    +
  • Install Docker Desktop or just Docker if using Linux.
      +
    • Click through the Get Started tutorial.
    • +
    • Advanced: Run any of the images built in the tutorial on a public cloud service.
    • +
    +
  • +
+

Container Tooling

+

Most of the tooling around containers is centered around which particular container orchestrator or development environment is being utilized. Usage of the tools differs greatly depending on the role of the user. For an operator, the toolkit includes both the IaaS and managing the platform to perform upgrades, user management and peripheral services such as storage and ingress load balancers.

+

While many popular platforms today are based on Kubernetes, the tooling has nuances. Upstream Kubernetes uses kubectl, Red Hat OpenShift uses the OpenShift CLI, oc. With other platforms such as Rancher, nearly all management can be done through a web UI.

+

Key Attributes

+

These are some of the key elements of Container Tooling.

+
    +
  • Most tools are simple, yet powerful and follow UNIX principles of doing one thing and doing it well.
  • +
  • The docker and kubectl CLIs are the two most dominant for low level management.
  • +
  • Workload management usually relies on external tools for simplicity, such as docker-compose, kompose and helm.
  • +
  • Some platforms have ancillary tools to marry the IaaS with the cluster orchestrator. Examples include rke for Rancher and gkectl for GKE On-Prem.
  • +
  • The public clouds have built container orchestration and container management into their native CLIs, such as aws and gcloud.
  • +
  • Client side tools normally rely on environment variables and user environment configuration files that store credentials, API endpoint locations and other security aspects.
  • +
+

Learning Resources

+

Curated list of learning resources for Container Tooling.

+ +

Practical Exercises

+

How to get hands-on experience of Container Tooling.

+
    +
  • Install Docker Desktop or just Docker if using Linux.
      +
    • Build a container image of an application you understand (docker build).
    • +
    • Run the container image locally (docker run).
    • +
    • Ship it to Docker Hub (docker push).
    • +
    +
  • +
  • Create an Amazon EKS cluster or equivalent.
      +
    • Retrieve the kubeconfig file.
    • +
    • Run kubectl get nodes on your local machine.
    • +
    • Start a Pod using the container image built in previous exercise.
    • +
    +
  • +
+

Container Storage

+

Due to the ephemeral nature of a container, storage is predominantly served from the host the container is running on, and which container runtime is being used dictates where that data is stored. In the case of Docker, the overlay filesystems are under /var/lib/docker. If a certain path inside the container needs to persist between upgrades, restarts on a different host or any other operation that will lose the locality of the data, the mount point needs to be replaced with a "bind" mount from the host.
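A minimal sketch of such a bind mount with Docker (the paths and image are illustrative, not a recommendation): the host directory is mapped over the path the application writes to, so the data survives the container being removed and recreated:

# Host directory that should outlive the container
mkdir -p /srv/pgdata
# Bind mount it over the data directory inside the container
docker run -d --name db -e POSTGRES_PASSWORD=example -v /srv/pgdata:/var/lib/postgresql/data postgres:16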

+

There are also container runtime technologies that are designed to persist the entire container, effectively treating the container more like a long-lived Virtual Machine. Examples are Canonical LXD, WeaveWorks Footloose and HPE BlueData. This is particularly important for applications that rely on their projected node info remaining static throughout their entire lifecycle.

+

We can then begin to categorize containers into three main groups based on their lifecycle and persistence needs.

+
    +
  • Stateless Containers
    + No persistence needed across restarts/upgrades/rollbacks
  • +
  • Stateful Containers
    + Require certain mountpoints to persist across restarts/upgrades/rollbacks
  • +
  • Persistent Containers
    + Require static node identity information across restarts/upgrades/rollbacks
  • +
+

Some modern software-defined storage solutions are offered to run alongside applications in a distributed fashion, effectively enforcing multi-way replicas for reliability while eating into the CPU and memory resources of the IaaS bill. This also introduces the dilemma of effectively locking the data into the container orchestrator and its compute nodes, although it's convenient for developers to become their own storage administrators.

+

To stay in control of the data and remain mobile, storing data outside of the container orchestrator is preferable. Many container orchestrators provide plugins for external storage; some are built in and some are supplied and supported by the storage vendor. Public clouds provide storage drivers for their IaaS storage services directly to the container orchestrator. This is a widely popular pattern we're also seeing in BYO IaaS solutions such as VMware vSphere.

+

Key Attributes

+

These are some of the key elements of Container Storage.

+
    +
  • Ephemeral storage needs to be fast and expandable as environments scale with more diverse applications.
  • +
  • Data for stateful containers is ideally stored outside of the container orchestrator, either the IaaS or external highly-available storage.
  • +
  • Persistent containers require a niche storage solution tightly coupled with the container runtime and the container orchestrator or scheduler.
  • +
  • Most storage solutions provide an "access mode" often referred to as ReadWriteOnce (RWO), which only allows one Pod (in the Kubernetes case) or containers from the same host to access the volume. To allow multiple Pods and containers from multiple hosts, a distributed filesystem or an NFS server (widely adopted) is required to provide ReadWriteMany (RWX) access, as shown in the sketch after this list.
  • +
+
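As a sketch of how the access mode surfaces in Kubernetes (the claim name and size are arbitrary, and a suitable StorageClass in your cluster is assumed), the only difference between a single-node and a shared claim is the accessModes field; ReadWriteMany additionally requires a backend capable of serving the same volume to multiple nodes:

kubectl create -f- <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany    # use ReadWriteOnce for single-node block volumes
  resources:
    requests:
      storage: 10Gi
EOF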

Learning Resources

+

Curated list of learning resources for Container Storage.

+ +

Practical Exercises

+

How to get hands-on experience of Container Storage.

+
    +
  • Use Docker Desktop.
      +
    • Replace a mount point in an interactive container with a mount point from the host
    • +
    +
  • +
  • Deploy an Amazon EKS or equivalent cluster.
      +
    • Create a Persistent Volume Claim.
    • +
    +
  • +
  • Run kubectl get pv -o yaml and match the Persistent Volume against the IaaS block volumes.
  • +
+

DevOps

+

There are many interpretations of what DevOps "is". A bird's eye view is that there are people, processes and tools that come together to drive business outcomes through value streams. There are many core principles that could ultimately drive the outcome and no cookie cutter solution for any given organization. Breaking down problems into small pieces and creating safe systems to work in and eliminate toil are some of those principles.

+

Agile development and lean manufacturing are both predecessors and role models for driving DevOps principles.

+

Key Attributes

+

These are some of the key elements of DevOps.

+
    +
  • Well-defined processes, safe and proportionally sized work units for each step in a value stream.
  • +
  • Autonomy through tooling backed by well-defined processes.
  • +
  • Operations, development and stakeholders unified behind common goals.
  • +
  • Continuous improvement, robust feedback loops and problem "swarming" of value streams.
  • +
  • All work in the value stream must be visible and measurable.
  • +
  • Buy-in from the CEO on down to prevent failure of DevOps implementations.
  • +
  • DevOps is essential to succeeding when investing in a "digital transformation".
  • +
+

Learning Resources

+

Curated list of learning resources for DevOps.

+ +

Practical Exercises

+

How to get hands-on experience of DevOps.

+
    +
  • Getting practical with DevOps requires an organization and a value stream. +
  • +
+

DevOps Tooling

+

The tools in DevOps are centered around the processes and value streams that support the business. Said tools also promote visibility, openness and collaboration while inherently following security patterns, audit trails and safety. No one person should be able to misuse one tool to cause a major disturbance in a value stream without quick remediation plans.

+

Many times CI/CD (Continuous Integration, Continuous Delivery and/or Deployment) is considered synonymous with DevOps. That is both right and wrong. If the value stream inherently contains software, yes.

+

Key Attributes

+

These are some of the key elements of DevOps Tooling.

+
    +
  • Just the right amount of privileges for a particular task.
  • +
  • Issue/project tracking, kanban, source code control, CI/CD, logging and reporting are essential.
  • +
  • Visibility and traceability are key elements; no work should be hidden, whether done by a person or a machine.
  • +
+

Learning Resources

+

Curated list of learning resources for DevOps Tooling.

+ +

Practical Exercises

+

How to get hands-on experience of DevOps Tooling.

+ +

The common denominator across these platforms is the observability and the ability to limit the scope of controls through Role-based Access Control (RBAC), ensuring the tasks are well-defined, automated, scoped and safe to operate.

+

DevOps Storage

+

There aren't any particular storage paradigms (file/block/object) that are associated with DevOps. It's the implementation of the application and how it consumes storage that we may vaguely associate with DevOps. It's more the practice that the right security controls are in place and that whoever needs a storage resource is fully self-serviced, human or machine.

+

Key Attributes

+

These are some of the key elements of DevOps Storage.

+
    +
  • API driven through RBAC, ensuring automation may be put in place for the endpoint or person that needs access to the resource (see the sketch after this list).
  • +
  • Rich data management. If a value stream only needs a low-performing, read-only view of a certain dataset, resources supporting the value stream should only have read-only access with performance constraints.
  • +
  • Agile and mobile. At will, data should be made available to a certain application or resource for its purpose, whether it's in the public cloud, on-prem or as-a-service, through safe and secure automation.
  • +
+
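As a hedged sketch of what "API driven" can look like in practice (the endpoint, token variable and payload below are entirely hypothetical and will differ per storage platform), a scoped service account provisions a volume over REST instead of filing a ticket:

curl -s -X POST https://array.example.com/api/v1/volumes \
  -H "Authorization: Bearer $STORAGE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "ci-scratch-01", "size_gib": 100}'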

Learning Resources

+

Curated list of learning resources for DevOps Storage.

+ +

Practical Exercises

+

How to get hands-on experience of DevOps Storage.

+
    +
  • Familiarize yourself with a storage system's RESTful API and automation capabilities.
      +
    • Deploy an Ansible Tower trial.
    • +
    • Write an Ansible playbook that creates a storage resource on said system.
    • +
    • Create a job in Ansible Tower with the playbook and make it available to a restricted user.
    • +
    +
  • +
+

Summary

+

If you have any suggestions or comments, head over to GitHub and file a PR or leave an issue.

+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/learn/csi_primitives/index.html b/learn/csi_primitives/index.html new file mode 100644 index 00000000..3c32e794 --- /dev/null +++ b/learn/csi_primitives/index.html @@ -0,0 +1,354 @@ + + + + + + + + + + + + + + + + + + Introduction to CSI Primitives - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEARN »
  • +
  • Introduction to CSI Primitives
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

This tutorial was presented at KubeCon North America 2020 Virtual. Content is relevant up to Kubernetes 1.19.

+

Presentation

+ +

Watch on YouTube

+

Hands-on Labs

+

These are the Asciinema cast files used in the demo. If there's something in the demo you're particularly interested in, copy the text content from these embedded players.

+

Lab 1: Install a CSI driver

+ + + +

Lab 2: Dynamic provisioning

+ + + +

Lab 3: Deploy a StatefulSet

+ + + +

Lab 4: Create VolumeSnapshots

+ + + +

Lab 5: Clone from VolumeSnapshots

+ + + +

Lab 6: Clone from PVC

+ + + +

Lab 7: Restore from VolumeSnapshots

+ + + +

Lab 8: Using Raw Block Storage

+ + + +

Lab 9: Install Rook to leverage Raw Block Storage

+ + + +

Lab 10: Using Ephemeral Local Volumes

+ + + +

Lab 11: Using Generic Ephemeral Volumes

+ + + +

Additional resources

+

Source files for the Asciinema cast files and slide deck are available on GitHub.

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/learn/csi_workshop/img/hack-shack.png b/learn/csi_workshop/img/hack-shack.png new file mode 100644 index 00000000..6e0d64c3 Binary files /dev/null and b/learn/csi_workshop/img/hack-shack.png differ diff --git a/learn/csi_workshop/index.html b/learn/csi_workshop/index.html new file mode 100644 index 00000000..8207a453 --- /dev/null +++ b/learn/csi_workshop/index.html @@ -0,0 +1,269 @@ + + + + + + + + + + + + + + + + + + Interactive CSI Workshop - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEARN »
  • +
  • Interactive CSI Workshop
  • +
  • +
  • +
+
+
+
+
+ + +

+

Welcome to the Hack Shack!

+

The recorded CSI workshop found in the Video Gallery is now also offered on-demand, as a self-paced and interactive workshop hosted by the HPE Developer Community.

+

All you have to do is register here.

+

A string of e-mails will set up your own sandbox to perform the exercises at your own pace. The environment will have a time restriction before resetting but you should have plenty of time to complete the workshop exercises.

+

During the workshop, you'll discover the basics of the Container Storage Interface (CSI) on Kubernetes. Here is a glance at what is being covered:

+
    +
  • Discover StorageClasses
  • +
  • Create and assign a PersistentVolumeClaim to a workload
  • +
  • Resize a PersistentVolumeClaim
  • +
  • Expose a raw block device to a Pod
  • +
  • Create a VolumeSnapshot from a VolumeSnapshotClass
  • +
  • Clone PersistentVolumeClaims from an existing claim or a VolumeSnapshot
  • +
  • Declare an ephemeral inline volume for a Pod
  • +
  • Annotate PersistentVolumeClaims to leverage StorageClass overrides
  • +
  • Transparently provision an NFS server with the HPE CSI Driver and use the ReadWriteMany access mode
  • +
+

When completed, please fill out the survey and let us know how we did!

+

Happy Hacking!

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/learn/introduction_to_containers/index.html b/learn/introduction_to_containers/index.html new file mode 100644 index 00000000..d3306859 --- /dev/null +++ b/learn/introduction_to_containers/index.html @@ -0,0 +1,293 @@ + + + + + + + + + + + + + + + + + + For HPE partners:<br />   Introduction to Containers - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEARN »
  • +
  • For HPE partners:
       Introduction to Containers
  • +
  • +
  • +
+
+
+
+
+ +

Interactive learning path

+

The Storage Education team at HPE has put together an interactive learning path to introduce field engineers, architects and account executives to Docker and Kubernetes. The course material has an angle to help understand the role of storage in the world of containers. It's a great starting point if you're new to containers.

+

Courses 2-4 contain interactive labs in an immersive environment with downloadable lab guides that can be used outside of the lab environment.

+

It's recommended to take the courses in order.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
     Audience              Course name                          Duration (estimated)
 1   AE and SA             Containers and market opportunity    20 minutes
 2   AE and SA             Introduction to containers           30 minutes
 3   Technical AE and SA   Introduction to Docker               45 minutes
 4   Technical AE and SA   Introduction to Kubernetes           45 minutes
+
+

Important

+

All courses require an HPE Passport account, either partner or employee.

+
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/learn/persistent_storage/img/cluster.png b/learn/persistent_storage/img/cluster.png new file mode 100644 index 00000000..abe65cef Binary files /dev/null and b/learn/persistent_storage/img/cluster.png differ diff --git a/learn/persistent_storage/img/container.png b/learn/persistent_storage/img/container.png new file mode 100644 index 00000000..5629124a Binary files /dev/null and b/learn/persistent_storage/img/container.png differ diff --git a/learn/persistent_storage/img/dashboard.png b/learn/persistent_storage/img/dashboard.png new file mode 100644 index 00000000..508f2f99 Binary files /dev/null and b/learn/persistent_storage/img/dashboard.png differ diff --git a/learn/persistent_storage/img/dashboard_success.png b/learn/persistent_storage/img/dashboard_success.png new file mode 100644 index 00000000..df0cb01b Binary files /dev/null and b/learn/persistent_storage/img/dashboard_success.png differ diff --git a/learn/persistent_storage/img/kubernetes_cluster.png b/learn/persistent_storage/img/kubernetes_cluster.png new file mode 100644 index 00000000..771fce7d Binary files /dev/null and b/learn/persistent_storage/img/kubernetes_cluster.png differ diff --git a/learn/persistent_storage/img/master.png b/learn/persistent_storage/img/master.png new file mode 100644 index 00000000..fa84323b Binary files /dev/null and b/learn/persistent_storage/img/master.png differ diff --git a/learn/persistent_storage/img/namespaces.png b/learn/persistent_storage/img/namespaces.png new file mode 100644 index 00000000..73366d4a Binary files /dev/null and b/learn/persistent_storage/img/namespaces.png differ diff --git a/learn/persistent_storage/img/node.png b/learn/persistent_storage/img/node.png new file mode 100644 index 00000000..58800516 Binary files /dev/null and b/learn/persistent_storage/img/node.png differ diff --git a/learn/persistent_storage/img/persistent_volumes.png b/learn/persistent_storage/img/persistent_volumes.png new file mode 100644 index 00000000..793bcb0b Binary files /dev/null and b/learn/persistent_storage/img/persistent_volumes.png differ diff --git a/learn/persistent_storage/img/pod.png b/learn/persistent_storage/img/pod.png new file mode 100644 index 00000000..df635ae7 Binary files /dev/null and b/learn/persistent_storage/img/pod.png differ diff --git a/learn/persistent_storage/img/welcome-nginx.png b/learn/persistent_storage/img/welcome-nginx.png new file mode 100644 index 00000000..cb6a1845 Binary files /dev/null and b/learn/persistent_storage/img/welcome-nginx.png differ diff --git a/learn/persistent_storage/img/wordpress.png b/learn/persistent_storage/img/wordpress.png new file mode 100644 index 00000000..f9ae23fb Binary files /dev/null and b/learn/persistent_storage/img/wordpress.png differ diff --git a/learn/persistent_storage/img/wsl_terminal.png b/learn/persistent_storage/img/wsl_terminal.png new file mode 100644 index 00000000..e4cb0eb5 Binary files /dev/null and b/learn/persistent_storage/img/wsl_terminal.png differ diff --git a/learn/persistent_storage/img/wsl_terminal2.png b/learn/persistent_storage/img/wsl_terminal2.png new file mode 100644 index 00000000..fd7896c0 Binary files /dev/null and b/learn/persistent_storage/img/wsl_terminal2.png differ diff --git a/learn/persistent_storage/img/wsl_terminal2_ubuntu.png b/learn/persistent_storage/img/wsl_terminal2_ubuntu.png new file mode 100644 index 00000000..136b3d97 Binary files /dev/null and b/learn/persistent_storage/img/wsl_terminal2_ubuntu.png differ diff --git 
a/learn/persistent_storage/img/wsl_terminal_ubuntu.png b/learn/persistent_storage/img/wsl_terminal_ubuntu.png new file mode 100644 index 00000000..88dacf69 Binary files /dev/null and b/learn/persistent_storage/img/wsl_terminal_ubuntu.png differ diff --git a/learn/persistent_storage/index.html b/learn/persistent_storage/index.html new file mode 100644 index 00000000..1b52cbb5 --- /dev/null +++ b/learn/persistent_storage/index.html @@ -0,0 +1,1102 @@ + + + + + + + + + + + + + + + + + + Persistent Storage for Kubernetes - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEARN »
  • +
  • Persistent Storage for Kubernetes
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

This is a free learning resource from HPE which walks you through various exercises to get you familiar with Kubernetes and provisioning persistent storage using HPE Nimble Storage and HPE Primera storage systems. This guide is by no means a comprehensive overview of the capabilities of Kubernetes but rather a getting started guide for individuals who want to learn how to use Kubernetes with persistent storage.

+ +



+

Kubernetes cluster

+

In Kubernetes, nodes within a cluster pool together their resources (memory and CPU) to distribute workloads. A cluster is made up of control plane and worker nodes that allow you to run your containerized workloads.

+
Control plane
+

The Kubernetes control plane is responsible for maintaining the desired state of your cluster. When you interact with Kubernetes, such as by using the kubectl command-line interface, you’re communicating with your cluster’s Kubernetes API services running on the control plane. Control plane refers to a collection of processes managing the cluster state.

+
Nodes
+

Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.

+

Kubernetes Objects

+

Programs running on Kubernetes are packaged as containers which can run on Linux or Windows. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

+
Pods
+

A Pod is the basic execution unit of a Kubernetes application–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod encapsulates an application’s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.

+
Persistent Volumes
+

Because programs running on your cluster aren’t guaranteed to run on a specific node, data can’t be saved to any arbitrary place in the file system. If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be.

+

To store data permanently, Kubernetes uses a PersistentVolume. Local, external storage via SAN arrays, or cloud drives can be attached to the cluster as a PersistentVolume.

+
Namespaces
+



+Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Namespaces. Namespaces are intended for use in environments with many users spread across multiple teams, or projects. Namespaces are a way to divide cluster resources between multiple users.
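For example, creating and working with a Namespace only takes a couple of kubectl commands (the Namespace name is arbitrary):

kubectl create namespace team-a
kubectl get namespaces
kubectl get pods -n team-a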

+
Deployments
+

A Deployment provides declarative updates for Pods. You declare a desired state for your Pods in your Deployment and Kubernetes will manage it for you automatically.

+
Services
+

A Kubernetes Service object defines a policy for external clients to access an application within a cluster. By default, the container runtime uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for containers to communicate across nodes, there must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. Coordinating port allocations is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that Pods can communicate with other Pods, regardless of which host they land on. Kubernetes gives every Pod its own cluster-private IP address, so you do not need to explicitly create links between Pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all Pods in a cluster can see each other without NAT.
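As a minimal sketch of a Service (the name and ports are illustrative, and the selector matches the labels used by the NGINX Deployment later in this guide), a stable cluster-private address is given to a set of Pods:

kubectl create -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    run: nginx-first-pod    # route traffic to Pods carrying this label
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 80    # port the container listens on
EOF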

+
+

Lab 1: Tour your cluster

+

All of the information presented here is taken from the official documentation found on kubernetes.io/docs.

+

Overview of kubectl

+

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl operations, see Overview of kubectl on kubernetes.io.

+

For more information on how to install and setup kubectl on Linux, Windows or MacOS, see Install and Set Up kubectl on kubernetes.io.

+

Syntax

+

Use the following syntax to run kubectl commands from your terminal window:

+

kubectl [command] [TYPE] [NAME] [flags]

+

where command, TYPE, NAME, and flags are:

+
    +
  • +

    command: Specifies the operation that you want to perform on one or more resources, for example create, get, describe, delete.

    +
  • +
  • +

TYPE: Specifies the resource type. Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, kubectl get pod pod1, kubectl get pods pod1 and kubectl get po pod1 all produce the same output.

    +
  • +
  • +

    NAME: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods.

    +
  • +
+

Get object example command:

+

kubectl get nodes
+kubectl get node <node_name>
+

+

Describe object example command:

+

kubectl describe node <node_name>
+

+

Create object example command:

+

kubectl create -f <file_name or URL>
+

+

The below YAML declarations are meant to be created with kubectl create. Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this:

+

kubectl create -f- (press Enter)
+< paste the YAML >
+(CTRL-D for Linux) or (^D for Mac users)
+

+
+

Kubernetes Cheat Sheet

+

Find more available commands at Kubernetes Cheat Sheet on kubernetes.io.

+
+

Getting to know your cluster:

+

Let's run through some simple kubectl commands to get familiar with your cluster.

+

First we need to open a terminal window; the following commands can be run from Windows, Linux or Mac. In this guide, we will be using the Windows Subsystem for Linux (WSL), which allows us to have a Linux terminal within Windows.

+

To start a WSL terminal session, click the Ubuntu icon in the Windows taskbar.

+

+

It will open a terminal window. We will be working within this terminal throughout this lab.

+

+

In order to communicate with the Kubernetes cluster, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.

+

You will need to request the kubeconfig file from your cluster administrator and copy the file to your local $HOME/.kube/ directory. You may need to create this directory.
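For example, assuming the administrator handed you a file called lab-kubeconfig in your Downloads folder (the file name and location are assumptions), you could put it in place like this:

mkdir -p $HOME/.kube
cp ~/Downloads/lab-kubeconfig $HOME/.kube/config

# Alternatively, point kubectl at the file without copying it
export KUBECONFIG=~/Downloads/lab-kubeconfig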

+

Once you have the kubeconfig file, you can view the config file:

+

kubectl config view
+

+

Check that kubectl and the config file are properly configured by getting the cluster state.

+

kubectl cluster-info
+

+

If you see a URL response, kubectl is correctly configured to access your cluster.

+

The output is similar to this:

+

$ kubectl cluster-info
+Kubernetes control plane is running at https://192.168.1.50:6443
+KubeDNS is running at https://192.168.1.50:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+
+To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+

+

Now let's look at the nodes within our cluster.

+

kubectl get nodes
+

+

You should see output similar to below. As you can see, each node has a role, either control-plane or worker (<none>).

+

$ kubectl get nodes
+NAME          STATUS   ROLES                  AGE     VERSION
+kube-group1   Ready    control-plane,master   2d18h   v1.21.5
+...
+

+

You can list pods.

+

kubectl get pods
+

+
+

Quiz

+

Did you see any Pods listed when you ran kubectl get pods? Why?

If you don't see any Pods listed, it is because there are no Pods deployed within the "default" Namespace. Now run, kubectl get pods --all-namespaces. Does it look any different?

Pay attention to the first column, NAMESPACES. In our case, we are working in the "default" Namespace. Depending on the type of application and your user access level, applications can be deployed within one or more Namespaces.

If you don't see the object (deployment, pod, services, etc) you are looking for, double-check the Namespace it was deployed under and use the -n <namespace> flag to view objects in other Namespaces.

+
+

Once complete, type clear to clear your terminal window.

+
+

Lab 2: Deploy your first Pod (Stateless)

+

A Pod is a collection of containers sharing a network and mount namespace and is the basic unit of deployment in Kubernetes. All containers in a Pod are scheduled on the same node. In our first demo we will deploy a stateless application that has no persistent storage attached. Without persistent storage, any modifications done to the application will be lost if that application is stopped or deleted.

+

Here is a sample NGINX webserver deployment.

+

apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    run: nginx
+  name: first-nginx-pod
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      run: nginx-first-pod
+  template:
+    metadata:
+      labels:
+        run: nginx-first-pod
+    spec:
+      containers:
+      - image: nginx
+        name: nginx
+

+

Open a WSL terminal session, if you don't have one open already.

+

+

At the prompt, we will start by deploying the NGINX example above, by running:

+

kubectl create -f https://scod.hpedev.io/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml
+

+

We can see the Deployment was successfully created and the NGINX Pod is running.

+
+

Note

+

The Pod names will be unique to your deployment.

+
+

$ kubectl get deployments.apps
+NAME              READY   UP-TO-DATE   AVAILABLE   AGE
+first-nginx-pod   1/1     1            1           38s
+
+$ kubectl get pods
+NAME                             READY   STATUS    RESTARTS   AGE
+first-nginx-pod-8d7bb985-rrdv8   1/1     Running   0          10s
+

+
+

Important

+

In a Deployment, a Pod name is generated using the Deployment name and then a randomized hash (i.e. first-nginx-pod-8d7bb985-kql7t) to ensure that each Pod has a unique name. During this lab exercise, make sure to reference the correct object names that are generated in each exercise.

+
+

We can inspect the Pod further using the kubectl describe command.

+
+

Note

+

You can use tab completion to help with Kubernetes commands and objects. Start typing the first few letters of the command or Kubernetes object (e.g. Pod) name and hit TAB and it should autofill the name.
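If completion isn't already enabled in your shell, kubectl can generate it for you. For a bash session (as used in WSL here), this is a minimal way to turn it on:

source <(kubectl completion bash)
# Optionally make it permanent for future sessions
echo 'source <(kubectl completion bash)' >> ~/.bashrc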

+
+

kubectl describe pod <pod_name> 
+

+

The output should be similar to this. Note, the Pod name will be unique to your deployment.

+

Name:         first-nginx-pod-8d7bb985-rrdv8
+Namespace:    default
+Priority:     0
+Node:         kube-group1/10.90.200.11
+Start Time:   Mon, 01 Nov 2021 13:37:59 -0500
+Labels:       pod-template-hash=8d7bb985
+              run=nginx-first-pod
+Annotations:  cni.projectcalico.org/podIP: 192.168.162.9/32
+              cni.projectcalico.org/podIPs: 192.168.162.9/32
+Status:       Running
+IP:           192.168.162.9
+IPs:
+  IP:           192.168.162.9
+Controlled By:  ReplicaSet/first-nginx-pod-8d7bb985
+Containers:
+  nginx:
+    Container ID:   docker://3610d71c054e6b8fdfffbf436511fda048731a456b9460ae768ae7db6e831398
+    Image:          nginx
+    Image ID:       docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
+    Port:           <none>
+    Host Port:      <none>
+    State:          Running
+      Started:      Mon, 01 Nov 2021 13:38:06 -0500
+    Ready:          True
+    Restart Count:  0
+    Environment:    <none>
+    Mounts:
+      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7sbw (ro)
+Conditions:
+  Type              Status
+  Initialized       True
+  Ready             True
+  ContainersReady   True
+  PodScheduled      True
+Volumes:
+  kube-api-access-w7sbw:
+    Type:                    Projected (a volume that contains injected data from multiple sources)
+    TokenExpirationSeconds:  3607
+    ConfigMapName:           kube-root-ca.crt
+    ConfigMapOptional:       <nil>
+    DownwardAPI:             true
+QoS Class:                   BestEffort
+Node-Selectors:              <none>
+Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
+                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
+Events:
+  Type    Reason     Age    From               Message
+  ----    ------     ----   ----               -------
+  Normal  Scheduled  5m14s  default-scheduler  Successfully assigned default/first-nginx-pod-8d7bb985-rrdv8 to kube-group1
+  Normal  Pulling    5m13s  kubelet            Pulling image "nginx"
+  Normal  Pulled     5m7s   kubelet            Successfully pulled image "nginx" in 5.95086952s
+  Normal  Created    5m7s   kubelet            Created container nginx
+  Normal  Started    5m7s   kubelet            Started container nginx
+

+

Looking under the "Events" section is a great place to start when checking for issues or errors during Pod creation.

+

At this stage, the NGINX application is only accessible from within the cluster. Use kubectl port-forward to expose the Pod temporarily outside of the cluster to your workstation.

+

kubectl port-forward <pod_name> 80:80
+

+

The output should be similar to this:

+

$ kubectl port-forward first-nginx-pod-8d7bb985-rrdv8 80:80
+Forwarding from 127.0.0.1:80 -> 80
+Forwarding from [::1]:80 -> 80
+

+
+

Note

+

If you have something already running locally on port 80, modify the port-forward to an unused port (e.g. 5000:80). Port-forward is meant for temporarily exposing an application outside of a Kubernetes cluster. For a more permanent solution, look into Ingress Controllers.

+
+

Finally, open a browser and go to http://127.0.0.1 and you should see the following.

+

+

You have successfully deployed your first Kubernetes pod.

+

With the Pod running, you can log in and explore the Pod.

+

To do this, open a second terminal, by clicking on the WSL terminal icon again. The first terminal should have kubectl port-forward still running.

+

+

Run:

+

kubectl exec -it <pod_name> -- /bin/bash
+

+

You can explore the Pod and run various commands. Some commands might not be available within the Pod. Why would that be?

+

root@first-nginx-pod-8d7bb985-rrdv8:/# df -h
+Filesystem               Size  Used Avail Use% Mounted on
+overlay                   46G  8.0G   38G  18% /
+tmpfs                     64M     0   64M   0% /dev
+tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
+/dev/mapper/centos-root   46G  8.0G   38G  18% /etc/hosts
+shm                       64M     0   64M   0% /dev/shm
+tmpfs                    1.9G   12K  1.9G   1% /run/secrets/kubernetes.io/serviceaccount
+tmpfs                    1.9G     0  1.9G   0% /proc/acpi
+tmpfs                    1.9G     0  1.9G   0% /proc/scsi
+tmpfs                    1.9G     0  1.9G   0% /sys/firmware
+
+

+

While inside the container, you can also modify the webpage.

+

echo "<h1>Hello from the HPE Storage Hands on Labs</h1>" > /usr/share/nginx/html/index.html
+

+

Now switch back over to the browser and refresh the page (http://127.0.0.1), you should see the updated changes to the webpage.

+

Once ready, switch back over to your second terminal, type exit to logout of the NGINX container and close that terminal. Back in your original terminal, use Ctrl+C to exit the port-forwarding.

+

Since this is a stateless application, we will now demonstrate what happens if the NGINX Pod is lost.

+

To do this, simply delete the Pod.

+

kubectl delete pod <pod_name>
+

+

Now run kubectl get pods to see that a new NGINX Pod has been created.

+

Let's use kubectl port-forward again to look at the NGINX application.

+

kubectl port-forward <new_pod_name> 80:80
+

+

Back in your browser, refresh the page (http://127.0.0.1) and you should see that the webpage has reverted back to its default state.

+

+

Back in the terminal, use Ctrl+C to exit the port-forwarding and once ready, type clear to refresh your terminal.

+

The NGINX application has reverted back to default because we didn't store the modifications we made to a location that would persist beyond the life of the container. There are many applications where persistence isn't critical (i.e. Google uses stateless containers for your browser web searches) as they perform computations that are either stored into an external database or passed to subsequent processes.

+

As mission-critical workloads move into Kubernetes, the need for stateful containers is increasingly important. The following exercises will go through how to provision persistent storage to applications using the HPE CSI Driver for Kubernetes backed by HPE Primera or Nimble Storage.

+
+

Lab 3: Install the HPE CSI Driver for Kubernetes

+

To get started, the HPE CSI Driver for Kubernetes is deployed using industry standard means, either a Helm chart or an Operator. For this tutorial, we will be using Helm to deploy the HPE CSI Driver for Kubernetes.

+

The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub. There, you will find the configuration and installation instructions for the chart.

+
+

Note

+

Helm is the package manager for Kubernetes. Software is delivered in a format called a "chart". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file.

+
+

Installing the Helm chart

+

Open a WSL terminal session, if you don't have one open already.

+

+

To install the chart with the name my-hpe-csi-driver, add the HPE CSI Driver for Kubernetes Helm repo.

+

helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
+helm repo update
+

+

Install the latest chart.

+

kubectl create ns hpe-storage
+helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage
+

+

Wait a few minutes as the deployment finishes.

+

Verify that everything is up and running correctly by listing out the Pods.

+

kubectl get pods -n hpe-storage
+

+

The output is similar to this:

+
+

Note

+

The Pod names will be unique to your deployment.

+
+

$ kubectl get pods -n hpe-storage
+NAME                                      READY   STATUS    RESTARTS   AGE
+pod/hpe-csi-controller-6f9b8c6f7b-n7zcr   9/9     Running   0          7m41s
+pod/hpe-csi-node-npp59                    2/2     Running   0          7m41s
+pod/nimble-csp-5f6cc8c744-rxgfk           1/1     Running   0          7m41s
+pod/primera3par-csp-7f78f498d5-4vq9r      1/1     Running   0          7m41s
+

+

If all of the components show in Running state, then the HPE CSI Driver for Kubernetes and the corresponding Container Storage Providers (CSP) for HPE Alletra, Primera and Nimble Storage have been successfully deployed.

+
+

Important

+

With the HPE CSI Driver deployed, the rest of this guide is designed to demonstrate the usage of the CSI driver with HPE Primera or Nimble Storage. You will need to choose which storage system (HPE Primera or Nimble Storage) to use for the rest of the exercises. While the HPE CSI Driver supports connectivity to multiple backends, configuring multiple backends is outside of the scope of this lab guide.

+
+

Creating a Secret

+

Once the HPE CSI Driver has been deployed, a Secret needs to be created in order for the CSI driver to communicate with the HPE Primera or Nimble Storage system. This Secret, which contains the storage system IP and credentials, is used by the CSI driver sidecars within the StorageClass to authenticate to a specific backend for various CSI operations. For more information, see adding an HPE storage backend.

+

Here is an example Secret.

+

apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-secret
+  namespace: hpe-storage 
+stringData:
+  serviceName: primera3par-csp-svc 
+  servicePort: "8080"
+  backend: 10.10.0.2
+  username: <user>
+  password: <password>
+

+

Download and modify, using the text editor of your choice, the Secret file with the backend IP per your environment.

+
wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/nimble-secret.yaml
+
wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/primera-secret.yaml
+
+

Save the file and create the Secret within the cluster.

+
kubectl create -f nimble-secret.yaml
+
kubectl create -f primera-secret.yaml
+
+

The Secret should now be available in the "hpe-storage" Namespace:

+

kubectl -n hpe-storage get secret/custom-secret
+NAME                     TYPE          DATA      AGE
+custom-secret            Opaque        5         1m
+

+

If you made a mistake when creating the Secret, simply delete the object (kubectl -n hpe-storage delete secret/custom-secret) and repeat the steps above.

+

Creating a StorageClass

+

Now we will create a StorageClass that will be used in the following exercises. A StorageClass (SC) specifies which storage provisioner to use (in our case the HPE CSI Driver) and the volume parameters (such as Protection Templates, Performance Policies, CPG, etc.) for the volumes that we want to create which can be used to differentiate between storage levels and usages.

+

This concept is sometimes called “profiles” in other storage systems. A cluster can have multiple StorageClasses allowing users to create storage claims tailored for their specific application requirements.

+

We will start by creating a StorageClass called hpe-standard. We will use the custom-secret created in the previous step and specify the hpe-storage namespace where the CSI driver was deployed.

+

Here are example StorageClasses for HPE Primera and Nimble Storage systems and some of the available volume parameters that can be defined. See the respective CSP for more elaborate examples.

+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-standard
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: custom-secret
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: custom-secret
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: custom-secret
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-expand-secret-name: custom-secret
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  performancePolicy: "SQL Server"
+  description: "Volume from HPE CSI Driver"
+  accessProtocol: iscsi
+  limitIops: "76800"
+  allowOverrides: description,limitIops,performancePolicy
+allowVolumeExpansion: true
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-standard
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: custom-secret
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: custom-secret
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: custom-secret
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: custom-secret
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-expand-secret-name: custom-secret
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  cpg: SSD_r6
+  provisioningType: tpvv
+  accessProtocol: iscsi
+  allowOverrides: cpg,provisioningType
+allowVolumeExpansion: true
+
+

Create the StorageClass within the cluster

+
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/nimble-storageclass.yaml
+
kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/primera-storageclass.yaml
+
+

We can verify the StorageClass is now available.

+

kubectl get sc
+NAME                     PROVISIONER   AGE
+hpe-standard (default)   csi.hpe.com   2m
+

+
+

Note

+

You can create multiple StorageClasses to match the storage requirements of your applications. We set hpe-standard StorageClass as default using the annotation storageclass.kubernetes.io/is-default-class: "true". There can only be one default StorageClass per cluster, for any additional StorageClasses set this to false. To learn more about configuring a default StorageClass, see Default StorageClass on kubernetes.io.
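Should you ever need to demote an existing default StorageClass (for example before promoting another one), the annotation can be flipped with kubectl patch; hpe-standard is used here purely as an illustration:

kubectl patch storageclass hpe-standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'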

+
+
+

Lab 4: Creating a Persistent Volume using HPE Storage

+

With the HPE CSI Driver for Kubernetes deployed and a StorageClass available, we can now provision persistent volumes.

+
    +
  • +

A PersistentVolumeClaim (PVC) is a request for storage by a user. Claims can request storage of a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany). The accessMode will be dependent on the type of storage system and the application requirements. Block storage, like HPE Primera and Nimble Storage, provisions volumes using the ReadWriteOnce access mode where the volume can only be mounted to a single node within the cluster at a time. Any applications running on that node can access that volume. Applications deployed across multiple nodes within a cluster that require shared access (ReadWriteMany) to the same PersistentVolume will need to use NFS or a distributed storage system such as MapR, Gluster or Ceph.

    +
  • +
  • +

    A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

    +
  • +
+

Creating a PersistentVolumeClaim

+

With a StorageClass available, we can request an amount of storage for our application using a PersistentVolumeClaim. Here is a sample PVC.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 50Gi
+

+
+

Note

+

We don't have a StorageClass (SC) explicitly defined within this PVC therefore it will use the default StorageClass. You can use spec.storageClassName to override the default SC with another one available to the cluster.
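A minimal sketch of such an override (the claim name, StorageClass name and size are arbitrary and assume a second StorageClass exists in your cluster):

kubectl create -f- <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-fast-pvc
spec:
  storageClassName: hpe-fast    # overrides the cluster default StorageClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF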

+
+

Create the PersistentVolumeClaim.

+

kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-pvc.yaml
+

+

We can see the my-pvc PersistentVolumeClaim was created.

+

kubectl get pvc
+NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+my-pvc                        Bound    pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8   50Gi       RWO            hpe-standard   72m
+

+
+

Note

+

The Persistent Volume name is a randomly generated name by Kubernetes. For consistent naming for your stateful applications, check out StatefulSet deployment model. These names can be used to track the volume back to the storage system. It is important to note that HPE Primera has a 30 character limit on volume names therefore the name will be truncated. For example: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 will be truncated to pvc-70d5caf8-7558-40e6-a8b7-77d on an HPE Primera system.

+
+

We can inspect the PVC further for additional information including event logs for troubleshooting.

+

kubectl describe pvc my-pvc
+

+

Check the Events section to see if there were any issues during creation.

+

The output is similar to this:

+

$ kubectl describe pvc my-pvc
+Name:          my-pvc
+Namespace:     default
+StorageClass:  hpe-standard
+Status:        Bound
+Volume:        pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
+Labels:        <none>
+Annotations:   pv.kubernetes.io/bind-completed: yes
+               pv.kubernetes.io/bound-by-controller: yes
+               volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
+Finalizers:    [kubernetes.io/pvc-protection]
+Capacity:      50Gi
+Access Modes:  RWO
+VolumeMode:    Filesystem
+Mounted By:    <none>
+Events:        <none>
+

+

We can also inspect the PersistentVolume (PV) in a similar manner. Note, the volume name will be unique to your deployment.

+

kubectl describe pv <volume_name>
+

+

The output is similar to this:

+

$ kubectl describe pv pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
+Name:            pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8
+Labels:          <none>
+Annotations:     pv.kubernetes.io/provisioned-by: csi.hpe.com
+Finalizers:      [kubernetes.io/pv-protection]
+StorageClass:    hpe-standard
+Status:          Bound
+Claim:           default/my-pvc
+Reclaim Policy:  Delete
+Access Modes:    RWO
+VolumeMode:      Filesystem
+Capacity:        50Gi
+Node Affinity:   <none>
+Message:
+Source:
+    Type:              CSI (a Container Storage Interface (CSI) volume source)
+    Driver:            csi.hpe.com
+    VolumeHandle:      063aba3d50ec99d866000000000000000000000001
+    ReadOnly:          false
+    VolumeAttributes:      accessProtocol=iscsi
+                           allowOverrides=description,limitIops,performancePolicy
+                           description=Volume from HPE CSI Driver
+                           fsType=xfs
+                           limitIops=76800
+                           performancePolicy=SQL Server
+                           storage.kubernetes.io/csiProvisionerIdentity=1583271972595-8081-csi.hpe.com
+                           volumeAccessMode=mount
+Events:                <none>
+

+

With the describe command, you can see the volume parameters used to create this volume. In this case, Nimble Storage parameters performancePolicy, limitIops, etc.

+
+

Important

+

If the PVC is stuck in Pending state, double-check that the Secret and Namespace are correct within the StorageClass (sc) and that the volume parameters are valid. If necessary, delete the object (sc or pvc) with kubectl delete <object_type> <object_name> and repeat the steps above.

+
+

Let's recap what we have learned.

+
    +
  1. We created a default StorageClass for our volumes.
  2. +
  3. We created a PVC that created a volume from the StorageClass.
  4. +
  5. We can use kubectl get to list the StorageClass, PVC and PV.
  6. +
  7. We can use kubectl describe to get details on the StorageClass, PVC or PV.
  8. +
+

At this point, we have validated the deployment of the HPE CSI Driver and are ready to deploy an application with persistent storage.

+
+

Lab 5: Deploying a Stateful Application using HPE Storage (WordPress)

+

To begin, we will create two PersistentVolumes for the WordPress application using the default hpe-standard StorageClass we created previously. If you don't have the hpe-standard StorageClass available, please refer to the StorageClass section for instructions on creating a StorageClass.

+

Create a PersistentVolumeClaim for the MariaDB database that will be used by WordPress.

+

kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml
+

+

Next let's make another volume for the WordPress application.

+

kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-wordpress-pvc.yaml
+

+

Now verify the PersistentVolumes were created successfully. The output should be similar to the following. Note, the volume names will be unique to your deployment.

+

kubectl get pvc
+NAME                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+data-my-wordpress-mariadb-0   Bound     pvc-1abdb7d7-374e-45b3-8fa1-534131ec7ec6   50Gi       RWO            hpe-standard   1m
+my-wordpress                  Bound     pvc-ff6dc8fd-2b14-4726-b608-be8b27485603   20Gi       RWO            hpe-standard   1m
+

+

The above output means that the HPE CSI Driver has successfully provisioned two volumes based upon the default hpe-standard StorageClass. At this stage, the volumes are not attached (exported) to any nodes yet. They will only be attached (exported) to a node once a scheduled workload requests the PersistentVolumeClaims.

+

We will use Helm again to deploy WordPress using the PersistentVolumeClaims we just created. When WordPress is deployed, the volumes will be attached, formatted and mounted.

+

The first step is to add the WordPress chart to Helm. The output should be similar to below.

+

helm repo add bitnami https://charts.bitnami.com/bitnami
+helm repo update
+helm search repo bitnami/wordpress
+NAME                    CHART VERSION   APP VERSION     DESCRIPTION
+bitnami/wordpress       11.0.13         5.7.2           Web publishing platform for building blogs and ...
+

+

Next, deploy WordPress by setting the deployment parameter persistence.existingClaim=<existing_PVC> to the PVC my-wordpress created in the previous step.

+

helm install my-wordpress bitnami/wordpress --version 9.2.1 --set service.type=ClusterIP,wordpressUsername=admin,wordpressPassword=adminpassword,mariadb.mariadbRootPassword=secretpassword,persistence.existingClaim=my-wordpress,allowEmptyPassword=false 
+

+

Check to verify that WordPress and MariaDB were deployed and are in the Running state. This may take a few minutes.

+
+

Note

+

The Pod names will be unique to your deployment.

+
+

kubectl get pods
+NAME                            READY     STATUS    RESTARTS   AGE
+my-wordpress-69b7976c85-9mfjv   1/1       Running   0          2m
+my-wordpress-mariadb-0          1/1       Running   0          2m
+

+

Finally take a look at the WordPress site. Again, we can use kubectl port-forward to access the WordPress application and verify everything is working correctly.

+

kubectl port-forward svc/my-wordpress 80:80
+

+
+

Note

+

If you have something already running locally on port 80, modify the port-forward to an unused port (i.e. 5000:80).

+
+

Open a browser on your workstation to http://127.0.0.1 and you should see, "Hello World!".

+

Access the admin console at: http://127.0.0.1/admin using the "admin/adminpassword" we specified when deploying the Helm Chart.

+

+

Create a new blog post so you have data stored in the WordPress application.

+

Happy Blogging!

+

Once ready, hit "Ctrl+C" in your terminal to stop the port-forward.

+

Verify the WordPress application is using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims.

+

kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}'
+

+
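The exact output depends on Pod ordering, but both claim names should be listed, similar to:

my-wordpress data-my-wordpress-mariadb-0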

With the WordPress application using persistent storage for both the database and the application data, if the WordPress Pod crashes or is deleted, the PersistentVolumeClaims will simply be remounted to the replacement Pod.

+

Delete the WordPress Pod.

+

kubectl delete pod <my-wordpress_pod_name>
+

+

For example.

+

$ kubectl delete pod my-wordpress-69b7976c85-9mfjv
+pod "my-wordpress-69b7976c85-9mfjv" deleted
+

+

Now run kubectl get pods and you should see the WordPress Pod recreating itself with a new name. This may take a few minutes.

+

The output should be similar to the following while the WordPress container is recreating.

+

$ kubectl get pods
+NAME                             READY   STATUS              RESTARTS   AGE
+my-wordpress-mariadb-0           1/1     Running             1          10m
+my-wordpress-7856df6756-m2nw8    0/1     ContainerCreating   0          33s
+

+

Once the WordPress Pod is in the Ready state, we can verify that the WordPress application is still using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims.

+

kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}'
+

+

And finally, run kubectl port-forward again to verify that the changes made to the WordPress application survived the deletion of the application Pod.

+

kubectl port-forward svc/my-wordpress 80:80
+

+

Open a browser on your workstation to http://127.0.0.1 and you should see your WordPress site running.

+

This completes the tutorial on using the HPE CSI Driver with HPE storage to create PersistentVolumes within Kubernetes. This is just the beginning of what the HPE storage integrations within Kubernetes can do. We recommend exploring SCOD further, along with the specific HPE storage CSPs (Nimble, Primera and 3PAR), to learn more.

+

Optional Lab: Advanced Configuration

+

Configuring additional storage backends

+

It's not uncommon to have multiple HPE primary storage systems within the same environment, either from the same family or from different ones. This section walks through the scenario of managing multiple StorageClass and Secret API objects to represent an environment with multiple systems.

+

To view the current Secrets in the hpe-storage Namespace (assuming default names):

+

kubectl -n hpe-storage get secret
+NAME                     TYPE          DATA      AGE
+custom-secret            Opaque        5         10m
+

+

This Secret is used by the CSI sidecars in the StorageClass to authenticate to a specific backend for CSI operations. In order to add a new Secret or manage access to multiple backends, additional Secrets will need to be created per backend.

+
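If you're unsure which Secret an existing StorageClass references, you can inspect its parameters. As a quick sketch using the hpe-standard StorageClass created earlier:

kubectl get sc hpe-standard -o yaml | grep secret-name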

If you connected to a Nimble Storage array in the previous steps, create a new Secret for a Primera array; if you connected to a Primera array, create a Secret for a Nimble Storage array instead.

+
+

Secret Requirements

+
    +
  • Each Secret name must be unique.
  • +
  • servicePort should be set to 8080.
  • +
+
+

Using your text editor of choice, create a new Secret, specify the name, Namespace, backend username, backend password and the backend IP address to be used by the CSP and save it as gold-secret.yaml.

+
apiVersion: v1
+kind: Secret
+metadata:
+  name: gold-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: nimble-csp-svc
+  servicePort: "8080"
+  backend: 192.168.1.2
+  username: admin
+  password: admin
+
apiVersion: v1
+kind: Secret
+metadata:
+  name: gold-secret
+  namespace: hpe-storage
+stringData:
+  serviceName: primera3par-csp-svc
+  servicePort: "8080"
+  backend: 10.10.0.2
+  username: 3paradm
+  password: 3pardata
+
+

Create the Secret using kubectl:

+

kubectl create -f gold-secret.yaml
+

+

You should now see the Secret in the "hpe-storage" Namespace:

+

kubectl -n hpe-storage get secret
+NAME                     TYPE          DATA      AGE
+gold-secret              Opaque        5         1m
+custom-secret            Opaque        5         15m
+

+

Create a StorageClass with the new Secret

+

To use the new gold-secret, create a new StorageClass using the Secret and the necessary StorageClass parameters. Please see the requirements section of the respective CSP.

+

We will start by creating a StorageClass called hpe-gold. We will use the gold-secret created in the previous step and specify the hpe-storage Namespace where the CSI driver was deployed.

+
+

Note

+

Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.

+
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-gold
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: gold-secret
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: gold-secret
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: gold-secret
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: gold-secret
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-expand-secret-name: gold-secret
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  performancePolicy: "SQL Server"
+  description: "Volume from HPE CSI Driver"
+  accessProtocol: iscsi
+  limitIops: "76800"
+  allowOverrides: description,limitIops,performancePolicy
+allowVolumeExpansion: true
+
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hpe-gold
+provisioner: csi.hpe.com
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  csi.storage.k8s.io/provisioner-secret-name: gold-secret
+  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-publish-secret-name: gold-secret
+  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-stage-secret-name: gold-secret
+  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
+  csi.storage.k8s.io/node-publish-secret-name: gold-secret
+  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+  csi.storage.k8s.io/controller-expand-secret-name: gold-secret
+  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+  cpg: SSD_r6
+  provisioningType: tpvv
+  accessProtocol: iscsi
+  allowOverrides: cpg,provisioningType
+allowVolumeExpansion: true
+
+

We can verify the StorageClass is now available.

+

kubectl get sc
+NAME                     PROVISIONER   AGE
+hpe-standard (default)   csi.hpe.com   15m
+hpe-gold                 csi.hpe.com   1m
+
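Notice that hpe-standard is still marked as (default). If you ever want to move the default designation to another StorageClass, the is-default-class annotation can be toggled with kubectl patch. This is just a sketch and is not required for this lab:

kubectl patch storageclass hpe-standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'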

+
+

Note

+

Don't forget to call out the StorageClass explicitly when creating PVCs from non-default StorageClasses.

+
+

Creating a PersistentVolumeClaim

+

With a StorageClass available, we can request an amount of storage for our application using a PersistentVolumeClaim. Using your text editor of choice, create a new PVC and save it as gold-pvc.yaml.

+

apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: gold-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 50Gi
+  storageClassName: hpe-gold
+

+

Create the PersistentVolumeClaim.

+

kubectl create -f gold-pvc.yaml
+

+

We can see the gold-pvc PersistentVolumeClaim was created, alongside the my-pvc claim from earlier in the lab.

+

kubectl get pvc
+NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+my-pvc                        Bound    pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8   50Gi       RWO            hpe-standard   72m
+gold-pvc                      Bound    pvc-7a74d656-0b14-42a2-9437-e374a5d3bd68   50Gi       RWO            hpe-gold       1m
+

+

You can see that the new PVC is using the new StorageClass, which is backed by the additional storage backend. This gives you the flexibility to match the persistent storage requirements of each containerized workload to the most appropriate backend.

+
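To double-check which StorageClass a given PVC is using, you can also query it directly. A quick sketch using the gold-pvc claim:

kubectl get pvc gold-pvc -o jsonpath='{.spec.storageClassName}'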

Cleanup (Optional)

+

As others will be using this lab at a later time, we can clean up the objects that were deployed during this lab exercise.

+
+

Note

+

These steps may take a few minutes to complete. Please be patient and don't cancel the process.

+
+

Remove WordPress & NGINX deployments.

+

helm uninstall my-wordpress && kubectl delete all --all
+

+

Delete the PersistentVolumeClaims and related objects.

+

kubectl delete pvc --all && kubectl delete sc --all
+

+

Remove the HPE CSI Driver for Kubernetes.

+

helm uninstall my-hpe-csi-driver -n hpe-storage
+

+

It takes a couple of minutes to clean up the objects from the CSI driver. You can check the status:

+

watch kubectl get all -n hpe-storage
+

+

Once everything is removed, press Ctrl+C to exit the watch, and finally remove the Namespace.

+

kubectl delete ns hpe-storage
+

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/learn/persistent_storage/yaml/my-pvc.yaml b/learn/persistent_storage/yaml/my-pvc.yaml new file mode 100644 index 00000000..d7170239 --- /dev/null +++ b/learn/persistent_storage/yaml/my-pvc.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: my-pvc +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Gi \ No newline at end of file diff --git a/learn/persistent_storage/yaml/my-wordpress-pvc.yaml b/learn/persistent_storage/yaml/my-wordpress-pvc.yaml new file mode 100644 index 00000000..6fbe5951 --- /dev/null +++ b/learn/persistent_storage/yaml/my-wordpress-pvc.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: my-wordpress +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 20Gi \ No newline at end of file diff --git a/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml b/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml new file mode 100644 index 00000000..8cb0cd95 --- /dev/null +++ b/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml @@ -0,0 +1,19 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + run: nginx + name: first-nginx-pod +spec: + replicas: 1 + selector: + matchLabels: + run: nginx-first-pod + template: + metadata: + labels: + run: nginx-first-pod + spec: + containers: + - image: nginx + name: nginx \ No newline at end of file diff --git a/learn/persistent_storage/yaml/nimble-secret.yaml b/learn/persistent_storage/yaml/nimble-secret.yaml new file mode 100644 index 00000000..f2d14871 --- /dev/null +++ b/learn/persistent_storage/yaml/nimble-secret.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: custom-secret + namespace: hpe-storage +stringData: + serviceName: nimble-csp-svc + servicePort: "8080" + backend: 192.168.1.2 + username: admin + password: "!HPEstorage2050" \ No newline at end of file diff --git a/learn/persistent_storage/yaml/nimble-storageclass.yaml b/learn/persistent_storage/yaml/nimble-storageclass.yaml new file mode 100644 index 00000000..24a232f3 --- /dev/null +++ b/learn/persistent_storage/yaml/nimble-storageclass.yaml @@ -0,0 +1,25 @@ +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: hpe-standard + annotations: + storageclass.kubernetes.io/is-default-class: "true" +provisioner: csi.hpe.com +parameters: + csi.storage.k8s.io/fstype: xfs + csi.storage.k8s.io/provisioner-secret-name: custom-secret + csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage + csi.storage.k8s.io/controller-publish-secret-name: custom-secret + csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage + csi.storage.k8s.io/node-stage-secret-name: custom-secret + csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage + csi.storage.k8s.io/node-publish-secret-name: custom-secret + csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage + csi.storage.k8s.io/controller-expand-secret-name: custom-secret + csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage + performancePolicy: "SQL Server" + description: "Volume from HPE CSI Driver" + accessProtocol: iscsi + limitIops: "76800" + allowOverrides: description,limitIops,performancePolicy +allowVolumeExpansion: true \ No newline at end of file diff --git a/learn/persistent_storage/yaml/primera-secret.yaml b/learn/persistent_storage/yaml/primera-secret.yaml new file mode 100644 index 00000000..3a1eb175 --- /dev/null +++ b/learn/persistent_storage/yaml/primera-secret.yaml @@ -0,0 
+1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: custom-secret + namespace: hpe-storage +stringData: + serviceName: primera3par-csp-svc + servicePort: "8080" + backend: 10.10.0.2 + username: 3paradm + password: 3pardata \ No newline at end of file diff --git a/learn/persistent_storage/yaml/primera-storageclass.yaml b/learn/persistent_storage/yaml/primera-storageclass.yaml new file mode 100644 index 00000000..6d03380b --- /dev/null +++ b/learn/persistent_storage/yaml/primera-storageclass.yaml @@ -0,0 +1,24 @@ +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: hpe-standard + annotations: + storageclass.kubernetes.io/is-default-class: "true" +provisioner: csi.hpe.com +parameters: + csi.storage.k8s.io/fstype: xfs + csi.storage.k8s.io/provisioner-secret-name: custom-secret + csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage + csi.storage.k8s.io/controller-publish-secret-name: custom-secret + csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage + csi.storage.k8s.io/node-stage-secret-name: custom-secret + csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage + csi.storage.k8s.io/node-publish-secret-name: custom-secret + csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage + csi.storage.k8s.io/controller-expand-secret-name: custom-secret + csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage + cpg: SSD_r6 + provisioningType: tpvv + accessProtocol: iscsi + allowOverrides: cpg,provisioningType +allowVolumeExpansion: true \ No newline at end of file diff --git a/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml b/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml new file mode 100644 index 00000000..7b88cfc2 --- /dev/null +++ b/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml @@ -0,0 +1,10 @@ +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: data-my-wordpress-mariadb-0 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 50Gi diff --git a/learn/video_gallery/index.html b/learn/video_gallery/index.html new file mode 100644 index 00000000..4ce1f0ae --- /dev/null +++ b/learn/video_gallery/index.html @@ -0,0 +1,458 @@ + + + + + + + + + + + + + + + + + + Video Gallery - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEARN »
  • +
  • Video Gallery
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

Welcome to the Video Gallery. This is a collection of current YouTube assets that pertain to supported HPE primary storage container technologies.

+ +

CSI driver management

+

How to manage the components that surround driver deployment.

+

Managing multiple HPE storage backends using the HPE CSI Driver

+

This tutorial talks about managing multiple Secrets and StorageClasses to distinguish different backends.

+ + +

Watch on YouTube

+

Container Storage Providers

+

Each CSP has its own features and perks, learn about the different platforms right here.

+

HPE Alletra 9000 and Primera

+

Using the HPE CSI Driver with HPE Primera

+

This tutorial showcases a few of the HPE Primera specific features with the HPE CSI Driver.

+ + +

Watch on YouTube

+

Configuring HPE Primera Peer Persistence with the HPE CSI Operator for Kubernetes on Red Hat OpenShift

+

Learn how to configure HPE Primera Peer Persistence using the HPE CSI Driver.

+ + +

Watch on YouTube

+

HPE Alletra 5000/6000 and Nimble Storage

+

Using the HPE CSI Driver with HPE Nimble Storage

+

This tutorial showcases a few of the HPE Nimble Storage specific features with the HPE CSI Driver.

+ + +

Watch on YouTube

+

Manage multitenancy at scale with HPE Alletra 5000/6000 and Nimble Storage

+

This lightboard video discusses the advantages of using HPE Alletra 5000/6000 or Nimble Storage to handle multitenancy for storage resources between Kubernetes clusters.

+ + +

Watch on YouTube

+

Provisioning

+

The provisioning topic covers provisioning of storage resources, such as volumes, snapshots and clones, on container orchestrators.

+

Dynamic Provisioning of Persistent Storage on Kubernetes

+

Learn the fundamentals of storage provisioning on Kubernetes.

+ + +

Watch on YouTube

+

HPE Developer Hack Shack Workshop: Using the Container Storage Interface

+

An interactive CSI workshop from HPE Discover Virtual Experience. It explains key provisioning concepts, including CSI snapshots and clones, ephemeral inline volumes, raw block volumes and how to use the NFS server provisioner.

+ + +

Watch on YouTube

+

Using the HPE CSI Driver to create CSI snapshots and clones

+

Learn how to use CSI snapshots and clones with the HPE CSI Driver.

+ + +

Watch on YouTube

+

Synchronize Volume Snapshots for Distributed Workloads

+

Explore how to take advantage of the HPE CSI Driver's exclusive features VolumeGroups and SnapshotGroups.

+ + +

Watch on YouTube

+ + +

Adapt stateful workloads dynamically with the HPE CSI Driver for Kubernetes

+

Learn how to use volume mutations to adapt stateful workloads with the HPE CSI Driver.

+ + +

Watch on YouTube

+

Partner Ecosystems

+

Joint solutions with our revered ecosystem partners.

+

Get started with Kasten K10 by Veeam and the HPE CSI Driver

+

This tutorial explains how to deploy the necessary components for Kasten K10 and how to perform snapshots and restores using the HPE CSI Driver.

+ + +

Watch on YouTube

+

Install the HPE CSI Operator for Kubernetes on Red Hat OpenShift

+

This tutorial goes through the steps of installing the HPE CSI Operator on Red Hat OpenShift.

+ + +

Watch on YouTube

+

Using HPE Primera and HPE Nimble Storage with the VMware Tanzu and vSphere CSI Driver

+

This tutorial shows how to use HPE storage with VMware Tanzu as well as how to configure the vSphere CSI Driver for Kubernetes clusters running on VMware leveraging HPE storage.

+ + +

Watch on YouTube

+

Monitoring, Metering and Diagnostics

+

Tutorials and demos showcasing monitoring and troubleshooting.

+

Get Started with the HPE Storage Array Exporter for Prometheus on Kubernetes

+

Learn how to stand up a Prometheus and Grafana environment on Kubernetes and start using the HPE Storage Array Exporter for Prometheus and the HPE CSI Info Metrics Provider for Prometheus to provide monitoring and alerting.

+ + +

Watch on YouTube

+

Use Cases

+

Lift and Transform Apps and Data with HPE Storage

+

This lightboard video discusses how to lift and transform applications running on traditional infrastructure over to Kubernetes using the HPE CSI Driver. Learn the details on what makes this possible in this HPE Developer blog post.

+ + +

Watch on YouTube

+

Watch more

+

A curated playlist of content related to HPE primary storage and containers is available on YouTube.

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/legacy/index.html b/legacy/index.html new file mode 100644 index 00000000..306036f5 --- /dev/null +++ b/legacy/index.html @@ -0,0 +1,268 @@ + + + + + + + + + + + + + + + + + + Docker, FlexVolume and CSPs - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEGACY DRIVERS AND PLUGINS »
  • +
  • Docker, FlexVolume and CSPs
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

These integrations are either already deprecated or being phased out. Please work with your HPE representative if you think you need to run any of these plugins and drivers.

+

Container Storage Providers

+ +

Legacy FlexVolume drivers

+ +

Docker Volume plugins

+ + +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + +
+ + + + + + + + diff --git a/legal/contributing/index.html b/legal/contributing/index.html new file mode 100644 index 00000000..02e9eff4 --- /dev/null +++ b/legal/contributing/index.html @@ -0,0 +1,313 @@ + + + + + + + + + + + + + + + + + + Contributing - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEGAL »
  • +
  • Contributing
  • +
  • +
  • +
+
+
+
+
+ +

Introduction

+

We welcome and encourage community contributions to SCOD.

+

Where to start?

+

The best way to directly collaborate with the project contributors is through GitHub: https://github.com/hpe-storage/scod

+
    +
  • If you want to contribute to our documentation by either fixing a typo or creating a page, please open a GitHub pull request.
  • +
  • If you want to raise an issue such as a defect, an enhancement request or a general issue, please open a GitHub issue.
  • +
+

Before you start writing, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your contribution, and help you find out if someone else is working on the same thing.

+

Note that all submissions from all contributors get reviewed. After a pull request is made, other contributors will offer feedback. If the patch passes review, a maintainer will accept it with a comment. When a pull request fails review, the author is expected to update the pull request to address the issue until it passes review and the pull request merges successfully.

+

At least one review from a maintainer is required for all patches.

+

Developer's Certificate of Origin

+

All contributions must include acceptance of the DCO:

+
+

Developer Certificate of Origin Version 1.1

+

Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 660 York Street, Suite 102, San Francisco, CA 94110 USA

+

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

+

Developer's Certificate of Origin 1.1

+

By making a contribution to this project, I certify that:

+

(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or

+

(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or

+

(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.

+

(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.

+
+

Sign your work

+

To accept the DCO, simply add this line to each commit message with your name and email address (git commit -s will do this for you):

+

Signed-off-by: Jane Example <jane@example.com>
+

+
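For example, the sign-off is added automatically when committing with the -s flag (the commit message below is just a placeholder):

git commit -s -m "Fix typo in the StorageClass example"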

For legal reasons, no anonymous or pseudonymous contributions are accepted.

+

Submitting Pull Requests

+

We encourage and support contributions from the community. No fix is too small. We strive to process all pull requests as soon as possible and with constructive feedback. If your pull request is not accepted at first, please try again after addressing the feedback you received.

+

To make a pull request you will need a GitHub account. For help, see GitHub's documentation on forking and pull requests.

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/legal/license/index.html b/legal/license/index.html new file mode 100644 index 00000000..0db524ae --- /dev/null +++ b/legal/license/index.html @@ -0,0 +1,449 @@ + + + + + + + + + + + + + + + + + + License - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEGAL »
  • +
  • License
  • +
  • +
  • +
+
+
+
+
+ +

                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/legal/notices/index.html b/legal/notices/index.html new file mode 100644 index 00000000..7a8a12d4 --- /dev/null +++ b/legal/notices/index.html @@ -0,0 +1,1494 @@ + + + + + + + + + + + + + + + + + + Notices - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEGAL »
  • +
  • Notices
  • +
  • +
  • +
+
+
+
+
+ +

Attributions for third party components.

+ +

HPE CSI Info Metrics Provider for Prometheus

+

HPE CSI Info Metrics Provider for Prometheus
+Copyright 2020-2024 Hewlett Packard Enterprise Development LP
+
+This product contains the following third party components:
+
+Google Cloud Go
+cloud.google.com/go
+Licensed under the Apache-2.0 license
+
+mtl
+dmitri.shuralyov.com/gpu/mtl
+Licensed under the BSD-3-Clause license
+
+go-autorest
+github.com/Azure/go-autorest
+Licensed under the Apache-2.0 license
+
+Tom's Obvious Minimal Language
+github.com/BurntSushi/toml
+Licensed under the MIT license
+
+X Go Binding
+github.com/BurntSushi/xgb
+Licensed under the BSD-3-Clause license
+
+Gzip Handler
+github.com/NYTimes/gziphandler
+Licensed under the Apache-2.0 license
+
+Purell
+github.com/PuerkitoBio/purell
+Licensed under the BSD-3-Clause license 
+
+urlesc
+github.com/PuerkitoBio/urlesc
+Licensed under the BSD-3-Clause license
+
+text/template
+github.com/alecthomas/template
+Licensed under the BSD-3-Clause license
+
+Units
+github.com/alecthomas/units
+Licensed under the MIT license
+
+govalidator
+github.com/asaskevich/govalidator
+Licensed under the MIT license
+
+Perks for Go
+github.com/beorn7/perks
+Licensed under the MIT license
+
+OpenCensus Proto
+github.com/census-instrumentation/opencensus-proto
+Licensed under the Apache-2.0 license
+
+xxhash
+github.com/cespare/xxhash/v2
+Licensed under the MIT license
+
+Logex
+github.com/chzyer/logex
+Licensed under the MIT license 
+
+ReadLine
+github.com/chzyer/readline
+Licensed under the MIT license
+
+test
+github.com/chzyer/test
+Licensed under the MIT license 
+
+misspell
+github.com/client9/misspell
+Licensed under the MIT license
+
+pty
+github.com/creack/pty
+Licensed under the MIT license 
+
+go-spew
+github.com/davecgh/go-spew
+Licensed under the ISC license
+
+docopt-go
+github.com/docopt/docopt-go
+Licensed under the MIT license
+
+goproxy
+github.com/elazarl/goproxy
+Licensed under the BSD-3-Clause license
+
+go-restful
+github.com/emicklei/go-restful
+Licensed under the MIT license
+
+control-plane
+github.com/envoyproxy/go-control-plane
+Licensed under the Apache-2.0 license 
+
+protoc-gen-validate (PGV)
+github.com/envoyproxy/protoc-gen-validate
+Licensed under the Apache-2.0 license
+
+JSON-Patch
+github.com/evanphx/json-patch
+Licensed under the BSD-3-Clause license
+
+jwt-go
+github.com/form3tech-oss/jwt-go
+Licensed under the MIT license
+
+File system notifications for Go
+github.com/fsnotify/fsnotify
+Licensed under the BSD-3-Clause license
+
+GLFW for Go
+github.com/go-gl/glfw
+Licensed under the BSD-3-Clause license
+
+Go kit
+github.com/go-kit/kit
+Licensed under the MIT license
+
+package log
+github.com/go-kit/log
+Licensed under the MIT license
+
+logfmt
+github.com/go-logfmt/logfmt
+Licensed under the MIT license
+
+logr, A minimal logging API for Go
+github.com/go-logr/logr
+Licensed under the Apache-2.0 license
+
+gojsonpointer
+github.com/go-openapi/jsonpointer
+Licensed under the Apache-2.0 license
+
+gojsonreference
+github.com/go-openapi/jsonreference
+Licensed under the Apache-2.0 license
+
+OAI object model
+github.com/go-openapi/spec
+Licensed under the Apache-2.0 license
+
+Swag
+github.com/go-openapi/swag
+Licensed under the Apache-2.0 license
+
+stack
+github.com/go-stack/stack
+Licensed under the MIT license
+
+Protocol Buffers for Go with Gadgets
+github.com/gogo/protobuf
+Licensed under the BSD-3-Clause license
+
+glog
+github.com/golang/glog
+Licensed under the Apache-2.0 license
+
+groupcache
+github.com/golang/groupcache
+Licensed under the Apache-2.0 license
+
+gomock
+github.com/golang/mock
+Licensed under the Apache-2.0 license
+
+Go support for Protocol Buffers
+github.com/golang/protobuf
+Licensed under the BSD-3-Clause license
+
+BTree implementation for Go
+github.com/google/btree
+Licensed under the Apache-2.0 license
+
+Package for equality of Go values
+github.com/google/go-cmp
+Licensed under the BSD-3-Clause license
+
+gofuzz
+github.com/google/gofuzz
+Licensed under the Apache-2.0 license
+
+Martian Proxy
+github.com/google/martian
+Licensed under the Apache-2.0 license
+
+pprof
+github.com/google/pprof
+Licensed under the Apache-2.0 license
+
+renameio
+github.com/google/renameio
+Licensed under the Apache-2.0 license
+
+uuid
+github.com/google/uuid
+Licensed under the BSD-3-Clause license
+
+Google API Extensions for Go
+github.com/googleapis/gax-go/v2
+Licensed under the BSD-3-Clause license
+
+gnostic
+github.com/googleapis/gnostic
+Licensed under the Apache-2.0 license
+
+Gorilla WebSocket
+github.com/gorilla/websocket
+Licensed under the BSD-2-Clause license
+
+httpcache
+github.com/gregjones/httpcache
+Licensed under the MIT license
+
+golang-lru
+github.com/hashicorp/golang-lru
+Licensed under the MPL-2.0 license
+
+Go package for tail-ing files
+github.com/hpcloud/tail
+Licensed under the MIT license
+
+demangle
+github.com/ianlancetaylor/demangle
+Licensed under the BSD-3-Clause license
+
+Mergo
+github.com/imdario/mergo
+Licensed under the BSD-3-Clause license
+
+Backoff
+github.com/jpillora/backoff
+Licensed under the MIT license
+
+json-iterator
+github.com/json-iterator/go
+Licensed under the MIT license
+
+go-junit-report
+github.com/jstemmer/go-junit-report
+Licensed under the MIT license
+
+errcheck
+github.com/kisielk/errcheck
+Licensed under the MIT license
+
+gotool
+github.com/kisielk/gotool
+Licensed under the MIT license
+
+Windows Terminal Sequences
+github.com/konsorten/go-windows-terminal-sequences
+Licensed under the MIT license
+
+logfmt
+github.com/kr/logfmt
+Licensed under the MIT license
+
+pretty
+github.com/kr/pretty
+Licensed under the MIT license
+
+pty
+github.com/kr/pty
+Licensed under the MIT license
+
+text
+github.com/kr/text
+Licensed under the MIT license
+
+easyjson
+github.com/mailru/easyjson
+Licensed under the MIT license
+
+golang protobuf extensions
+github.com/matttproud/golang_protobuf_extensions
+Licensed under the Apache-2.0 license with the notice:
+Copyright 2012 Matt T. Proud (matt.proud@gmail.com)
+
+mapstructure
+github.com/mitchellh/mapstructure
+Licensed under the MIT license
+
+SpdyStream
+github.com/moby/spdystream
+Licensed under the Apache-2.0 license with the notice:
+SpdyStream
+Copyright 2014-2021 Docker Inc.
+
+This product includes software developed at
+Docker Inc. (https://www.docker.com/).
+
+concurrent
+github.com/modern-go/concurrent
+Licensed under the Apache-2.0 license
+
+reflect2
+github.com/modern-go/reflect2
+Licensed under the Apache-2.0 license
+
+goautoneg
+github.com/munnerz/goautoneg
+Licensed under the BSD-3-Clause license
+
+Go tracing and monitoring (Prometheus) for net.Conn
+github.com/mwitkow/go-conntrack
+Licensed under the Apache-2.0 license
+
+Data Flow Rate Control
+github.com/mxk/go-flowrate
+Licensed under the BSD-3-Clause license
+
+pretty
+github.com/niemeyer/pretty
+Licensed under the MIT license
+
+Ginkgo
+github.com/onsi/ginkgo
+Licensed under the MIT license
+
+Gomega
+github.com/onsi/gomega
+Licensed under the MIT license
+
+diskv
+github.com/peterbourgon/diskv
+Licensed under the MIT license
+
+errors
+github.com/pkg/errors
+Licensed under the BSD-2-Clause license
+
+go-difflib
+github.com/pmezard/go-difflib
+Licensed under the BSD-3-Clause license
+
+Prometheus Go client library
+github.com/prometheus/client_golang
+Licensed under the Apache-2.0 license with the following notice:
+Prometheus instrumentation library for Go applications
+Copyright 2012-2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+
+The following components are included in this product:
+
+perks - a fork of https://github.com/bmizerany/perks
+https://github.com/beorn7/perks
+Copyright 2013-2015 Blake Mizerany, Björn Rabenstein
+See https://github.com/beorn7/perks/blob/master/README.md for license details.
+
+Go support for Protocol Buffers - Google's data interchange format
+http://github.com/golang/protobuf/
+Copyright 2010 The Go Authors
+See source code for license details.
+
+Support for streaming Protocol Buffer messages for the Go language (golang).
+https://github.com/matttproud/golang_protobuf_extensions
+Copyright 2013 Matt T. Proud
+Licensed under the Apache License, Version 2.0
+
+Prometheus Go client model
+github.com/prometheus/client_model
+Licensed under the Apache-2.0 license with the following notice:
+Data model artifacts for Prometheus.
+Copyright 2012-2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+Common
+github.com/prometheus/common
+Licensed under the Apache-2.0 license with the following notice:
+Common libraries shared by Prometheus Go components.
+Copyright 2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+procfs
+github.com/prometheus/procfs
+Licensed under the Apache-2.0 license with the following notice:
+procfs provides functions to retrieve system, kernel and process
+metrics from the pseudo-filesystem proc.
+
+Copyright 2014-2015 The Prometheus Authors
+
+This product includes software developed at
+SoundCloud Ltd. (http://soundcloud.com/).
+
+go-internal
+github.com/rogpeppe/go-internal
+Licensed under the BSD-3-Clause license
+
+Logrus
+github.com/sirupsen/logrus
+Licensed under the MIT license
+
+AFERO
+github.com/spf13/afero
+Licensed under the Apache-2.0 license
+
+pflag
+github.com/spf13/pflag
+Licensed under the BSD-3-Clause license
+
+Objx
+github.com/stretchr/objx
+Licensed under the MIT license
+
+Testify
+github.com/stretchr/testify
+Licensed under the MIT license
+
+goldmark
+github.com/yuin/goldmark
+Licensed under the MIT license
+
+OpenCensus Libraries for Go
+go.opencensus.io
+Licensed under the Apache-2.0 license
+
+Go Cryptography
+golang.org/x/crypto
+Licensed under the BSD-3-Clause license
+
+exp
+golang.org/x/exp
+Licensed under the BSD-3-Clause license
+
+Go Images
+golang.org/x/image
+Licensed under the BSD-3-Clause license
+
+lint
+golang.org/x/lint
+Licensed under the BSD-3-Clause license
+
+Go support for Mobile devices
+golang.org/x/mobile
+Licensed under the BSD-3-Clause license
+
+mod
+golang.org/x/mod
+Licensed under the BSD-3-Clause license
+
+Go Networking
+golang.org/x/net
+Licensed under the BSD-3-Clause license
+
+OAuth2 for Go
+golang.org/x/oauth2
+Licensed under the BSD-3-Clause license
+
+Go Sync
+golang.org/x/sync
+Licensed under the BSD-3-Clause license
+
+sys
+golang.org/x/sys
+Licensed under the BSD-3-Clause license
+
+Go terminal/console support
+golang.org/x/term
+Licensed under the BSD-3-Clause license
+
+Go Text
+golang.org/x/text
+Licensed under the BSD-3-Clause license
+
+Go Time
+golang.org/x/time
+Licensed under the BSD-3-Clause license
+
+Go Tools
+golang.org/x/tools
+Licensed under the BSD-3-Clause license
+
+xerrors
+golang.org/x/xerrors
+Licensed under the BSD-3-Clause license
+
+Google APIs Client Library for Go
+google.golang.org/api
+Licensed under the BSD-3-Clause license
+
+Go App Engine packages
+google.golang.org/appengine
+Licensed under the Apache-2.0 license
+
+Go generated proto packages
+google.golang.org/genproto
+Licensed under the Apache-2.0 license
+
+gRPC-Go
+google.golang.org/grpc
+Licensed under the Apache-2.0 license with the following notice:
+Copyright 2014 gRPC authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Go support for Protocol Buffers
+google.golang.org/protobuf
+Licensed under the BSD-3-Clause license
+
+Kingpin - A Go (golang) command line and flag parser
+gopkg.in/alecthomas/kingpin.v2
+Licensed under the MIT license
+
+check
+gopkg.in/check.v1
+Licensed under the BSD-3-Clause license
+
+errgo
+gopkg.in/errgo.v2
+Licensed under the BSD-3-Clause license
+
+File system notifications for Go
+gopkg.in/fsnotify.v1
+Licensed under the BSD-3-Clause license
+
+inf
+gopkg.in/inf.v0
+Licensed under the BSD-3-Clause license
+
+lumberjack
+gopkg.in/natefinch/lumberjack.v2
+Licensed under the MIT license
+
+tomb
+gopkg.in/tomb.v1
+Licensed under the BSD-3-Clause license
+
+gopkg.in/yaml.v2
+Licensed under the Apache-2.0 license with the following notice:
+Copyright 2011-2016 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+YAML support for the Go language
+gopkg.in/yaml.v3
+Licensed under the Apache-2.0 license with the following notice:
+Copyright 2011-2016 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+go-tools
+honnef.co/go/tools
+Licensed under the MIT license
+
+api
+k8s.io/api
+Licensed under the Apache-2.0 license
+
+apimachinery
+k8s.io/apimachinery
+Licensed under the Apache-2.0 license
+
+client-go
+k8s.io/client-go
+Licensed under the Apache-2.0 license
+
+gengo
+k8s.io/gengo
+Licensed under the Apache-2.0 license
+
+klog
+k8s.io/klog/v2
+Licensed under the Apache-2.0 license
+
+kube-openapi
+k8s.io/kube-openapi
+Licensed under the Apache-2.0 license
+
+utils
+k8s.io/utils
+Licensed under the Apache-2.0 license
+
+binaryregexp
+rsc.io/binaryregexp
+Licensed under the BSD-3-Clause license
+
+quote
+rsc.io/quote/v3
+Licensed under the BSD-3-Clause license
+
+sampler
+rsc.io/sampler
+Licensed under the BSD-3-Clause license
+
+Structured Merge and Diff
+sigs.k8s.io/structured-merge-diff/v4
+Licensed under the Apache-2.0 license
+
+YAML marshaling and unmarshaling support for Go
+sigs.k8s.io/yaml
+Licensed under the MIT license
+
+
+Licenses:
+MIT License
+Permission is hereby granted, free of charge, to any person obtaining a copy of 
+this software and associated documentation files (the "Software"), to deal in 
+the Software without restriction, including without limitation the rights to 
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies 
+of the Software, and to permit persons to whom the Software is furnished to do 
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all 
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 
+SOFTWARE.
+
+Apache License
+Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+"License" shall mean the terms and conditions for use, reproduction, and 
+distribution as defined by Sections 1 through 9 of this document.
+
+"Licensor" shall mean the copyright owner or entity authorized by the copyright 
+owner that is granting the License.
+
+"Legal Entity" shall mean the union of the acting entity and all other entities 
+that control, are controlled by, or are under common control with that entity. 
+For the purposes of this definition, "control" means (i) the power, direct or 
+indirect, to cause the direction or management of such entity, whether by 
+contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the 
+outstanding shares, or (iii) beneficial ownership of such entity.
+
+"You" (or "Your") shall mean an individual or Legal Entity exercising 
+permissions granted by this License.
+
+"Source" form shall mean the preferred form for making modifications, including 
+but not limited to software source code, documentation source, and 
+configuration files.
+
+"Object" form shall mean any form resulting from mechanical transformation or 
+translation of a Source form, including but not limited to compiled object 
+code, generated documentation, and conversions to other media types.
+
+"Work" shall mean the work of authorship, whether in Source or Object form, 
+made available under the License, as indicated by a copyright notice that is 
+included in or attached to the work (an example is provided in the Appendix 
+below).
+
+"Derivative Works" shall mean any work, whether in Source or Object form, that 
+is based on (or derived from) the Work and for which the editorial revisions, 
+annotations, elaborations, or other modifications represent, as a whole, an 
+original work of authorship. For the purposes of this License, Derivative Works 
+shall not include works that remain separable from, or merely link (or bind by 
+name) to the interfaces of, the Work and Derivative Works thereof.
+
+"Contribution" shall mean any work of authorship, including the original 
+version of the Work and any modifications or additions to that Work or 
+Derivative Works thereof, that is intentionally submitted to Licensor for 
+inclusion in the Work by the copyright owner or by an individual or Legal 
+Entity authorized to submit on behalf of the copyright owner. For the purposes 
+of this definition, "submitted" means any form of electronic, verbal, or 
+written communication sent to the Licensor or its representatives, including 
+but not limited to communication on electronic mailing lists, source code 
+control systems, and issue tracking systems that are managed by, or on behalf 
+of, the Licensor for the purpose of discussing and improving the Work, but 
+excluding communication that is conspicuously marked or otherwise designated in 
+writing by the copyright owner as "Not a Contribution."
+
+"Contributor" shall mean Licensor and any individual or Legal Entity on behalf 
+of whom a Contribution has been received by Licensor and subsequently 
+incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of this 
+License, each Contributor hereby grants to You a perpetual, worldwide, 
+non-exclusive, no-charge, royalty-free, irrevocable copyright license to 
+reproduce, prepare Derivative Works of, publicly display, publicly perform, 
+sublicense, and distribute the Work and such Derivative Works in Source or 
+Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of this 
+License, each Contributor hereby grants to You a perpetual, worldwide, 
+non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this 
+section) patent license to make, have made, use, offer to sell, sell, import, 
+and otherwise transfer the Work, where such license applies only to those 
+patent claims licensable by such Contributor that are necessarily infringed by 
+their Contribution(s) alone or by combination of their Contribution(s) with the 
+Work to which such Contribution(s) was submitted. If You institute patent 
+litigation against any entity (including a cross-claim or counterclaim in a 
+lawsuit) alleging that the Work or a Contribution incorporated within the Work 
+constitutes direct or contributory patent infringement, then any patent 
+licenses granted to You under this License for that Work shall terminate as of 
+the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the Work or 
+Derivative Works thereof in any medium, with or without modifications, and in 
+Source or Object form, provided that You meet the following conditions:
+
+You must give any other recipients of the Work or Derivative Works a copy of 
+this License; and
+You must cause any modified files to carry prominent notices stating that You 
+changed the files; and
+You must retain, in the Source form of any Derivative Works that You 
+distribute, all copyright, patent, trademark, and attribution notices from the 
+Source form of the Work, excluding those notices that do not pertain to any 
+part of the Derivative Works; and
+If the Work includes a "NOTICE" text file as part of its distribution, then any 
+Derivative Works that You distribute must include a readable copy of the 
+attribution notices contained within such NOTICE file, excluding those notices 
+that do not pertain to any part of the Derivative Works, in at least one of the 
+following places: within a NOTICE text file distributed as part of the 
+Derivative Works; within the Source form or documentation, if provided along 
+with the Derivative Works; or, within a display generated by the Derivative 
+Works, if and wherever such third-party notices normally appear. The contents 
+of the NOTICE file are for informational purposes only and do not modify the 
+License. You may add Your own attribution notices within Derivative Works that 
+You distribute, alongside or as an addendum to the NOTICE text from the Work, 
+provided that such additional attribution notices cannot be construed as 
+modifying the License.
+
+You may add Your own copyright statement to Your modifications and may provide 
+additional or different license terms and conditions for use, reproduction, or 
+distribution of Your modifications, or for any such Derivative Works as a 
+whole, provided Your use, reproduction, and distribution of the Work otherwise 
+complies with the conditions stated in this License.
+5. Submission of Contributions. Unless You explicitly state otherwise, any 
+Contribution intentionally submitted for inclusion in the Work by You to the 
+Licensor shall be under the terms and conditions of this License, without any 
+additional terms or conditions. Notwithstanding the above, nothing herein shall 
+supersede or modify the terms of any separate license agreement you may have 
+executed with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade names, 
+trademarks, service marks, or product names of the Licensor, except as required 
+for reasonable and customary use in describing the origin of the Work and 
+reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or agreed to in 
+writing, Licensor provides the Work (and each Contributor provides its 
+Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
+KIND, either express or implied, including, without limitation, any warranties 
+or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 
+PARTICULAR PURPOSE. You are solely responsible for determining the 
+appropriateness of using or redistributing the Work and assume any risks 
+associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory, whether in 
+tort (including negligence), contract, or otherwise, unless required by 
+applicable law (such as deliberate and grossly negligent acts) or agreed to in 
+writing, shall any Contributor be liable to You for damages, including any 
+direct, indirect, special, incidental, or consequential damages of any 
+character arising as a result of this License or out of the use or inability to 
+use the Work (including but not limited to damages for loss of goodwill, work 
+stoppage, computer failure or malfunction, or any and all other commercial 
+damages or losses), even if such Contributor has been advised of the 
+possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing the Work or 
+Derivative Works thereof, You may choose to offer, and charge a fee for, 
+acceptance of support, warranty, indemnity, or other liability obligations 
+and/or rights consistent with this License. However, in accepting such 
+obligations, You may act only on Your own behalf and on Your sole 
+responsibility, not on behalf of any other Contributor, and only if You agree 
+to indemnify, defend, and hold each Contributor harmless for any liability 
+incurred by, or claims asserted against, such Contributor by reason of your 
+accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+BSD-3-Clause License
+Redistribution and use in source and binary forms, with or without 
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this 
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice, 
+this list of conditions and the following disclaimer in the documentation 
+and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its contributors 
+may be used to endorse or promote products derived from this software without 
+specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+BSD-2-Clause License
+Redistribution and use in source and binary forms, with or without 
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this 
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice, 
+this list of conditions and the following disclaimer in the documentation 
+and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ISC License
+Permission to use, copy, modify, and/or distribute this software for any 
+purpose with or without fee is hereby granted, provided that the above 
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH 
+REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND 
+FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, 
+INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM 
+LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR 
+OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR 
+PERFORMANCE OF THIS SOFTWARE.
+
+Mozilla Public License, version 2.0
+1. Definitions
+
+1.1. "Contributor"
+
+     means each individual or legal entity that creates, contributes to the
+     creation of, or owns Covered Software.
+
+1.2. "Contributor Version"
+
+     means the combination of the Contributions of others (if any) used by a
+     Contributor and that particular Contributor's Contribution.
+
+1.3. "Contribution"
+
+     means Covered Software of a particular Contributor.
+
+1.4. "Covered Software"
+
+     means Source Code Form to which the initial Contributor has attached the
+     notice in Exhibit A, the Executable Form of such Source Code Form, and
+     Modifications of such Source Code Form, in each case including portions
+     thereof.
+
+1.5. "Incompatible With Secondary Licenses"
+     means
+
+     a. that the initial Contributor has attached the notice described in
+        Exhibit B to the Covered Software; or
+
+     b. that the Covered Software was made available under the terms of
+        version 1.1 or earlier of the License, but not also under the terms of
+        a Secondary License.
+
+1.6. "Executable Form"
+
+     means any form of the work other than Source Code Form.
+
+1.7. "Larger Work"
+
+     means a work that combines Covered Software with other material, in a
+     separate file or files, that is not Covered Software.
+
+1.8. "License"
+
+     means this document.
+
+1.9. "Licensable"
+
+     means having the right to grant, to the maximum extent possible, whether
+     at the time of the initial grant or subsequently, any and all of the
+     rights conveyed by this License.
+
+1.10. "Modifications"
+
+     means any of the following:
+
+     a. any file in Source Code Form that results from an addition to,
+        deletion from, or modification of the contents of Covered Software; or
+
+     b. any new file in Source Code Form that contains any Covered Software.
+
+1.11. "Patent Claims" of a Contributor
+
+      means any patent claim(s), including without limitation, method,
+      process, and apparatus claims, in any patent Licensable by such
+      Contributor that would be infringed, but for the grant of the License,
+      by the making, using, selling, offering for sale, having made, import,
+      or transfer of either its Contributions or its Contributor Version.
+
+1.12. "Secondary License"
+
+      means either the GNU General Public License, Version 2.0, the GNU Lesser
+      General Public License, Version 2.1, the GNU Affero General Public
+      License, Version 3.0, or any later versions of those licenses.
+
+1.13. "Source Code Form"
+
+      means the form of the work preferred for making modifications.
+
+1.14. "You" (or "Your")
+
+      means an individual or a legal entity exercising rights under this
+      License. For legal entities, "You" includes any entity that controls, is
+      controlled by, or is under common control with You. For purposes of this
+      definition, "control" means (a) the power, direct or indirect, to cause
+      the direction or management of such entity, whether by contract or
+      otherwise, or (b) ownership of more than fifty percent (50%) of the
+      outstanding shares or beneficial ownership of such entity.
+
+
+2. License Grants and Conditions
+
+2.1. Grants
+
+     Each Contributor hereby grants You a world-wide, royalty-free,
+     non-exclusive license:
+
+     a. under intellectual property rights (other than patent or trademark)
+        Licensable by such Contributor to use, reproduce, make available,
+        modify, display, perform, distribute, and otherwise exploit its
+        Contributions, either on an unmodified basis, with Modifications, or
+        as part of a Larger Work; and
+
+     b. under Patent Claims of such Contributor to make, use, sell, offer for
+        sale, have made, import, and otherwise transfer either its
+        Contributions or its Contributor Version.
+
+2.2. Effective Date
+
+     The licenses granted in Section 2.1 with respect to any Contribution
+     become effective for each Contribution on the date the Contributor first
+     distributes such Contribution.
+
+2.3. Limitations on Grant Scope
+
+     The licenses granted in this Section 2 are the only rights granted under
+     this License. No additional rights or licenses will be implied from the
+     distribution or licensing of Covered Software under this License.
+     Notwithstanding Section 2.1(b) above, no patent license is granted by a
+     Contributor:
+
+     a. for any code that a Contributor has removed from Covered Software; or
+
+     b. for infringements caused by: (i) Your and any other third party's
+        modifications of Covered Software, or (ii) the combination of its
+        Contributions with other software (except as part of its Contributor
+        Version); or
+
+     c. under Patent Claims infringed by Covered Software in the absence of
+        its Contributions.
+
+     This License does not grant any rights in the trademarks, service marks,
+     or logos of any Contributor (except as may be necessary to comply with
+     the notice requirements in Section 3.4).
+
+2.4. Subsequent Licenses
+
+     No Contributor makes additional grants as a result of Your choice to
+     distribute the Covered Software under a subsequent version of this
+     License (see Section 10.2) or under the terms of a Secondary License (if
+     permitted under the terms of Section 3.3).
+
+2.5. Representation
+
+     Each Contributor represents that the Contributor believes its
+     Contributions are its original creation(s) or it has sufficient rights to
+     grant the rights to its Contributions conveyed by this License.
+
+2.6. Fair Use
+
+     This License is not intended to limit any rights You have under
+     applicable copyright doctrines of fair use, fair dealing, or other
+     equivalents.
+
+2.7. Conditions
+
+     Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
+     Section 2.1.
+
+
+3. Responsibilities
+
+3.1. Distribution of Source Form
+
+     All distribution of Covered Software in Source Code Form, including any
+     Modifications that You create or to which You contribute, must be under
+     the terms of this License. You must inform recipients that the Source
+     Code Form of the Covered Software is governed by the terms of this
+     License, and how they can obtain a copy of this License. You may not
+     attempt to alter or restrict the recipients' rights in the Source Code
+     Form.
+
+3.2. Distribution of Executable Form
+
+     If You distribute Covered Software in Executable Form then:
+
+     a. such Covered Software must also be made available in Source Code Form,
+        as described in Section 3.1, and You must inform recipients of the
+        Executable Form how they can obtain a copy of such Source Code Form by
+        reasonable means in a timely manner, at a charge no more than the cost
+        of distribution to the recipient; and
+
+     b. You may distribute such Executable Form under the terms of this
+        License, or sublicense it under different terms, provided that the
+        license for the Executable Form does not attempt to limit or alter the
+        recipients' rights in the Source Code Form under this License.
+
+3.3. Distribution of a Larger Work
+
+     You may create and distribute a Larger Work under terms of Your choice,
+     provided that You also comply with the requirements of this License for
+     the Covered Software. If the Larger Work is a combination of Covered
+     Software with a work governed by one or more Secondary Licenses, and the
+     Covered Software is not Incompatible With Secondary Licenses, this
+     License permits You to additionally distribute such Covered Software
+     under the terms of such Secondary License(s), so that the recipient of
+     the Larger Work may, at their option, further distribute the Covered
+     Software under the terms of either this License or such Secondary
+     License(s).
+
+3.4. Notices
+
+     You may not remove or alter the substance of any license notices
+     (including copyright notices, patent notices, disclaimers of warranty, or
+     limitations of liability) contained within the Source Code Form of the
+     Covered Software, except that You may alter any license notices to the
+     extent required to remedy known factual inaccuracies.
+
+3.5. Application of Additional Terms
+
+     You may choose to offer, and to charge a fee for, warranty, support,
+     indemnity or liability obligations to one or more recipients of Covered
+     Software. However, You may do so only on Your own behalf, and not on
+     behalf of any Contributor. You must make it absolutely clear that any
+     such warranty, support, indemnity, or liability obligation is offered by
+     You alone, and You hereby agree to indemnify every Contributor for any
+     liability incurred by such Contributor as a result of warranty, support,
+     indemnity or liability terms You offer. You may include additional
+     disclaimers of warranty and limitations of liability specific to any
+     jurisdiction.
+
+4. Inability to Comply Due to Statute or Regulation
+
+   If it is impossible for You to comply with any of the terms of this License
+   with respect to some or all of the Covered Software due to statute,
+   judicial order, or regulation then You must: (a) comply with the terms of
+   this License to the maximum extent possible; and (b) describe the
+   limitations and the code they affect. Such description must be placed in a
+   text file included with all distributions of the Covered Software under
+   this License. Except to the extent prohibited by statute or regulation,
+   such description must be sufficiently detailed for a recipient of ordinary
+   skill to be able to understand it.
+
+5. Termination
+
+5.1. The rights granted under this License will terminate automatically if You
+     fail to comply with any of its terms. However, if You become compliant,
+     then the rights granted under this License from a particular Contributor
+     are reinstated (a) provisionally, unless and until such Contributor
+     explicitly and finally terminates Your grants, and (b) on an ongoing
+     basis, if such Contributor fails to notify You of the non-compliance by
+     some reasonable means prior to 60 days after You have come back into
+     compliance. Moreover, Your grants from a particular Contributor are
+     reinstated on an ongoing basis if such Contributor notifies You of the
+     non-compliance by some reasonable means, this is the first time You have
+     received notice of non-compliance with this License from such
+     Contributor, and You become compliant prior to 30 days after Your receipt
+     of the notice.
+
+5.2. If You initiate litigation against any entity by asserting a patent
+     infringement claim (excluding declaratory judgment actions,
+     counter-claims, and cross-claims) alleging that a Contributor Version
+     directly or indirectly infringes any patent, then the rights granted to
+     You by any and all Contributors for the Covered Software under Section
+     2.1 of this License shall terminate.
+
+5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
+     license agreements (excluding distributors and resellers) which have been
+     validly granted by You or Your distributors under this License prior to
+     termination shall survive termination.
+
+6. Disclaimer of Warranty
+
+   Covered Software is provided under this License on an "as is" basis,
+   without warranty of any kind, either expressed, implied, or statutory,
+   including, without limitation, warranties that the Covered Software is free
+   of defects, merchantable, fit for a particular purpose or non-infringing.
+   The entire risk as to the quality and performance of the Covered Software
+   is with You. Should any Covered Software prove defective in any respect,
+   You (not any Contributor) assume the cost of any necessary servicing,
+   repair, or correction. This disclaimer of warranty constitutes an essential
+   part of this License. No use of  any Covered Software is authorized under
+   this License except under this disclaimer.
+
+7. Limitation of Liability
+
+   Under no circumstances and under no legal theory, whether tort (including
+   negligence), contract, or otherwise, shall any Contributor, or anyone who
+   distributes Covered Software as permitted above, be liable to You for any
+   direct, indirect, special, incidental, or consequential damages of any
+   character including, without limitation, damages for lost profits, loss of
+   goodwill, work stoppage, computer failure or malfunction, or any and all
+   other commercial damages or losses, even if such party shall have been
+   informed of the possibility of such damages. This limitation of liability
+   shall not apply to liability for death or personal injury resulting from
+   such party's negligence to the extent applicable law prohibits such
+   limitation. Some jurisdictions do not allow the exclusion or limitation of
+   incidental or consequential damages, so this exclusion and limitation may
+   not apply to You.
+
+8. Litigation
+
+   Any litigation relating to this License may be brought only in the courts
+   of a jurisdiction where the defendant maintains its principal place of
+   business and such litigation shall be governed by laws of that
+   jurisdiction, without reference to its conflict-of-law provisions. Nothing
+   in this Section shall prevent a party's ability to bring cross-claims or
+   counter-claims.
+
+9. Miscellaneous
+
+   This License represents the complete agreement concerning the subject
+   matter hereof. If any provision of this License is held to be
+   unenforceable, such provision shall be reformed only to the extent
+   necessary to make it enforceable. Any law or regulation which provides that
+   the language of a contract shall be construed against the drafter shall not
+   be used to construe this License against a Contributor.
+
+
+10. Versions of the License
+
+10.1. New Versions
+
+      Mozilla Foundation is the license steward. Except as provided in Section
+      10.3, no one other than the license steward has the right to modify or
+      publish new versions of this License. Each version will be given a
+      distinguishing version number.
+
+10.2. Effect of New Versions
+
+      You may distribute the Covered Software under the terms of the version
+      of the License under which You originally received the Covered Software,
+      or under the terms of any subsequent version published by the license
+      steward.
+
+10.3. Modified Versions
+
+      If you create software not governed by this License, and you want to
+      create a new license for such software, you may create and use a
+      modified version of this License if you rename the license and remove
+      any references to the name of the license steward (except to note that
+      such modified license differs from this License).
+
+10.4. Distributing Source Code Form that is Incompatible With Secondary
+      Licenses If You choose to distribute Source Code Form that is
+      Incompatible With Secondary Licenses under the terms of this version of
+      the License, the notice described in Exhibit B of this License must be
+      attached.
+
+Exhibit A - Source Code Form License Notice
+
+      This Source Code Form is subject to the
+      terms of the Mozilla Public License, v.
+      2.0. If a copy of the MPL was not
+      distributed with this file, You can
+      obtain one at
+      http://mozilla.org/MPL/2.0/.
+
+If it is not possible or desirable to put the notice in a particular file,
+then You may include the notice in a location (such as a LICENSE file in a
+relevant directory) where a recipient would be likely to look for such a
+notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - "Incompatible With Secondary Licenses" Notice
+
+      This Source Code Form is "Incompatible
+      With Secondary Licenses", as defined by
+      the Mozilla Public License, v. 2.0.
+

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/legal/support/index.html b/legal/support/index.html new file mode 100644 index 00000000..dd635c16 --- /dev/null +++ b/legal/support/index.html @@ -0,0 +1,299 @@ + + + + + + + + + + + + + + + + + + Support - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • LEGAL »
  • +
  • Support
  • +
  • +
  • +
+
+
+
+
+ +

Statement

+

Software components documented on SCOD are generally covered by a valid support contract on the HPE product being used. Terms and conditions may be found in the support contract. Please reach out to your official HPE representative or HPE partner for any uncertainties.

+

CSI Info Metrics Provider support

+

The HPE CSI Info Metrics Provider for Prometheus is supported by HPE when used with HPE storage arrays on valid support contracts. Send email to support@nimblestorage.com to get started with any issue that requires assistance. Engage your HPE representative for other means to contact HPE Storage support directly.

+

Container Storage Providers

+

Each Container Storage Provider (CSP) uses its own official support routes to resolve any issues with the HPE CSI Driver for Kubernetes and the respective CSP.

+

HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider support

+

This software is supported by HPE when used with HPE Nimble Storage arrays on valid support contracts. Please send an email to support@nimblestorage.com to get started with any issue that requires assistance. Engage your HPE representative for other ways to contact Nimble support directly.

+

The HPE Alletra 5000/6000 and Nimble Storage organization has made a commitment to our customers to exert reasonable effort in supporting any industry-standard configuration. We do not limit our customers to only what is explicitly listed on SPOCK or the Validated Configuration Matrix (VCM), which lists tested or verified configurations (what the HPE Alletra 5000/6000 and Nimble Storage organization commonly refers to as "Qualified" configurations). Essentially, this means that we will exert reasonable effort to support any industry-standard configuration up to the point where we find, or become aware of, an issue that requires some other course of action*.

+

Example cases where support may not be possible include:

+
    +
  • Configurations explicitly called out by SPOCK or the VCM as known not to work properly
  • +
  • An OS (legacy or otherwise) that does not support or contain functionality needed by the customer
  • +
  • A vendor that does not or will not support the requested functionality (either through a violation of their Best Practices or the product is End-of-Life/Support with that vendor)
  • +
+

* = In the event that other vendors need to be consulted, the HPE Nimble Support team will not disengage from the Support Action. HPE Nimble Support will continue to partner with the customer and other vendors to find the correct answer to the issue.

+

HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Container Storage Provider support

+

Limited to the HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage Container Storage Provider (CSP) only. Best effort support is available for the CSP for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage with All-inclusive Single or Multi-System software and an active HPE Pointnext support agreement. Since HPE Pointnext support for the CSP is best effort only, other support levels such as Warranty, Foundation Care, Proactive Care, Proactive Care Advanced and Datacenter Care do not apply. Best effort response times are based on local standard business days and working hours. If your location is outside the customary service zone, response time may be longer.

+ + + + + + + + + + + + + + + + + + + + + +
HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Hardware Contract TypePhone Number
Warranty and Foundation Care800-633-3600
Proactive Care (PC)866-211-5211
Datacenter Care (DC)888-751-2149
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/amazon_eks_anywhere/index.html b/partners/amazon_eks_anywhere/index.html new file mode 100644 index 00000000..4233980e --- /dev/null +++ b/partners/amazon_eks_anywhere/index.html @@ -0,0 +1,298 @@ + + + + + + + + + + + + + + + + + + Amazon EKS Anywhere - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Amazon EKS Anywhere
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

Amazon Elastic Kubernetes Service (EKS) Anywhere allows customers to deploy Amazon EKS-D (Amazon Elastic Kubernetes Service Distro) on their private or non-AWS clouds. AWS users familiar with the ecosystem gain the ability to cross clouds and manage their Kubernetes estate in a single pane of glass.

+

This documentation outlines the limitations and considerations for using the HPE CSI Driver for Kubernetes when deployed on EKS-D.

+ +

Limitations

+

These limitations may be added to or lifted in future releases of either Amazon EKS Anywhere or the HPE CSI Driver.

+

Bottlerocket OS

+

The default Linux distribution AWS favors is Bottlerocket OS, a container-optimized distribution. Due to its slim host library and binary surface, Bottlerocket OS does not include the utilities necessary to support SAN storage. This limitation can be tracked in this GitHub issue.

+
+

Note

+

Any other OS that is supported by EKS-A and listed in the Compatibility and Support table is supported by the HPE CSI Driver.

+
+

EKS Anywhere on vSphere

+

Only iSCSI is supported, as the HPE CSI Driver does not support NPIV, which is required for virtual Fibre Channel host bus adapters (HBAs). This limitation is elaborated on in the VMware section on SCOD.

+

Because VSphereMachineConfig VM templates only allow a single vNIC, no multipath redundancy is available to the host. Ensure network fault tolerance is provided to the VM according to VMware best practices. Also keep in mind that the backend storage system needs a data interface in the same subnet, as the HPE CSI Driver will not try to discover targets over routed networks.

+
+

Tip

+

The vSphere CSI Driver and HPE CSI Driver may co-exist in the same cluster but make sure there's only one default StorageClass configured before creating PersistentVolumeClaims. Please see the official Kubernetes documentation on how to change the default StorageClass.
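As a rough sketch of switching defaults with kubectl, using the upstream default-class annotation (the StorageClass names below are hypothetical placeholders):
# Demote the existing default StorageClass (hypothetical name).
kubectl patch storageclass vsphere-csi-default \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Promote the HPE CSI Driver StorageClass (hypothetical name).
kubectl patch storageclass hpe-standard \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'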

+
+

Installation Considerations

+

EKS-D is a CNCF compliant Kubernetes distribution and no special steps are required to deploy the HPE CSI Driver for Kubernetes. It's crucial to ensure that the compute nodes run a supported OS and that the Kubernetes version is supported by the HPE CSI Driver. Check the Compatibility and Support table for more information.

+

Proceed to installation documentation:

+ + +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/canonical/index.html b/partners/canonical/index.html new file mode 100644 index 00000000..b6abb4f6 --- /dev/null +++ b/partners/canonical/index.html @@ -0,0 +1,338 @@ + + + + + + + + + + + + + + + + + + Canonical - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Canonical
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

"Canonical Kubernetes is pure upstream and works on any cloud, from bare metal to public and edge. Deploy single node and multi-node clusters with Charmed Kubernetes and MicroK8s to support container orchestration, from testing to production. Both distributions bring the latest innovations from the Kubernetes community within a week of upstream release, allowing for time to learn, experiment and upskill."1

+
1 = quote from Canonical Kubernetes.
+


HPE supports Ubuntu LTS releases along with recent upstream versions of Kubernetes for the HPE CSI Driver. As long as the CSI driver is installed on a supported host OS with a CNCF certified Kubernetes distribution, the solution is supported.

+

Both Charmed Kubernetes on private cloud and MicroK8s for edge have been field tested with the HPE CSI Driver for Kubernetes by HPE.

+ +

Charmed Kubernetes

+

Charmed Kubernetes is deployed with the Juju orchestration engine. Juju is capable of deploying and managing the full life-cycle of CNCF certified Kubernetes on various infrastructure providers, both private and public. Charmed Kubernetes uses Ubuntu LTS for the node OS.

+

It's most relevant for HPE CSI Driver users when deployed on Canonical MAAS and VMware vSphere.

+

Notes on VMware vSphere

+
    +
  • It's important to keep in mind that only iSCSI is supported with the HPE CSI Driver on vSphere. If Fibre Channel is being used, consider deploying the vSphere CSI Driver instead.
  • +
• When deploying Charmed Kubernetes with Juju, machines may only be deployed with a "primary-network" and an "external-network" option. The "primary-network" is used primarily for the Juju controller but may be dual purposed. In this situation, the machines will end up sharing the subnet between iSCSI data traffic (using a single network path) and application or control-plane traffic. This is sub-optimal from a performance and data availability perspective (only one network path for iSCSI).
  • +
+
+

Note

+

Canonical MAAS has not been formally tested at this time, so no specific guidance can be provided, but the solution is supported by HPE.

+
+

Installing the HPE CSI Driver on Charmed Kubernetes

+

No special considerations need to be taken when installing the HPE CSI Driver on Charmed Kubernetes. It's recommended to use the Helm chart, as sketched below.
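A minimal sketch of such a Helm chart installation; the chart repository URL, chart name and release name are assumptions based on the publicly documented HPE CSI Driver Helm chart and may differ in your environment:
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
helm repo update
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \
  --create-namespace --namespace hpe-storage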

+ +

When the chart is installed, Add an HPE Storage Backend.

+

MicroK8s

+

MicroK8s is an opinionated lightweight fully certified CNCF Kubernetes distribution. It's easy to install and manage.

+

Notes on using MicroK8s with the HPE CSI Driver

+
    +
  • MicroK8s is only supported by the HPE CSI Driver on Ubuntu LTS releases at this time. It will most likely work on other Linux distributions.
  • +
+
+

Important

+

Older versions of MicroK8s did not allow the CSI driver to run privileged Pods, and some tweaking may be needed in the MicroK8s controller-manager. Please use a recent version of MicroK8s and Ubuntu LTS to avoid problems.

+
+

Installing the HPE CSI Driver on MicroK8s

+

As MicroK8s is installed with confinement using snap, the "kubeletRootDir" needs to be configured when installing the Helm chart or Operator. Advanced install with YAML is strongly discouraged.
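If the HPE chart repository has not been added to the Helm client bundled with MicroK8s yet, it can be added first (the repository URL is an assumption based on the publicly documented Helm chart location):
microk8s helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
microk8s helm repo update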

+

Install the Helm chart:

+

microk8s helm install --create-namespace \
+  --set kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet \
+  -n hpe-storage my-hpe-csi-driver hpe-storage/hpe-csi-driver
+

+

Go ahead and Add an HPE Storage Backend.

+
+

Hint

+

When installing the chart on Linux distributions other than Ubuntu LTS, the "kubeletRootDir" will most likely differ.

+
+

Integration Guides

+

HPE and Canonical have partnered to create integration guides with Charmed Kubernetes for the different storage backends.

+ +

These integration guides are also available on ubuntu.com/engage.

+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/cohesity/img/Cohesity-Recovery-Namespace-locationandrename.png b/partners/cohesity/img/Cohesity-Recovery-Namespace-locationandrename.png new file mode 100644 index 00000000..bd89c006 Binary files /dev/null and b/partners/cohesity/img/Cohesity-Recovery-Namespace-locationandrename.png differ diff --git a/partners/cohesity/img/Cohesity_Protection-RunDetails-view.png b/partners/cohesity/img/Cohesity_Protection-RunDetails-view.png new file mode 100644 index 00000000..bbf86ca9 Binary files /dev/null and b/partners/cohesity/img/Cohesity_Protection-RunDetails-view.png differ diff --git a/partners/cohesity/img/overview.png b/partners/cohesity/img/overview.png new file mode 100644 index 00000000..4a8e3379 Binary files /dev/null and b/partners/cohesity/img/overview.png differ diff --git a/partners/cohesity/img/register_k8s.png b/partners/cohesity/img/register_k8s.png new file mode 100644 index 00000000..2c2b321f Binary files /dev/null and b/partners/cohesity/img/register_k8s.png differ diff --git a/partners/cohesity/index.html b/partners/cohesity/index.html new file mode 100644 index 00000000..c255726f --- /dev/null +++ b/partners/cohesity/index.html @@ -0,0 +1,366 @@ + + + + + + + + + + + + + + + + + + Cohesity - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Cohesity
  • +
  • +
  • +
+
+
+
+
+ +

Cohesity

+

Hewlett Packard Enterprise and Cohesity offer an integrated approach to solve customer problems commonly found with containerized workloads. HPE Alletra, leveraging the HPE CSI Driver for Kubernetes, together with Cohesity's comprehensive data protection capabilities, empowers organizations to overcome challenges associated with containerized environments.

+

This guide will demonstrate the steps to integrate Cohesity into a Kubernetes cluster and how to configure a protection policy to back up an application Namespace, a Kubernetes resource type. It proceeds to show that a backup can be restored to a new Namespace, useful for providing a test/development environment without affecting the original application Namespace.

+

External HPE Resources:

+
    +
  • Data Protection for Kubernetes using Cohesity with HPE Alletra (PDF)
  • +
  • Protect your containerized applications with HPE and Cohesity (Blog)
  • +
+

Cohesity solutions are available through HPE Complete.

+ +

Solution Overview Diagram

+



+

Environment and Preparations

+

The HPE CSI Driver has been validated on Cohesity DataProtect v7.0u1. Check that the HPE CSI Driver and Cohesity software versions are compatible with the Kubernetes version being used.

+

This environment assumes the HPE CSI Driver for Kubernetes is deployed in the Kubernetes cluster, an Alletra storage backend has been configured, and a default StorageClass has been defined.

+

Review Cohesity's "Plan and Prepare" documentation to accomplish the following:

+
    +
  • Firewall considerations.
  • +
  • Kubernetes ServiceAccount with cluster-admin permissions.
  • +
• Extract the bearer token from the above ServiceAccount (a sketch of one way to do this follows this list).
  • +
  • Obtain Cohesity Datamover (download) and push to a local repository or public registry.
  • +
+
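A minimal sketch of extracting a bearer token for such a ServiceAccount, assuming a hypothetical account named cohesity-admin in kube-system; follow the Cohesity documentation for the authoritative procedure:
# On Kubernetes 1.24 and newer a token can be requested on demand.
kubectl create token cohesity-admin -n kube-system
# On older clusters, read the token from the auto-created ServiceAccount Secret instead:
# kubectl get secret $(kubectl get sa cohesity-admin -n kube-system -o jsonpath='{.secrets[0].name}') \
#   -n kube-system -o jsonpath='{.data.token}' | base64 -d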
+

Note

+

Cohesity only supports the backup of user-created application Namespaces and does not support the backup of infrastructure Namespaces such as kube-system, etc.

+
+

Integrate Cohesity into Kubernetes

+

Review Cohesity's "Register and Manage Kubernetes Cluster" documentation to integrate Cohesity into your Kubernetes cluster. Below is an example screenshot of the Register Kubernetes Source dialog:

+


+

After the integration wizard is submitted, see the Post-Registration task documentation to verify Velero and datamover pod availability.

+
+

Note

+

The latest versions of Kubernetes, although present in the Cohesity support matrix, may still require an override from Cohesity support.

+
+

Configure Namespace-level Application Backup

+

A Namespace containing a WordPress application will be protected in this example. It contains a variety of Kubernetes resources and objects including:

+
    +
  • Configuration and Storage: PersistentVolumeClaim, ConfigMap, and Secret
  • +
  • Service and ServiceAccount
  • +
  • Workloads: Deployment, ReplicaSet and StatefulSet
  • +
+

Review the Protect Kubernetes Namespaces documentation from Cohesity. Create a new protection policy or use an available default policy. Additionally, see the Manage the Kubernetes Backup Configuration documentation to add/remove Namespaces to a protection group, adjust Auto Protect settings, modify the Protection Policy, and trigger an on-demand run.

+

See the screenshot below for an example backup Run details view.

+

+

Demo: Clone a Test/Development Environment by Restoring a Backup

+

Review the Cohesity documentation for Recover Kubernetes Cluster. Cohesity notes, at time of writing, that granular-level recovery of Namespace resource types is not supported. Consider the following when defining a recovery operation:

+
    +
  • Select a protection group or individual Namespace. If a protection group is chosen, multiple Namespace resources could be affected on recovery.
  • +
  • If any previously backed up objects exist in the destination, a restore operation will not overwrite them.
  • +
  • For applications deployed by Helm chart, recovery operations applied to new clusters or Namespaces will not be managed with Helm.
  • +
  • If an alternate Kubernetes cluster is chosen (New Location in the UI), be sure that the cluster has access to the same Kubernetes StorageClass as the backup’s source cluster.
  • +
+
+

Note

+

Protection groups and individual Namespace resources appear in the same list. Available Namespaces are denoted with the Kubernetes ship wheel icon.

+
+

For this example, a WordPress Namespace backup will be restored to the source Kubernetes cluster but under a new Namespace with a "debug-" prefix (see below). This application can run alongside and separately from the parent application.

+

+

After the recovery process is complete we can review and compare the associated objects between the two Namespaces. In particular, names are similar but discrete PersistentVolumes, IPs and Services exist for each Namespace.

+

$ diff <(kubectl get all,pvc -n wordpress-orig) <(kubectl get all,pvc -n debug-wordpress-orig)
+2,3c2,3
+- pod/wordpress-577cc47468-mbg2n   1/1     Running   0          171m
+- pod/wordpress-mariadb-0          1/1     Running   0          171m
+---
++ pod/wordpress-577cc47468-mbg2n   1/1     Running   0          57m
++ pod/wordpress-mariadb-0          1/1     Running   0          57m
+6,7c6,7
+- service/wordpress           LoadBalancer   10.98.47.101    <pending>     80:30657/TCP,443:30290/TCP   171m
+- service/wordpress-mariadb   ClusterIP      10.104.190.60   <none>        3306/TCP                     171m
+---
++ service/wordpress           LoadBalancer   10.109.247.83   <pending>     80:31425/TCP,443:31002/TCP   57m
++ service/wordpress-mariadb   ClusterIP      10.101.77.139   <none>        3306/TCP                     57m
+10c10
+- deployment.apps/wordpress   1/1     1            1           171m
+---
++ deployment.apps/wordpress   1/1     1            1           57m
+13c13
+- replicaset.apps/wordpress-577cc47468   1         1         1       171m
+---
++ replicaset.apps/wordpress-577cc47468   1         1         1       57m
+16c16
+- statefulset.apps/wordpress-mariadb   1/1     171m
+---
++ statefulset.apps/wordpress-mariadb   1/1     57m
+19,20c19,20
+- persistentvolumeclaim/data-wordpress-mariadb-0   Bound    pvc-4b3222c3-f71f-427f-847b-d6d0c5e019a4   8Gi        RWO            a9060-std      171m
+- persistentvolumeclaim/wordpress                  Bound    pvc-72158104-06ae-4547-9f80-d551abd7cda5   10Gi       RWO            a9060-std      171m
+---
++ persistentvolumeclaim/data-wordpress-mariadb-0   Bound    pvc-306164a8-3334-48ac-bdee-273ac9a97403   8Gi        RWO            a9060-std      59m
++ persistentvolumeclaim/wordpress                  Bound    pvc-17a55296-d0fb-44c2-968b-09c6ffc4abc9   10Gi       RWO            a9060-std      59m
+

+
+

Note

+

Above links are external to docs.cohesity.com and require a MyCohesity account.

+
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/commvault/index.html b/partners/commvault/index.html new file mode 100644 index 00000000..585feb7a --- /dev/null +++ b/partners/commvault/index.html @@ -0,0 +1,320 @@ + + + + + + + + + + + + + + + + + + Commvault - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Commvault
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

The Commvault intelligent data management platform provides Kubernetes-native protection, application mobility, and disaster recovery for containerized applications. Combined with Commvault Command Center™, Commvault provides enterprise IT operations and DevOps teams an easy-to-use, self-service dashboard for managing the protection of Kubernetes.

+

HPE and Commvault collaborate continuously to deliver assets relevant to our joint customers.

+
    +
  • Data protection for Kubernetes using Commvault Backup & Recovery, HPE Apollo Servers, and HPE CSI Driver for Kubernetes (PDF)
  • +
  • Data Protection for Kubernetes using Commvault Backup & Recovery with HPE Alletra (YouTube)
  • +
+

Learn more about HPE and Commvault's partnership here: https://www.commvault.com/supported-technologies/hpe.

+ +

Pre-requisites

+

The HPE CSI Driver has been validated on Commvault Complete Backup and Recovery 2022E. Check that the HPE CSI Driver and Commvault software versions are compatible with the Kubernetes version being used.

+
Permissions
+

This guide assumes you have administrative access to Commvault Command Center and administrator access to a Kubernetes cluster with kubectl. Refer to the Creating a Service Account for Kubernetes Authentication documentation to define a serviceaccount and clusterrolebinding with cluster-admin permissions.
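A minimal sketch of creating such a ServiceAccount and ClusterRoleBinding with kubectl; the account name and namespace are placeholders, and the Commvault documentation remains the authoritative reference:
kubectl create serviceaccount commvault-admin -n kube-system
kubectl create clusterrolebinding commvault-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:commvault-admin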

+
Cluster requirements
+

The cluster needs to be running Kubernetes 1.22 or later and have the CSI snapshot CustomResourceDefinitions (CRDs) and the CSI external snapshotter deployed. Follow the guides available on SCOD to:

+ +
+

Note

+

The rest of this guide assumes the default VolumeSnapshotClass and VolumeSnapshots are functional within the cluster with a compatible Kubernetes snapshot API level between the CSI driver and Commvault.
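As a quick sanity check that the snapshot machinery is in place, the standard external-snapshotter CRDs can be listed (the CRD names below are the upstream defaults):
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io
kubectl get volumesnapshotclass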

+
+

Configure Kubernetes protection

+

To configure data protection for Kubernetes, follow the official Commvault documentation and ensure the version matches the software version in your environment. As a summary, complete the following:

+ +

Backup and Restores

+

To perform snapshot and restore operations through Commvault using the HPE CSI Driver for Kubernetes, please refer to the Commvault documentation.

+ +
+

Note

+

Above links are external to documentation.commvault.com.

+
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/index.html b/partners/index.html new file mode 100644 index 00000000..eeecea24 --- /dev/null +++ b/partners/index.html @@ -0,0 +1,252 @@ + + + + + + + + + + + + + + + + + + Partner Ecosystems - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Partner Ecosystems
  • +
  • +
  • +
+
+
+
+
+ +

Partner Ecosystems

+ +
+

Tip

+

The HPE CSI Driver for Kubernetes will work on any CNCF certified Kubernetes distribution. Verify compute node OS and Kubernetes version in the Compatibility and Support table.

+
+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/partners/kasten/index.html b/partners/kasten/index.html new file mode 100644 index 00000000..2cb42a27 --- /dev/null +++ b/partners/kasten/index.html @@ -0,0 +1,315 @@ + + + + + + + + + + + + + + + + + + Kasten by Veeam - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Kasten by Veeam
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

Kasten K10 by Veeam is a data management platform designed to run natively on Kubernetes to protect applications. K10 integrates seamlessly with the HPE CSI Driver for Kubernetes thanks to the native support for CSI VolumeSnapshots and VolumeSnapshotClasses.

+

HPE and Veeam have a long-standing alliance. Read about the extended partnership with Kasten in this blog post.

+
+

Tip

+

All the steps below are captured in a tutorial available on YouTube and in the SCOD Video Gallery.

+
+ +

Prerequisites

+

The cluster needs to be running Kubernetes 1.17 or later and have the CSI snapshot CustomResourceDefinitions (CRDs) and the CSI snapshot-controller deployed. Follow the guides available on SCOD to:

+ +
+

Note

+

The rest of this guide assumes a default VolumeSnapshotClass and VolumeSnapshots are functional on the cluster.

+
+

Annotate the VolumeSnapshotClass

+

In order to allow K10 to perform snapshots and restores using the VolumeSnapshotClass, it needs an annotation.

+

Assuming we have a default VolumeSnapshotClass named "hpe-snapshot":

+

kubectl annotate volumesnapshotclass hpe-snapshot k10.kasten.io/is-snapshot-class=true
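If the VolumeSnapshotClass should also act as the cluster default, the upstream default-class annotation can be added the same way (shown here as an optional sketch):
kubectl annotate volumesnapshotclass hpe-snapshot \
  snapshot.storage.kubernetes.io/is-default-class=true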
+

+

Installing Kasten K10

+

Kasten K10 installs in its own namespace using a Helm chart. It also assumes there's a performant default StorageClass on the cluster to serve the various PersistentVolumeClaims needed for the controllers.
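A minimal sketch of a typical installation, assuming the Kasten Helm repository and chart name as publicly documented by Kasten; refer to docs.kasten.io for the current procedure:
helm repo add kasten https://charts.kasten.io
helm repo update
helm install k10 kasten/k10 --create-namespace --namespace kasten-io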

+ +
+

Note

+

Above links are external to docs.kasten.io.

+
+

Snapshots and restores

+

Kasten K10 provides the user with a graphical interface and dashboard to schedule and perform data management operations. There's also an API that can be manipulated with kubectl using CRDs.

+

To perform snapshot and restore operations through Kasten K10 using the HPE CSI Driver for Kubernetes, please refer to the Kasten K10 documentation.

+ +
+

Note

+

Above links are external to docs.kasten.io.

+
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/mirantis/index.html b/partners/mirantis/index.html new file mode 100644 index 00000000..e02640c8 --- /dev/null +++ b/partners/mirantis/index.html @@ -0,0 +1,366 @@ + + + + + + + + + + + + + + + + + + Mirantis - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Mirantis
  • +
  • +
  • +
+
+
+
+
+ +

Introduction

+

Mirantis Kubernetes Engine (MKE) is the successor of the Universal Control Plane part of Docker Enterprise Edition (Docker EE). The HPE CSI Driver for Kubernetes allows users to provision persistent storage for Kubernetes workloads running on MKE. See the note below on Docker Swarm for workloads deployed outside of Kubernetes.

+ +

Compatibility Chart

+

Mirantis and HPE perform testing and qualification as needed for each release of MKE or the HPE CSI Driver. Any deviations in the installation procedures will be documented here.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
MKE Version | HPE CSI Driver | Status     | Installation Notes
3.7         | 2.4.0          | Supported  | Helm chart notes
3.6         | 2.2.0          | Supported  | Helm chart notes
3.4, 3.5    | -              | Untested   | -
3.3         | 2.0.0          | Deprecated | Advanced Install notes for MKE 3.3
+
+

Seealso

+

Make sure you understand the limitations and the lack of Docker Swarm support.

+
+

Helm Chart Install

+

With MKE 3.6 and onwards, it's recommended to use the HPE CSI Driver for Kubernetes Helm chart. There are no known caveats or workarounds at this time.

+ +
+

Important

+

Always ensure the underlying Kubernetes version of the MKE release and the worker node host OS conform to the latest compatibility and support table.

+
+

Mirantis Kubernetes Engine 3.3

+

At the time of the MKE 3.3 release, neither the HPE CSI Driver Helm chart nor the operator will install correctly.

+

Prerequisites

+

The MKE managers and workers need to run a supported host OS as outlined for the particular version of the HPE CSI Driver found in the release tables. Also verify that the HPE CSI Driver supports the Kubernetes version used by MKE (see below).

+

Steps to install

+

MKE admins need to familiarize themselves with the advanced install method of the CSI driver. Before the installation begins, make sure an account with administrative privileges is being used to deploy the driver. Also determine the actual Kubernetes version MKE is using.

+

kubectl version --short
+Client Version: v1.19.4
+Server Version: v1.18.10-mirantis-1
+

+

In this particular example, Kubernetes 1.18 is being used. Follow the steps for 1.18 highlighted within the advanced install section of the deployment documentation.

+
    +
  • Step 1 → Install the Linux node IO settings ConfigMap.
  • +
  • Step 2 → Determine which backend being used (Nimble or Primera/3PAR) and deploy the corresponding CSP manifest.
  • +
  • Step 3 → Deploy the HPE CSI Driver manifests for the Kubernetes version being used.
  • +
+

Next, add a supported HPE backend and create a StorageClass.

+

Learn more about using the CSI objects in the comprehensive overview. Also make sure to familiarize yourself with the particular features and capabilities of the backend being used.

+ +

Docker Swarm

+

Provisioning Docker Volumes for Docker Swarm workloads from an HPE primary storage backend is deprecated.

+

Limitations

+
    +
  • HPE CSI Driver does not support Windows workers.
  • +
  • HPE CSI Driver NFS Server Provisioner is not supported on MKE.
  • +
+ +
+
+ +
+ + + + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + diff --git a/partners/rancher_labs/index.html b/partners/rancher_labs/index.html new file mode 100644 index 00000000..c6521e4b --- /dev/null +++ b/partners/rancher_labs/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml b/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml new file mode 100644 index 00000000..79adba0f --- /dev/null +++ b/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml @@ -0,0 +1,103 @@ +--- +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + name: hpe-csi-controller-scc +allowHostDirVolumePlugin: true +allowHostIPC: true +allowHostNetwork: true +allowHostPID: true +allowHostPorts: true +readOnlyRootFilesystem: true +requiredDropCapabilities: [] +runAsUser: + type: RunAsAny +seLinuxContext: + type: RunAsAny +users: +- system:serviceaccount:hpe-storage:hpe-csi-controller-sa +volumes: +- hostPath +- emptyDir +- projected +--- +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + name: hpe-csi-node-scc +allowHostDirVolumePlugin: true +allowHostIPC: true +allowHostNetwork: true +allowHostPID: true +allowHostPorts: true +allowPrivilegeEscalation: true +allowPrivilegedContainer: true +allowedCapabilities: +- SYS_ADMIN +defaultAddCapabilities: [] +fsGroup: + type: RunAsAny +groups: [] +priority: +readOnlyRootFilesystem: false +requiredDropCapabilities: [] +runAsUser: + type: RunAsAny +seLinuxContext: + type: RunAsAny +supplementalGroups: + type: RunAsAny +users: +- system:serviceaccount:hpe-storage:hpe-csi-node-sa +volumes: +- emptyDir +- hostPath +- projected +- configMap +--- +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + name: hpe-csi-csp-scc +allowHostDirVolumePlugin: true +readOnlyRootFilesystem: false +runAsUser: + type: RunAsAny +seLinuxContext: + type: RunAsAny +#supplementalGroups: +# type: RunAsAny +users: +- system:serviceaccount:hpe-storage:hpe-csp-sa +volumes: +- hostPath +- emptyDir +- projected +--- +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + name: hpe-csi-nfs-scc +allowPrivilegedContainer: true +allowPrivilegeEscalation: true +allowedCapabilities: +- SYS_ADMIN +- DAC_READ_SEARCH +defaultAddCapabilities: [] +fsGroup: + type: RunAsAny +groups: [] +readOnlyRootFilesystem: false +requiredDropCapabilities: [] +runAsUser: + type: RunAsAny +seLinuxContext: + type: RunAsAny +supplementalGroups: + type: RunAsAny +users: +- system:serviceaccount:hpe-nfs:hpe-csi-nfs-sa +volumes: +- persistentVolumeClaim +- configMap +- projected diff --git a/partners/redhat_openshift/img/redhat-certified.png b/partners/redhat_openshift/img/redhat-certified.png new file mode 100644 index 00000000..679ebe8c Binary files /dev/null and b/partners/redhat_openshift/img/redhat-certified.png differ diff --git a/partners/redhat_openshift/img/webcon-1.png b/partners/redhat_openshift/img/webcon-1.png new file mode 100644 index 00000000..2f86e790 Binary files /dev/null and b/partners/redhat_openshift/img/webcon-1.png differ diff --git a/partners/redhat_openshift/img/webcon-2.png b/partners/redhat_openshift/img/webcon-2.png new file mode 100644 index 00000000..06ebc51e Binary files /dev/null and b/partners/redhat_openshift/img/webcon-2.png differ diff --git a/partners/redhat_openshift/img/webcon-3-1.png b/partners/redhat_openshift/img/webcon-3-1.png new file mode 100644 index 00000000..e070ac67 Binary files /dev/null and 
b/partners/redhat_openshift/img/webcon-3-1.png differ diff --git a/partners/redhat_openshift/img/webcon-3.png b/partners/redhat_openshift/img/webcon-3.png new file mode 100644 index 00000000..4cadc3ab Binary files /dev/null and b/partners/redhat_openshift/img/webcon-3.png differ diff --git a/partners/redhat_openshift/img/webcon-4.png b/partners/redhat_openshift/img/webcon-4.png new file mode 100644 index 00000000..0c519f21 Binary files /dev/null and b/partners/redhat_openshift/img/webcon-4.png differ diff --git a/partners/redhat_openshift/img/webcon-5.png b/partners/redhat_openshift/img/webcon-5.png new file mode 100644 index 00000000..33d8cd22 Binary files /dev/null and b/partners/redhat_openshift/img/webcon-5.png differ diff --git a/partners/redhat_openshift/img/webcon-6.png b/partners/redhat_openshift/img/webcon-6.png new file mode 100644 index 00000000..343e685b Binary files /dev/null and b/partners/redhat_openshift/img/webcon-6.png differ diff --git a/partners/redhat_openshift/img/webcon-7.png b/partners/redhat_openshift/img/webcon-7.png new file mode 100644 index 00000000..5150d394 Binary files /dev/null and b/partners/redhat_openshift/img/webcon-7.png differ diff --git a/partners/redhat_openshift/index.html b/partners/redhat_openshift/index.html new file mode 100644 index 00000000..b56ff0f1 --- /dev/null +++ b/partners/redhat_openshift/index.html @@ -0,0 +1,779 @@ + + + + + + + + + + + + + + + + + + Red Hat OpenShift - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Red Hat OpenShift
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

+HPE and Red Hat have a long-standing partnership to provide jointly supported software, platforms and services with the absolute best customer experience in the industry.

+

Red Hat OpenShift uses open source Kubernetes and various other components to deliver a PaaS experience that benefits both developers and operations. This packaged experience differs slightly on how you would deploy and use the HPE volume drivers and this page serves as the authoritative source for all things HPE primary storage and Red Hat OpenShift.

+ +

OpenShift 4

+

Software deployed on OpenShift 4 follows the Operator pattern. CSI drivers are no exception.

+

Certified combinations

+

Software delivered through the HPE and Red Hat partnership follows a rigorous certification process, and only the combinations listed as "Certified" in the table below are qualified.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Status    | Red Hat OpenShift | HPE CSI Operator            | Container Storage Providers
Certified | 4.16 EUS²         | 2.5.1                       | All
Certified | 4.15              | 2.4.1, 2.4.2, 2.5.1         | All
Certified | 4.14 EUS²         | 2.4.0, 2.4.1, 2.4.2, 2.5.1  | All
Certified | 4.13              | 2.4.0, 2.4.1, 2.4.2         | All
Certified | 4.12 EUS²         | 2.3.0, 2.4.0, 2.4.1, 2.4.2  | All
EOL¹      | 4.11              | 2.3.0                       | All
EOL¹      | 4.10 EUS²         | 2.2.1, 2.3.0                | All
+

1 = End of life support per Red Hat OpenShift Life Cycle Policy.
+2 = Red Hat OpenShift Extended Update Support.

+ + +

Check the table above periodically for future releases.

+
+

Pointers

+
    +
  • Other combinations may work but will not be supported.
  • +
  • Both Red Hat Enterprise Linux and Red Hat CoreOS worker nodes are supported.
  • +
  • Instructions on this page only reflect the current stable version of the HPE CSI Operator and OpenShift.
  • +
  • OpenShift Virtualization OS images are only supported on PVCs using "RWX" with volumeMode: Block. See below for more details.
  • +
+
+

Security model

+

By default, OpenShift prevents containers from running as root. Containers are run using an arbitrarily assigned user ID. Due to these security restrictions, containers that run on Docker and Kubernetes might not run successfully on Red Hat OpenShift without modification.

+

Users deploying applications that require persistent storage (i.e. through the HPE CSI Driver) will need the appropriate permissions and Security Context Constraints (SCC) to be able to request and manage storage through OpenShift. Modifying container security to work with OpenShift is outside the scope of this document.

+

For more information on OpenShift security, see Managing security context constraints.

+
+

Note

+

If you run into issues writing to persistent volumes provisioned by the HPE CSI Driver under a restricted SCC, add the fsMode: "0770" parameter to the StorageClass with RWO claims or fsMode: "0777" for RWX claims.

+
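For reference, a minimal sketch of how the fsMode parameter is carried in the StorageClass (all other parameters stay as documented for the backend in use; only the added line is shown):

...
parameters:
  fsMode: "0770"
...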
+

Limitations

+

Since the CSI Operator only provides "Basic Install" capabilities, the following limitations apply:

+
    +
  • The ConfigMap "hpe-linux-config" that controls host configuration is immutable
  • +
  • The NFS Server Provisioner can not be used with Operators deploying PersistentVolumeClaims as part of the installation. See #295 on GitHub.
  • +
  • Deploying the NFS Server Provisioner to a Namespace other than "hpe-nfs" requires a separate SCC applied to the Namespace. See NFS Server Provisioner Considerations below.
  • +
+

Deployment

+

The HPE CSI Operator for Kubernetes needs to be installed through the interfaces provided by Red Hat. Do not follow the instructions found on OperatorHub.io.

+
+

Tip

+

There's a tutorial available on YouTube accessible through the Video Gallery on how to install and use the HPE CSI Operator on Red Hat OpenShift.

+
+

Upgrading

+

In situations where the operator needs to be upgraded, follow the prerequisite steps in the Helm chart on Artifact Hub.

+ +
+

Automatic Updates

+

Do not under any circumstance enable "Automatic Updates" for the HPE CSI Operator for Kubernetes.

+
+

Once the steps have been followed for the particular version transition:

+
    +
  • Uninstall the HPECSIDriver instance
  • +
  • Delete the "hpecsidrivers.storage.hpe.com" CRD
    : + oc delete crd/hpecsidrivers.storage.hpe.com
  • +
  • Uninstall the HPE CSI Operator for Kubernetes
  • +
  • Proceed to installation through the OpenShift Web Console or OpenShift CLI
  • +
  • Reapply the SCC to ensure there haven't been any changes.
  • +
+
+

Good to know

+

Deleting the HPECSIDriver instance and uninstalling the CSI Operator does not affect any running workloads, PersistentVolumeClaims, StorageClasses or other API resources created by the CSI Operator. In-flight operations and new requests will be retried once the new HPECSIDriver has been instantiated.

+
+

Prerequisites

+

The HPE CSI Driver needs to run in privileged mode, needs access to host ports and the host network, and must be able to mount hostPath volumes. Hence, before deploying the HPE CSI Operator on OpenShift, please create the following SecurityContextConstraints (SCC) to allow the CSI driver to run with these privileges.

+

oc new-project hpe-storage --display-name="HPE CSI Driver for Kubernetes"
+

+
+

Important

+

The rest of this implementation guide assumes the default "hpe-storage" Namespace. If a different Namespace is desired, update the ServiceAccount Namespace in the SCC below.

+
+
+

Deploy or download the SCC:

+

oc apply -f https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml
+securitycontextconstraints.security.openshift.io/hpe-csi-controller-scc created
+securitycontextconstraints.security.openshift.io/hpe-csi-node-scc created
+securitycontextconstraints.security.openshift.io/hpe-csi-csp-scc created
+securitycontextconstraints.security.openshift.io/hpe-csi-nfs-scc created
+

+

OpenShift web console

+

Once the SCC has been applied to the project, login to the OpenShift web console as kube:admin and navigate to Operators -> OperatorHub.

+

Search for HPE +Search for 'HPE CSI' in the search field and select the non-marketplace version.

+

Click Install +Click 'Install'.

+
+

Note

+

The latest supported HPE CSI Operator on OpenShift 4.14 is 2.4.2.

+
+

Click Install +Select the Namespace where the SCC was applied, select 'Manual' Update Approval, click 'Install'.

+

Click Approve +Click 'Approve' to finalize installation of the Operator

+

Operator installed +The HPE CSI Operator is now installed, select 'View Operator'.

+

Create a new instance +Click 'Create Instance'.

+

Configure instance +Normally, no customizations are needed, scroll all the way down and click 'Create'.

+

By navigating to the Developer view, it should now be possible to inspect the CSI driver and Operator topology.

+

Operator Topology

+

The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass.

+

See Caveats below for information on creating StorageClasses in Red Hat OpenShift.

+

OpenShift CLI

+

This provides an example Operator deployment using oc. If you want to use the web console, proceed to the previous section.

+

It's assumed the SCC has been applied to the project and that you have kube:admin privileges. As an example, we'll deploy to the hpe-storage project as described in the previous steps.

+

First, an OperatorGroup needs to be created.

+

apiVersion: operators.coreos.com/v1
+kind: OperatorGroup
+metadata:
+  name: hpe-csi-driver-for-kubernetes
+  namespace: hpe-storage
+spec:
+  targetNamespaces:
+  - hpe-storage
+

+

Next, create a Subscription to the Operator.

+

apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: hpe-csi-operator
+  namespace: hpe-storage
+spec:
+  channel: stable
+  installPlanApproval: Manual
+  name: hpe-csi-operator
+  source: certified-operators
+  sourceNamespace: openshift-marketplace
+

+

Next, approve the installation.

+

oc -n hpe-storage patch $(oc get installplans -n hpe-storage -o name) -p '{"spec":{"approved":true}}' --type merge
+

+

The Operator will now be installed on the OpenShift cluster. Before instantiating a CSI driver, watch the roll-out of the Operator.

+

oc rollout status deploy/hpe-csi-driver-operator -n hpe-storage
+Waiting for deployment "hpe-csi-driver-operator" rollout to finish: 0 of 1 updated replicas are available...
+deployment "hpe-csi-driver-operator" successfully rolled out
+

+

The next step is to create an HPECSIDriver object.

+
# oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+  name: hpecsidriver-sample
+spec:
+  # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+  controller:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    resources:
+      limits:
+        cpu: 2000m
+        memory: 1Gi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    tolerations: []
+  csp:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    resources:
+      limits:
+        cpu: 2000m
+        memory: 1Gi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    tolerations: []
+  disable:
+    alletra6000: false
+    alletra9000: false
+    alletraStorageMP: false
+    nimble: false
+    primera: false
+  disableHostDeletion: false
+  disableNodeConfiguration: false
+  disableNodeConformance: false
+  disableNodeGetVolumeStats: false
+  disableNodeMonitor: false
+  imagePullPolicy: IfNotPresent
+  images:
+    csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
+    csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0
+    csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7
+    csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0
+    csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
+    csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
+    csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
+    csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
+    csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6
+    csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6
+    csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6
+    nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5
+    nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0
+    primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0
+  iscsi:
+    chapSecretName: ""
+  kubeletRootDir: /var/lib/kubelet
+  logLevel: info
+  node:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    resources:
+      limits:
+        cpu: 2000m
+        memory: 1Gi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    tolerations: []
+
# oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+  name: hpecsidriver-sample
+spec:
+  # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+  controller:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  csp:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  disable:
+    alletra6000: false
+    alletra9000: false
+    alletraStorageMP: false
+    nimble: false
+    primera: false
+  disableNodeConfiguration: false
+  disableNodeConformance: false
+  disableNodeGetVolumeStats: false
+  imagePullPolicy: IfNotPresent
+  iscsi:
+    chapPassword: ""
+    chapUser: ""
+  kubeletRootDir: /var/lib/kubelet/
+  logLevel: info
+  node:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  registry: quay.io
+
+
+
# oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml
+apiVersion: storage.hpe.com/v1
+kind: HPECSIDriver
+metadata:
+  name: hpecsidriver-sample
+spec:
+  # Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
+  controller:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  csp:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  disable:
+    alletra6000: false
+    alletra9000: false
+    alletraStorageMP: false
+    nimble: false
+    primera: false
+  disableNodeConfiguration: false
+  disableNodeConformance: false
+  disableNodeGetVolumeStats: false
+  imagePullPolicy: IfNotPresent
+  iscsi:
+    chapPassword: ""
+    chapUser: ""
+  kubeletRootDir: /var/lib/kubelet/
+  logLevel: info
+  node:
+    affinity: {}
+    labels: {}
+    nodeSelector: {}
+    tolerations: []
+  registry: quay.io
+
+
+
+

The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass.

+

Additional information

+

At this point the CSI driver is managed like any other Operator on Kubernetes and the life-cycle management capabilities may be explored further in the official Red Hat OpenShift documentation.

+

Uninstall the HPE CSI Operator

+

When uninstalling an operator managed by OLM, a Cluster Admin must decide whether or not to remove the CustomResourceDefinitions (CRD), APIServices, and resources related to these types owned by the operator. By design, when OLM uninstalls an operator it does not remove any of the operator’s owned CRDs, APIServices, or CRs in order to prevent data loss.

+
+

Important

+

Do not modify or remove these CRDs or APIServices if you are upgrading or reinstalling the HPE CSI driver in order to prevent data loss.

+
+

The following are CRDs installed by the HPE CSI driver.

+

hpecsidrivers.storage.hpe.com
+hpenodeinfos.storage.hpe.com
+hpereplicationdeviceinfos.storage.hpe.com
+hpesnapshotgroupinfos.storage.hpe.com
+hpevolumegroupinfos.storage.hpe.com
+hpevolumeinfos.storage.hpe.com
+snapshotgroupclasses.storage.hpe.com
+snapshotgroupcontents.storage.hpe.com
+snapshotgroups.storage.hpe.com
+volumegroupclasses.storage.hpe.com
+volumegroupcontents.storage.hpe.com
+volumegroups.storage.hpe.com
+

+

The following are APIServices installed by the HPE CSI driver.

+

v1.storage.hpe.com
+v2.storage.hpe.com
+

+

Please refer to the OLM Lifecycle Manager documentation on how to safely Uninstall your operator.

+

NFS Server Provisioner Considerations

+

When deploying NFS servers on OpenShift there are currently two things to keep in mind for a successful deployment. Also, make sure to understand the Limitations and Considerations for the NFS Server Provisioner in general.

+

Non-standard hpe-nfs Namespace

+

If NFS servers are deployed in a different Namespace than the default "hpe-nfs" by using the "nfsNamespace" StorageClass parameter, the "hpe-csi-nfs-scc" SCC needs to be updated to include the Namespace ServiceAccount.

+

This example adds "my-namespace" NFS server ServiceAccount to the SCC:

+

oc patch scc hpe-csi-nfs-scc --type=json -p='[{"op": "add", "path": "/users/-", "value": "system:serviceaccount:my-namespace:hpe-csi-nfs-sa" }]'
+

+
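For context, NFS servers end up in an alternate Namespace when the StorageClass carries the "nfsNamespace" parameter. A hedged StorageClass fragment (the Namespace name is illustrative and must match the ServiceAccount added to the SCC above):

...
parameters:
  nfsResources: "true"
  nfsNamespace: my-namespace
...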

Operators Requesting NFS Persistent Volume Claims

+

Object references in OpenShift are not compatible with the NFS Server Provisioner. If a user deploys an Operator of any kind that creates an NFS server-backed PVC, the operation will fail. Instead, pre-provision the PVC manually for the Operator instance to use, as sketched below.

+
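A hedged sketch of such a manually pre-provisioned claim (name, size and StorageClass are illustrative; the StorageClass is assumed to have the NFS Server Provisioner enabled):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-operator-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-nfs-standard

Once bound, reference the claim by name in the Operator instance instead of letting the Operator create it.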

Use the ext4 filesystem for NFS servers

+

On certain versions of OpenShift the NFS clients may experience stale NFS file handles like the one below when the NFS server is being restarted.

+

Error: failed to resolve symlink "/var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount": lstat /var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount: stale NFS file handle
+

+

If this problem occurs, use the ext4 filesystem on the backing volumes. The fsType is set in the StorageClass. Example:

+

...
+parameters:
+  csi.storage.k8s.io/fstype: ext4
+...
+

+

StorageProfile for OpenShift Virtualization Source PVCs

+

If OpenShift Virtualization is being used and Live Migration is desired for virtual machine PVCs cloned from the "openshift-virtualization-os-images" Namespace, the StorageProfile needs to be updated to "ReadWriteMany".

+
+

Info

+

These steps are not necessary on recent OpenShift EUS (v4.12.11 onwards) releases as the default StorageProfile for "csi.hpe.com" has been corrected upstream.

+
+

If the default StorageClass is named "hpe-standard", issue the following command:

+

oc edit -n openshift-cnv storageprofile hpe-standard
+

+

Replace the spec: {} with the following:

+

spec:
+  claimPropertySets:
+  - accessModes:
+    - ReadWriteMany
+    volumeMode: Block
+

+

Ensure there are no errors. Recreate the OS images:

+

oc delete pvc -n openshift-virtualization-os-images --all
+

+

Inspect the PVCs and ensure they are re-created with "RWX":

+

oc get pvc -n openshift-virtualization-os-images -w
+

+
+

Hint

+

The "accessMode" transformation for block volumes from RWO PVC to RWX clone has been resolved in HPE CSI Driver v2.5.0. Regardless, using source RWX PVs will simplify the workflows for users.

+
+

Live VM migrations for Alletra Storage MP

+

With HPE CSI Operator for Kubernetes v2.4.2 and older there's an issue that prevents live migration of VMs that have PVCs attached that have been cloned from an OS image residing on Alletra Storage MP backends, including 3PAR, Primera and Alletra 9000.

+

Identify the PVC that has been cloned from an OS image. The VM name is "centos7-silver-bedbug-14" in this case.

+

oc get vm/centos7-silver-bedbug-14 -o jsonpath='{.spec.template.spec.volumes}' | jq
+

+

In this instance, the dataVolume has the same name as the VM. Grab the PV name from the PVC.

+

MY_PV_NAME=$(oc get pvc/centos7-silver-bedbug-14 -o jsonpath='{.spec.volumeName}')
+

+

Next, patch the hpevolumeinfo CRD.

+

oc patch hpevolumeinfo/${MY_PV_NAME} --type=merge --patch '{"spec": {"record": {"MultiInitiator": "true"}}}'
+

+

The VM is now ready to be migrated.

+
+

Hint

+

If there are multiple dataVolumes, each one needs to be patched.

+
+

Unsupported Version of the Operator Install

+

In the event an older version of the Operator needs to be installed, the bundle can be installed directly using the Operator SDK. Make sure a recent version of the operator-sdk binary is available and that no HPE CSI Driver is currently installed on the cluster.

+

Install a specific version prior to and including v2.4.2:

+

operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle:v2.4.2
+

+

Install a specific version v2.5.0 or later:

+

operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle-ocp:v2.5.0
+

+
+

Important

+

Once the Operator is installed, a HPECSIDriver instance needs to be created. Follow the steps using the web console or the CLI to create an instance.

+
+

When the unsupported install isn't needed any longer, run:

+

operator-sdk cleanup -n hpe-storage hpe-csi-operator
+

+

Unsupported Helm Chart Install

+

In the event Red Hat releases a new version of OpenShift between HPE CSI Driver releases or if interest arises to run the HPE CSI Driver on an uncertified version of OpenShift, it's possible to install the CSI driver using the Helm chart instead.

+

It's not recommended to install the Helm chart unless it's listed as "Field Tested" in the support matrix above.

+
+

Tip

+

Helm chart install is also currently the only method to use beta releases of the HPE CSI Driver.

+
+

Steps to install:

+
    +
  • Follow the steps in the prerequisites to apply the SCC in the Namespace (Project) you wish to install the driver.
  • +
  • Install the Helm chart with the steps provided on ArtifactHub. Pay attention to which version combination has been field tested. A condensed sketch of both steps is shown after this list.
  • +
+
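A condensed sketch of both steps (chart version and values are illustrative; check ArtifactHub for the field-tested combination):

oc new-project hpe-storage --display-name="HPE CSI Driver for Kubernetes"
oc apply -f https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage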
+

Unsupported

+

Understand that this method is not supported by Red Hat and not recommended for production workloads or clusters.

+
+ +
+
+ +
+ + + + +
+ +
+ + +


+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/partners/suse_harvester/img/support.png b/partners/suse_harvester/img/support.png new file mode 100644 index 00000000..11fe1a97 Binary files /dev/null and b/partners/suse_harvester/img/support.png differ diff --git a/partners/suse_harvester/index.html b/partners/suse_harvester/index.html new file mode 100644 index 00000000..1e0db3f4 --- /dev/null +++ b/partners/suse_harvester/index.html @@ -0,0 +1,338 @@ + + + + + + + + + + + + + + + + + + SUSE Harvester - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • SUSE Harvester
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

"Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Linux, KVM, Kubernetes, KubeVirt, and Longhorn. Designed for users looking for a flexible and affordable solution to run cloud-native and virtual machine (VM) workloads in your datacenter and at the edge, Harvester provides a single pane of glass for virtualization and cloud-native workload management."1

+
1 = quote from HarvesterHCI.io.
+


+

HPE supports the underlying host OS, SLE Micro, using the HPE CSI Driver for Kubernetes and the Rancher Kubernetes Engine 2 (RKE2), which is a CNCF-certified Kubernetes distribution. Harvester embeds KubeVirt and uses standard CSI storage constructs to manage storage resources for virtual machines.

+ +

Deployment Considerations

+

Many of the features provided by Harvester stem from the capabilities of KubeVirt. The HPE CSI Driver for Kubernetes provides "ReadWriteMany" block storage which allows seamless migration of VMs between hosts with disks attached. The NFS Server Provisioner may be used by disparate VMs that need "ReadWriteMany" to share data. An illustrative claim is sketched below.

+
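As an illustration of such a block-mode "ReadWriteMany" claim for a VM data disk, a hedged sketch (StorageClass name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vm-data-disk
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 50Gi
  storageClassName: hpe-standard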

Limitations

+

These limitations are framed around the integration of the HPE CSI Driver for Kubernetes and Harvester. Other limitations may apply.

+

Boot from Longhorn

+

Since Harvester is a hyper-converged infrastructure platform in its own right, the storage components are already embedded in the platform using Longhorn. Longhorn is designed to run from local server storage and today it's not practical to replace Longhorn with CSI capable storage from HPE. The Harvester servers may use boot from SAN and other means in terms of external storage to provide capacity to Longhorn but Longhorn would still be used to create VM images and machines.

+

Storage provided by platforms supported by the HPE CSI Driver for Kubernetes is complementary and non-boot disks may be easily provisioned and attached to VM workloads.

+
+

Info

+

The VM boot limitation is solely implemented by Harvester in front of KubeVirt. Any other KubeVirt platform would allow booting from storage resources provided by HPE CSI Driver for Kubernetes.

+
+

iSCSI Networking

+

As per best practice HPE recommends using dedicated iSCSI networks for data traffic between the Harvester nodes and the storage platform.

+

Ancillary network configuration of Harvester nodes is managed as a post-install step. Creating network configuration files for Harvester nodes is beyond the scope of this document. Follow the guides provided by Harvester.

+ +

Example iSCSI Configuration

+

In a typical setup the IP addresses are assigned by DHCP on the NIC directly without any bridges, VLANs or bonds. The updates that need to be made to /oem/90_custom.yaml on each compute node to reflect this configuration are described below.

+

Insert the block after the management interface configuration and replace the interface names ens224 and ens256 with the actual interface names on your compute nodes. List the available interfaces on the compute node prompt with ip link.

+

            ...
+            - path: /etc/sysconfig/network/ifcfg-ens224
+              permissions: 384
+              owner: 0
+              group: 0
+              content: |
+                STARTMODE='onboot'
+                BOOTPROTO='dhcp'
+                DHCLIENT_SET_DEFAULT_ROUTE='no'
+              encoding: ""
+              ownerstring: ""
+            - path: /etc/sysconfig/network/ifcfg-ens256
+              permissions: 384
+              owner: 0
+              group: 0
+              content: |
+                STARTMODE='onboot'
+                BOOTPROTO='dhcp'
+                DHCLIENT_SET_DEFAULT_ROUTE='no'
+              encoding: ""
+              ownerstring: ""
+              ...
+

+

Reboot the node and verify that IP addresses have been assigned to the NICs by running ip addr show dev <interface name> on the compute node prompt.

+

Installing HPE CSI Driver for Kubernetes

+

The HPE CSI Driver for Kubernetes is installed on Harvester by using the standard procedures for installing the CSI driver with Helm. Helm requires access to the Harvester cluster through the Kubernetes API. You can download the Harvester cluster KubeConfig file by visiting the dashboard on your cluster and clicking "support" in the lower left corner of the UI. A condensed install sketch follows the note below.

+

+
+

Note

+

It does not matter if Harvester is managed by Rancher or running standalone. If the cluster is managed by Rancher, then go to the Virtualization Management dashboard and select "Download KubeConfig" in the dotted context menu of the cluster.

+
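A condensed install sketch from a workstation, assuming the downloaded KubeConfig has been saved as harvester-kubeconfig.yaml and that the chart version matches a supported combination:

export KUBECONFIG=harvester-kubeconfig.yaml
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
kubectl create ns hpe-storage
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage

A backend Secret and a StorageClass still need to be created afterwards, as with any other deployment method.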
+ +
+
+ +
+ + + + +
+ +
+ + +


+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/partners/suse_rancher/img/cluster_explorer.png b/partners/suse_rancher/img/cluster_explorer.png new file mode 100644 index 00000000..cc1aca2c Binary files /dev/null and b/partners/suse_rancher/img/cluster_explorer.png differ diff --git a/partners/suse_rancher/img/cluster_manager.png b/partners/suse_rancher/img/cluster_manager.png new file mode 100644 index 00000000..508c2862 Binary files /dev/null and b/partners/suse_rancher/img/cluster_manager.png differ diff --git a/partners/suse_rancher/img/new_cluster_manager.png b/partners/suse_rancher/img/new_cluster_manager.png new file mode 100644 index 00000000..13428102 Binary files /dev/null and b/partners/suse_rancher/img/new_cluster_manager.png differ diff --git a/partners/suse_rancher/index.html b/partners/suse_rancher/index.html new file mode 100644 index 00000000..1b1ea99e --- /dev/null +++ b/partners/suse_rancher/index.html @@ -0,0 +1,334 @@ + + + + + + + + + + + + + + + + + + SUSE Rancher - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • SUSE Rancher
  • +
  • +
  • +
+
+
+
+
+ +

Overview

+

SUSE Rancher provides a platform to deploy Kubernetes-as-a-service everywhere. HPE partners with SUSE Rancher to provide effortless management of the CSI driver on managed Kubernetes clusters. This allows our joint customers and channel partners to enable hybrid cloud stateful workloads on Kubernetes.

+ +

Deployment considerations

+

Rancher is capable of managing Kubernetes across a broad spectrum of managed and BYO clusters. It's important to understand that the HPE CSI Driver for Kubernetes does not support the same number of combinations that Rancher does. Consult the support matrix on the CSI driver overview page for the supported combinations of the HPE CSI Driver, Kubernetes and node operating systems.

+

Supported versions

+

Rancher uses Helm to deploy and manage partner software. The concept of a Helm repository in Rancher is organized under "Apps" in the Rancher UI. The HPE CSI Driver for Kubernetes is a partner solution present in the official Partner repository.

+ + + + + + + + + + + + + + + + + + + + +
Rancher release | Install methods           | Recommended CSI driver
2.7             | Cluster Manager App Chart | latest
2.8             | Cluster Manager App Chart | latest
+
+

Tip

+

Learn more about Helm Charts and Apps in the Rancher documentation

+
+

HPE CSI Driver for Kubernetes

+

The HPE CSI Driver is part of the official Partner repository in Rancher. The CSI driver is deployed on managed Kubernetes clusters like any ordinary "App" in Rancher.

+
+

Note

+

In Rancher 2.5 an "Apps & Marketplace" component was introduced in the new "Cluster Explorer" interface. This is the new interface moving forward. Upcoming releases of the HPE CSI Driver for Kubernetes will only support installation via "Apps & Marketplace".

+
+

Rancher Cluster Manager (2.6 and newer)

+

Navigate to "Apps" and select "Charts", search for "HPE".

+

+Rancher Cluster Explorer

+

Post install steps

+

For Rancher workloads to make use of persistent storage from HPE, a supported backend needs to be configured with a Secret along with a StorageClass. These procedures are generic regardless of Kubernetes distribution and install method being used.

+ +
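For reference, a hedged sketch of such a backend Secret (the CSP service name, array address and credentials are illustrative and depend on the backend in use):

apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: nimble-csp-svc
  servicePort: "8080"
  backend: 192.168.1.10
  username: admin
  password: admin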

Ancillary HPE Storage Apps

+

Introduced in Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 is the ability to deploy the HPE Storage Array Exporter for Prometheus and HPE CSI Info Metrics Provider for Prometheus directly from the same Rancher Apps interface. These Helm charts have been enhanced to include support for Rancher Monitoring.

+
+

Tip

+

Make sure to tick "Enable ServiceMonitor" in the "ServiceMonitor settings" when configuring the ancillary Prometheus apps to work with Rancher Monitoring.

+
+ +
+
+ +
+ + + + +
+ +
+ + +


+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/partners/tkgi/img/vm-pks.png b/partners/tkgi/img/vm-pks.png new file mode 100644 index 00000000..9d1a0297 Binary files /dev/null and b/partners/tkgi/img/vm-pks.png differ diff --git a/partners/tkgi/index.html b/partners/tkgi/index.html new file mode 100644 index 00000000..3e70f9ef --- /dev/null +++ b/partners/tkgi/index.html @@ -0,0 +1,287 @@ + + + + + + + + + + + + + + + + + + Tanzu Kubernetes Grid Integrated - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • Tanzu Kubernetes Grid Integrated
  • +
  • +
  • +
+
+
+
+
+ +

+

Overview

+

VMware Tanzu Kubernetes Grid Integrated Engine (TKGI) is supported by the HPE CSI Driver for Kubernetes.

+

Partnership

+

VMware and HPE have a long-standing partnership across each of the product portfolios. Allowing TKGI users to access persistent storage with the HPE CSI Driver accelerates stateful workload performance, scalability and efficiency.

+

Learn more about the partnership and enablement on the VMware Marketplace.

+

Prerequisites

+

It's important to verify that the host OS and Kubernetes versions are supported by the HPE CSI Driver.

+
    +
  • Only iSCSI is supported (learn why)
  • +
  • Ensure "Enable Privileged Containers" is ticked in the TKGI cluster deployment plan
  • +
  • Verify versions in the Compatibility and Support table
  • +
+

Installation

+

It's highly recommended to use the Helm chart to install the CSI driver, as it's required to apply a different "kubeletRootDir" than the default for the driver to start and work properly.

+

Example workflow.

+

helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
+kubectl create ns hpe-storage
+helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage \
+ --set kubeletRootDir=/var/vcap/data/kubelet
+

+
+

Seealso

+

Learn more about the supported parameters of the Helm chart on ArtifactHub.

+
+

Post Install Steps

+

For TKGI workloads to make use of persistent storage from HPE, a supported backend needs to be configured along with a StorageClass. These procedures are generic regardless of Kubernetes distribution being used.

+ + +
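As an illustration, a hedged StorageClass sketch assuming a backend Secret named "hpe-backend" exists in the "hpe-storage" Namespace (filesystem and other parameters vary by backend and requirements):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
allowVolumeExpansion: true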
+
+ +
+ + + + +
+ +
+ + +


+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/partners/vmware/img/container_volumes.png b/partners/vmware/img/container_volumes.png new file mode 100644 index 00000000..2e3d4142 Binary files /dev/null and b/partners/vmware/img/container_volumes.png differ diff --git a/partners/vmware/img/container_volumes2.png b/partners/vmware/img/container_volumes2.png new file mode 100644 index 00000000..f4933cd0 Binary files /dev/null and b/partners/vmware/img/container_volumes2.png differ diff --git a/partners/vmware/img/cv_orig.png b/partners/vmware/img/cv_orig.png new file mode 100644 index 00000000..9321c2ee Binary files /dev/null and b/partners/vmware/img/cv_orig.png differ diff --git a/partners/vmware/img/profile1.png b/partners/vmware/img/profile1.png new file mode 100644 index 00000000..7eebfb05 Binary files /dev/null and b/partners/vmware/img/profile1.png differ diff --git a/partners/vmware/img/profile2.png b/partners/vmware/img/profile2.png new file mode 100644 index 00000000..d97622db Binary files /dev/null and b/partners/vmware/img/profile2.png differ diff --git a/partners/vmware/img/profile3.png b/partners/vmware/img/profile3.png new file mode 100644 index 00000000..dd72de37 Binary files /dev/null and b/partners/vmware/img/profile3.png differ diff --git a/partners/vmware/img/profile4.png b/partners/vmware/img/profile4.png new file mode 100644 index 00000000..6eacfd0b Binary files /dev/null and b/partners/vmware/img/profile4.png differ diff --git a/partners/vmware/img/profile5.png b/partners/vmware/img/profile5.png new file mode 100644 index 00000000..77d5d0a7 Binary files /dev/null and b/partners/vmware/img/profile5.png differ diff --git a/partners/vmware/img/profile5_example.png b/partners/vmware/img/profile5_example.png new file mode 100644 index 00000000..cd08a92b Binary files /dev/null and b/partners/vmware/img/profile5_example.png differ diff --git a/partners/vmware/img/profile6.png b/partners/vmware/img/profile6.png new file mode 100644 index 00000000..d10debf7 Binary files /dev/null and b/partners/vmware/img/profile6.png differ diff --git a/partners/vmware/img/profile7.png b/partners/vmware/img/profile7.png new file mode 100644 index 00000000..d083c5cf Binary files /dev/null and b/partners/vmware/img/profile7.png differ diff --git a/partners/vmware/index.html b/partners/vmware/index.html new file mode 100644 index 00000000..54d5dc21 --- /dev/null +++ b/partners/vmware/index.html @@ -0,0 +1,447 @@ + + + + + + + + + + + + + + + + + + VMware - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • PARTNER ECOSYSTEMS »
  • +
  • VMware
  • +
  • +
  • +
+
+
+
+
+ +

VMware vSphere Container Storage Plug-in

+

VMware vSphere Container Storage Plug-in also known as the upstream vSphere CSI Driver exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. The term Cloud Native Storage (CNS) is the vCenter abstraction point and is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the CNS UI within vCenter.

+

CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE GreenLake for Block Storage, Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use.

+ +

Feature Comparison

+

Volume parameters available to the vSphere Container Storage Plug-in will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide (includes HPE Alletra Storage MP, Alletra 9000 and 3PAR) or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide (includes HPE Alletra 5000/6000 and dHCI) for list of available features.

+

For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature                                                   | HPE CSI Driver | vSphere Container Storage Plug-in
vCenter Cloud Native Storage (CNS) UI Support             | No             | GA
Dynamic Block PV Provisioning (ReadWriteOnce access mode) | GA             | GA (vVOL)
Dynamic File Provisioning (ReadWriteMany access mode)     | GA             | GA (vSan Only)
Volume Snapshots (CSI)                                    | GA             | GA (vSphere 7.0u3)
Volume Cloning from VolumeSnapshot (CSI)                  | GA             | GA
Volume Cloning from PVC (CSI)                             | GA             | GA
Volume Expansion (CSI)                                    | GA             | GA (vSphere 7.0u2)
RWO Raw Block Volume (CSI)                                | GA             | GA
RWX/ROX Raw Block Volume (CSI)                            | GA             | No
Generic Ephemeral Volumes (CSI)                           | GA             | GA
Inline Ephemeral Volumes (CSI)                            | GA             | No
Topology (CSI)                                            | No             | GA
Volume Health (CSI)                                       | No             | GA (vSan only)
CSI Controller multiple replica support                   | No             | GA
Windows support                                           | No             | GA
Volume Encryption                                         | GA             | GA (via VMcrypt)
Volume Mutator¹                                           | GA             | No
Volume Groups¹                                            | GA             | No
Snapshot Groups¹                                          | GA             | No
Peer Persistence Replication³                             | GA             | No⁴
+

+ 1 = Feature comparison based upon HPE CSI Driver for Kubernetes 2.4.0 and the vSphere Container Storage Plug-in 3.1.2
+ 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers.
+ 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR storage systems.
+ 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up to the vSphere Container Storage Plug-in. Peer Persistence works with the vSphere Container Storage Plug-in when using VMFS datastores. +

+

Please refer to Compatibility Matrices for vSphere Container Storage Plug-in for the most up-to-date information.

+

Important Considerations

+

The HPE CSI Driver for Kubernetes is only supported on specific versions of worker node operating systems and Kubernetes versions, these requirements applies to any worker VM running on vSphere.

+

Some Kubernetes distributions, when running on vSphere, may only support the vSphere Container Storage Plug-in; one such example is VMware Tanzu. Ensure the Kubernetes distribution being used supports 3rd party CSI drivers (such as the HPE CSI Driver) and fulfills the requirements in Features and Capabilities before deciding which CSI driver to use.

+

HPE does not test or qualify the vSphere Container Storage Plug-in for any particular storage backend besides point solutions1. As long as the storage platform is supported by vSphere, VMware will support the vSphere Container Storage Plug-in.

+
+

VMware vSphere with Tanzu and HPE Alletra dHCI1

+

HPE provides a turnkey solution for Kubernetes using VMware Tanzu and HPE Alletra dHCI. Learn more.

+
+

Deployment

+

When considering the use of block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help determine which CSI driver can best be deployed within your Kubernetes clusters.

+
+

Important

+

Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not available, HPE recommends the use of the VMware vSphere Container Storage Plug-in to deliver block-based persistent storage from HPE GreenLake for Block Storage, Alletra, Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol.

+

The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV).

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
Protocol | HPE CSI Driver for Kubernetes | vSphere Container Storage Plug-in
FC       | Not supported                 | Supported*
NVMe-oF  | Not supported                 | Supported*
iSCSI    | Supported                     | Supported*
+

* = Limited to the SPBM implementation of the underlying storage array.

+

Learn how to deploy the vSphere Container Storage Plug-in:

+ +

1 = The HPE authored deployment guide for vSphere Container Storage Plug-in 2.4 has been preserved here.

+
+

Tip

+

Most non-vanilla Kubernetes distributions when deployed on vSphere manage and support the vSphere Container Storage Plug-in directly. That includes Red Hat OpenShift, SUSE Rancher, Charmed Kubernetes (Canonical), Google Anthos and Amazon EKS Anywhere.

+
+

Support

+

VMware provides enterprise grade support for the vSphere Container Storage Plug-in. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team.

+

For support information on the HPE CSI Driver for Kubernetes, visit Support. For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center.

+ +
+
+ +
+ + + + +
+ +
+ + +


+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + + + + diff --git a/partners/vmware/legacy.html b/partners/vmware/legacy.html new file mode 100644 index 00000000..a5a1c44c --- /dev/null +++ b/partners/vmware/legacy.html @@ -0,0 +1,758 @@ + + + + + + + + + + + + + + + + + + Deprecated - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Deprecated
  • +
  • +
  • +
+
+
+
+
+ +

Deprecated

+

This deployment guide is deprecated. Learn more here.

+

Cloud Native Storage for vSphere

+

Cloud Native Storage (CNS) for vSphere exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. CNS is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the new CNS UI within vCenter.

+

CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use.

+
+

Tip

+

Check out the tutorial available on YouTube in the Video Gallery on how to configure and use HPE storage with Cloud Native Storage for vSphere.

+

Watch the video in its entirety or skip to configuring Tanzu with HPE storage or configuring the vSphere CSI Driver with HPE storage.

+
+ +

Feature Comparison

+

Volume parameters available to the vSphere CSI Driver will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide for list of available features.

For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature                                                   | HPE CSI Driver | vSphere CSI Driver
vCenter Cloud Native Storage (CNS) UI Support             | No             | GA
Dynamic Block PV Provisioning (ReadWriteOnce access mode) | GA             | GA (vVOL)
Dynamic File Provisioning (ReadWriteMany access mode)     | GA             | GA (vSan Only)
Volume Snapshots (CSI)                                    | GA             | Alpha (2.4.0)
Volume Cloning from VolumeSnapshot (CSI)                  | GA             | No
Volume Cloning from PVC (CSI)                             | GA             | No
Volume Expansion (CSI)                                    | GA             | GA (offline only)
Raw Block Volume (CSI)                                    | GA             | Alpha
Generic Ephemeral Volumes (CSI)                           | GA             | GA
Inline Ephemeral Volumes (CSI)                            | GA             | No
Topology (CSI)                                            | No             | GA
Volume Health (CSI)                                       | No             | GA (vSan only)
CSI Controller multiple replica support                   | No             | GA
Volume Encryption                                         | GA             | GA (via VMcrypt)
Volume Mutator¹                                           | GA             | No
Volume Groups¹                                            | GA             | No
Snapshot Groups¹                                          | GA             | No
Peer Persistence Replication³                             | GA             | No⁴
+

+ 1 = Feature comparison based upon HPE CSI Driver for Kubernetes v2.1.1 and the vSphere CSI Driver v2.4.1
+ 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers.
+ 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra 9000 and Primera storage systems.
+ 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up to the vSphere CSI Driver. Peer Persistence works with the vSphere CSI Driver when using VMFS datastores. +

+

Please refer to vSphere CSI Driver - Supported Features Matrix for the most up-to-date information.

+

Deployment

+

When considering the use of block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help determine which CSI driver can best be deployed within your Kubernetes clusters.

+
+

Important

+

Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not available, HPE recommends the use of the VMware vSphere CSI driver to deliver block-based persistent storage from HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol.

+The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV).

+
+ + + + + + + + + + + + + + + + + + + + +
Protocol | HPE CSI Driver for Kubernetes | vSphere CSI driver
FC       | Not supported                 | Supported*
iSCSI    | Supported                     | Supported*
+

* = Limited to the SPBM implementation of the underlying storage array

+

Prerequisites

+

This guide will cover the configuration and deployment of the vSphere CSI driver. Cloud Native Storage for vSphere uses the VASA provider and Storage Policy Based Management (SPBM) to create First Class Disks on supported arrays.

+

CNS supports VMware vSphere 6.7 U3 and higher.

+
Configuring the VASA provider
+

Refer to the following guides to configure the VASA provider and create a vVol Datastore.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Storage Array                                   | Guide
HPE Alletra 9000                                | HPE Alletra 9000: VMware ESXi Implementation Guide
HPE Primera                                     | VMware vVols with HPE Primera Storage
HPE Nimble Storage                              | Working with VMware Virtual Volumes
HPE Nimble Storage dHCI & HPE Alletra 5000/6000 | HPE Nimble Storage dHCI and VMware vSphere New Servers Deployment Guide
HPE 3PAR                                        | Implementing VMware Virtual Volumes on HPE 3PAR StoreServ
+
Configuring a VM Storage Policy
+

Once the vVol Datastore is created, create a VM Storage Policy. From the vSphere Web Client, click Menu and select Policies and Profiles.

+

Select Policies and Profiles

+

Click on VM Storage Policies, and then click Create.

+

Create VM Storage Policy

+

Next provide a name for the policy. Click NEXT.

+

Specify name of Storage Policy

+

Under Datastore specific rules, select either:

+
    +
  • Enable rules for "NimbleStorage" storage
  • +
  • Enable rules for "HPE Primera" storage
  • +
+

Click NEXT.

+

Enable rules

+

Next click ADD RULE. Choose from the various options available to your array.

+

Add Rule

+

Below is an example of a VM Storage Policy for Primera. This may vary depending on your requirements and options available within your array. Once complete, click NEXT.

+

Add Rule

+

Under Storage compatibility, verify the correct vVol datastore is shown as compatible to the options chosen in the previous screen. Click NEXT.

+

Compatible Storage

+

Verify everything looks correct and click FINISH. Repeat this process for any additional Storage Policies you may need.

+

Click Finish

+

Now that we have configured a Storage Policy, we can proceed with the deployment of the vSphere CSI driver.

+

Install the vSphere Cloud Provider Interface (CPI)

+

This is adapted from the following tutorial, please read over to understand all of the vSphere, firewall and guest OS requirements.

+ +
+

Note

+

The following is a simplified single-site configuration to demonstrate how to deploy the vSphere CPI and CSI drivers. Make sure to adapt the configuration to match your environment and needs.

+
+
Check for ProviderID
+

Check if ProviderID is already configured on your cluster.

+

kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'
+

+

If this command returns empty, then proceed with configuring the vSphere Cloud Provider.

+

If the ProviderID is set, then you can proceed directly to installing the vSphere CSI Driver.

+

$ kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'
+vsphere://4238c1a1-e72f-74bf-db48-0d9f4da3e9c9
+vsphere://4238ede5-50e1-29b6-1337-be8746a5016c
+vsphere://4238c6dc-3806-ce36-fd14-5eefe830b227
+

+
Create a CPI ConfigMap
+

Create a vsphere.conf file.

+
+

Note

+

The vsphere.conf is a hardcoded filename used by the vSphere Cloud Provider. Do not change it, otherwise the Cloud Provider will not deploy correctly.

+
+

Set the vCenter server FQDN or IP and vSphere datacenter object name to match your environment.

+

Copy and paste the following.

+

# Global properties in this section will be used for all specified vCenters unless overridden in vCenter section.
+global:
+  port: 443
+  # Set insecureFlag to true if the vCenter uses a self-signed cert
+  insecureFlag: true
+  # Where to find the Secret used for authentication to vCenter
+  secretName: cpi-global-secret
+  secretNamespace: kube-system
+
+# vcenter section
+vcenter:
+  tenant-k8s:
+    server: <vCenter FQDN or IP>
+    datacenters:
+      - <vCenter Datacenter name>
+

+

Create the ConfigMap from the vsphere.conf file.

+

kubectl create configmap cloud-config --from-file=vsphere.conf -n kube-system
+

+
Create a CPI Secret
+

The below YAML declarations are meant to be created with kubectl create. Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this:

+

kubectl create -f-
+< paste the YAML >
+^D (CTRL + D)
+

+

Next create the CPI Secret.

+

apiVersion: v1
+kind: Secret
+metadata:
+  name: cpi-global-secret
+  namespace: kube-system
+stringData:
+  <vCenter FQDN or IP>.username: "Administrator@vsphere.local"
+  <vCenter FQDN or IP>.password: "VMware1!"
+

+
+

Note

+

The username and password within the Secret are case-sensitive.

+
+

Inspect the Secret to verify it was created successfully.

+

kubectl describe secret cpi-global-secret -n kube-system
+

+

The output is similar to this:

+

Name:         cpi-global-secret
+Namespace:    kube-system
+Labels:       <none>
+Annotations:  <none>
+
+Type:  Opaque
+
+Data
+====
+vcenter.example.com.password:  8 bytes
+vcenter.example.com.username:  27 bytes
+

+
Check that all nodes are tainted
+

Before installing vSphere Cloud Controller Manager, make sure all nodes are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule. When the kubelet is started with “external” cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud provider initializes this node, the kubelet removes this taint.

+

To find your node names, run the following command.

+

kubectl get nodes
+
+NAME    STATUS   ROLES                  AGE   VERSION
+cp1     Ready    control-plane,master   46m   v1.20.1
+node1   Ready    <none>                 44m   v1.20.1
+node2   Ready    <none>                 44m   v1.20.1
+

+

To create the taint, run the following command for each node in your cluster.

+

kubectl taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
+

+

Verify the taint has been applied to each node.

+

kubectl describe nodes | egrep "Taints:|Name:"
+

+

The output is similar to this:

+

Name:               cp1
+Taints:             node-role.kubernetes.io/master:NoSchedule
+Name:               node1
+Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
+Name:               node2
+Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
+

+
Deploy the CPI manifests
+

There are 3 manifests that must be deployed to install the vSphere Cloud Provider Interface (CPI). The following example applies the RBAC roles and the RBAC bindings to your Kubernetes cluster. It also deploys the Cloud Controller Manager in a DaemonSet.

+

kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
+

+
Verify that the CPI has been successfully deployed
+

Verify vsphere-cloud-controller-manager is running.

+

kubectl rollout status ds/vsphere-cloud-controller-manager -n kube-system
+daemon set "vsphere-cloud-controller-manager" successfully rolled out
+

+
+

Note

+

If you happen to make an error with the vsphere.conf, simply delete the CPI components and the ConfigMap, make any necessary edits to the vsphere.conf file, and reapply the steps above.

+
+

Now that the CPI is installed, we can proceed with deploying the vSphere CSI driver.

+

Install the vSphere Container Storage Interface (CSI) driver

+

The following has been adapted from the vSphere CSI driver installation guide. Refer to the official documentation for additional information on how to deploy the vSphere CSI driver.

+ +
Create a configuration file with vSphere credentials
+

Since we are connecting to block storage provided from an HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR array, we will create a configuration file for block volumes.

+

Create a csi-vsphere.conf file.

+

Copy and paste the following:

+

[Global]
+cluster-id = "csi-vsphere-cluster"
+
+[VirtualCenter "<IP or FQDN>"]
+insecure-flag = "true"
+user = "Administrator@vsphere.local"
+password = "VMware1!"
+port = "443"
+datacenters = "<vCenter datacenter>"
+

+
Create a Kubernetes Secret for vSphere credentials
+

Create a Kubernetes Secret that will contain the configuration details to connect to your vSphere environment.

+

kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf -n kube-system
+

+

Verify that the Secret was created successfully.

+

kubectl get secret vsphere-config-secret -n kube-system
+NAME                    TYPE     DATA   AGE
+vsphere-config-secret   Opaque   1      43s
+

+

For security purposes, it is advised to remove the csi-vsphere.conf file.
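For example, on the workstation where the file was created:

rm csi-vsphere.conf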

+
Create RBAC, vSphere CSI Controller Deployment and vSphere CSI node DaemonSet
+

Check the official vSphere CSI Driver GitHub repo for the latest version. The manifests are published per vSphere release, so apply only the set that matches your vSphere version.

vSphere 6.7 U3:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-controller-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-node-ds.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/rbac/vsphere-csi-controller-rbac.yaml

vSphere 7.0:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-controller-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-node-ds.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/rbac/vsphere-csi-controller-rbac.yaml

vSphere 7.0 U1:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-controller-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-node-ds.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/rbac/vsphere-csi-controller-rbac.yaml
+
Verify the vSphere CSI driver deployment
+

Verify that the vSphere CSI driver has been successfully deployed using kubectl rollout status.

+

kubectl rollout status deployment/vsphere-csi-controller -n kube-system
+deployment "vsphere-csi-controller" successfully rolled out
+
+kubectl rollout status ds/vsphere-csi-node -n kube-system
+daemon set "vsphere-csi-node" successfully rolled out
+

+

Verify that the CSIDriver object for the vSphere CSI driver has been registered with Kubernetes.

+

kubectl describe csidriver/csi.vsphere.vmware.com
+Name:         csi.vsphere.vmware.com
+Namespace:
+Labels:       <none>
+Annotations:  <none>
+API Version:  storage.k8s.io/v1
+Kind:         CSIDriver
+Metadata:
+  Creation Timestamp:  2020-11-21T06:27:23Z
+  Managed Fields:
+    API Version:  storage.k8s.io/v1beta1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        f:attachRequired:
+        f:podInfoOnMount:
+        f:volumeLifecycleModes:
+    Manager:         kubectl-client-side-apply
+    Operation:       Update
+    Time:            2020-11-21T06:27:23Z
+  Resource Version:  217131
+  Self Link:         /apis/storage.k8s.io/v1/csidrivers/csi.vsphere.vmware.com
+  UID:               bcda2b5c-3c38-4256-9b91-5ed248395113
+Spec:
+  Attach Required:    true
+  Pod Info On Mount:  false
+  Volume Lifecycle Modes:
+    Persistent
+Events:  <none>
+

+

Also verify that a CSINode object exists for each node and lists the vSphere CSI driver.

+

kubectl get csinodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[].name}{"\n"}{end}'
+cp1     csi.vsphere.vmware.com
+node1   csi.vsphere.vmware.com
+node2   csi.vsphere.vmware.com
+

+

If there are no errors, the vSphere CSI driver has been successfully deployed.

+
Create a StorageClass
+

With the vSphere CSI driver deployed, let's create a StorageClass that can be used by the CSI driver.

+
+

Important

+

The following steps use the example VM Storage Policy created at the beginning of this guide. If you do not have a Storage Policy available, refer to Configuring a VM Storage Policy before proceeding.

+
+

kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: primera-default-sc
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi.vsphere.vmware.com
+parameters:
+  storagepolicyname: "primera-default-profile"
+
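To confirm that dynamic provisioning works before deploying a full application, you can optionally create a small test PersistentVolumeClaim against the new StorageClass. This is a sketch; the claim name and size are arbitrary.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: primera-default-sc

The claim should reach the Bound state shortly after it is applied. Remove it with kubectl delete pvc test-pvc when done.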

+

Validate

+

With the vSphere CSI driver deployed and a StorageClass available, let's run through some tests to verify that everything is working correctly.

+

In this example, we will deploy a stateful MongoDB application with three replicas. The persistent volumes provisioned by the vSphere CSI driver will be created using the VM Storage Policy and placed on a compatible vVol datastore.

+
Create and Deploy a MongoDB Helm chart
+

This is an example MongoDB deployment using the Bitnami chart with a StatefulSet. The default volume size is 8Gi; to change it, use --set persistence.size=50Gi.
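The chart is pulled from the Bitnami repository. If the repository has not been added to your Helm client yet, add it first (assuming your workstation has internet access):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update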

+

helm install mongodb \
+    --set architecture=replicaset \
+    --set replicaSetName=mongod \
+    --set replicaCount=3 \
+    --set auth.rootPassword=secretpassword \
+    --set auth.username=my-user \
+    --set auth.password=my-password \
+    --set auth.database=my-database \
+    bitnami/mongodb
+

+

Verify that the MongoDB application has been deployed. Wait for pods to start running and PVCs to be created for each replica.

+

kubectl rollout status sts/mongodb
+

+

Inspect the Pods and PersistentVolumeClaims.

+

kubectl get pods,pvc
+NAME       READY   STATUS    RESTARTS   AGE
+mongod-0   1/1     Running   0          90s
+mongod-1   1/1     Running   0          71s
+mongod-2   1/1     Running   0          44s
+
+NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
+datadir-mongodb-0   Bound    pvc-fd3994fb-a5fb-460b-ab17-608a71cdc337   50Gi       RWO            primera-default-sc   13m
+datadir-mongodb-1   Bound    pvc-a3755dbe-210d-4c7b-8ac1-bb0607a2c537   50Gi       RWO            primera-default-sc   13m
+datadir-mongodb-2   Bound    pvc-22bab0f4-8240-48c1-91b1-3495d038533e   50Gi       RWO            primera-default-sc   13m
+

+

To interact with the Mongo replica set, you can connect to the StatefulSet.

+

kubectl exec -it sts/mongod -- bash
+
+root@mongod-0:/# df -h /bitnami/mongodb
+Filesystem      Size  Used Avail Use% Mounted on
+/dev/sdb         49G  374M   47G   1% /bitnami/mongodb
+

+

We can see that the vSphere CSI driver has successfully provisioned and mounted the persistent volume to /bitnami/mongodb.
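As an optional persistence check, delete one of the MongoDB Pods and watch the StatefulSet recreate it; the replacement Pod re-attaches the same PersistentVolumeClaim and keeps its data.

kubectl delete pod mongod-1
kubectl get pods -w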

+
Verify Cloud Native Storage in vSphere
+

Verify that the volumes are now visible within the Cloud Native Storage interface by logging into the vSphere Web Client.

+

Click on Datacenter, then the Monitor tab. Expand Cloud Native Storage and highlight Container Volumes.

+

From here, we can see the persistent volumes that were created as part of our MongoDB deployment. These should match the kubectl get pvc output from earlier. You can also monitor their storage policy compliance status.

+

Screenshot: Container Volumes view in the vSphere Web Client.
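To cross-reference the volumes listed in vSphere with the Kubernetes objects, you can read the volume handle (which the vSphere CSI driver maps to the backing First Class Disk) from each PersistentVolume, for example:

kubectl get pv -o custom-columns=NAME:.metadata.name,CLAIM:.spec.claimRef.name,HANDLE:.spec.csi.volumeHandle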

+

This concludes the validation and confirms that all components of vSphere CNS (the vSphere CPI and the vSphere CSI driver) are deployed and working correctly.

+

Support

+

VMware provides enterprise-grade support for the vSphere CSI driver. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team.

+

For support information on the HPE CSI Driver for Kubernetes, visit Support. For support with other HPE-related technologies, visit the Hewlett Packard Enterprise Support Center.

+ +
+
+ +
+ + +
+ +
+ + +

Copyright 2020-2024 Hewlett Packard Enterprise Development LP
Give feedback on this page.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + diff --git a/search/lunr.js b/search/lunr.js new file mode 100644 index 00000000..6aa370fb --- /dev/null +++ b/search/lunr.js @@ -0,0 +1,3475 @@ +/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */ + +;(function(){ + +/** + * A convenience function for configuring and constructing + * a new lunr Index. + * + * A lunr.Builder instance is created and the pipeline setup + * with a trimmer, stop word filter and stemmer. + * + * This builder object is yielded to the configuration function + * that is passed as a parameter, allowing the list of fields + * and other builder parameters to be customised. + * + * All documents _must_ be added within the passed config function. + * + * @example + * var idx = lunr(function () { + * this.field('title') + * this.field('body') + * this.ref('id') + * + * documents.forEach(function (doc) { + * this.add(doc) + * }, this) + * }) + * + * @see {@link lunr.Builder} + * @see {@link lunr.Pipeline} + * @see {@link lunr.trimmer} + * @see {@link lunr.stopWordFilter} + * @see {@link lunr.stemmer} + * @namespace {function} lunr + */ +var lunr = function (config) { + var builder = new lunr.Builder + + builder.pipeline.add( + lunr.trimmer, + lunr.stopWordFilter, + lunr.stemmer + ) + + builder.searchPipeline.add( + lunr.stemmer + ) + + config.call(builder, builder) + return builder.build() +} + +lunr.version = "2.3.9" +/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A namespace containing utils for the rest of the lunr library + * @namespace lunr.utils + */ +lunr.utils = {} + +/** + * Print a warning message to the console. + * + * @param {String} message The message to be printed. + * @memberOf lunr.utils + * @function + */ +lunr.utils.warn = (function (global) { + /* eslint-disable no-console */ + return function (message) { + if (global.console && console.warn) { + console.warn(message) + } + } + /* eslint-enable no-console */ +})(this) + +/** + * Convert an object to a string. + * + * In the case of `null` and `undefined` the function returns + * the empty string, in all other cases the result of calling + * `toString` on the passed object is returned. + * + * @param {Any} obj The object to convert to a string. + * @return {String} string representation of the passed object. + * @memberOf lunr.utils + */ +lunr.utils.asString = function (obj) { + if (obj === void 0 || obj === null) { + return "" + } else { + return obj.toString() + } +} + +/** + * Clones an object. + * + * Will create a copy of an existing object such that any mutations + * on the copy cannot affect the original. + * + * Only shallow objects are supported, passing a nested object to this + * function will cause a TypeError. + * + * Objects with primitives, and arrays of primitives are supported. + * + * @param {Object} obj The object to clone. + * @return {Object} a clone of the passed object. + * @throws {TypeError} when a nested object is passed. 
+ * @memberOf Utils + */ +lunr.utils.clone = function (obj) { + if (obj === null || obj === undefined) { + return obj + } + + var clone = Object.create(null), + keys = Object.keys(obj) + + for (var i = 0; i < keys.length; i++) { + var key = keys[i], + val = obj[key] + + if (Array.isArray(val)) { + clone[key] = val.slice() + continue + } + + if (typeof val === 'string' || + typeof val === 'number' || + typeof val === 'boolean') { + clone[key] = val + continue + } + + throw new TypeError("clone is not deep and does not support nested objects") + } + + return clone +} +lunr.FieldRef = function (docRef, fieldName, stringValue) { + this.docRef = docRef + this.fieldName = fieldName + this._stringValue = stringValue +} + +lunr.FieldRef.joiner = "/" + +lunr.FieldRef.fromString = function (s) { + var n = s.indexOf(lunr.FieldRef.joiner) + + if (n === -1) { + throw "malformed field ref string" + } + + var fieldRef = s.slice(0, n), + docRef = s.slice(n + 1) + + return new lunr.FieldRef (docRef, fieldRef, s) +} + +lunr.FieldRef.prototype.toString = function () { + if (this._stringValue == undefined) { + this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef + } + + return this._stringValue +} +/*! + * lunr.Set + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A lunr set. + * + * @constructor + */ +lunr.Set = function (elements) { + this.elements = Object.create(null) + + if (elements) { + this.length = elements.length + + for (var i = 0; i < this.length; i++) { + this.elements[elements[i]] = true + } + } else { + this.length = 0 + } +} + +/** + * A complete set that contains all elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.complete = { + intersect: function (other) { + return other + }, + + union: function () { + return this + }, + + contains: function () { + return true + } +} + +/** + * An empty set that contains no elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.empty = { + intersect: function () { + return this + }, + + union: function (other) { + return other + }, + + contains: function () { + return false + } +} + +/** + * Returns true if this set contains the specified object. + * + * @param {object} object - Object whose presence in this set is to be tested. + * @returns {boolean} - True if this set contains the specified object. + */ +lunr.Set.prototype.contains = function (object) { + return !!this.elements[object] +} + +/** + * Returns a new set containing only the elements that are present in both + * this set and the specified set. + * + * @param {lunr.Set} other - set to intersect with this set. + * @returns {lunr.Set} a new set that is the intersection of this and the specified set. + */ + +lunr.Set.prototype.intersect = function (other) { + var a, b, elements, intersection = [] + + if (other === lunr.Set.complete) { + return this + } + + if (other === lunr.Set.empty) { + return other + } + + if (this.length < other.length) { + a = this + b = other + } else { + a = other + b = this + } + + elements = Object.keys(a.elements) + + for (var i = 0; i < elements.length; i++) { + var element = elements[i] + if (element in b.elements) { + intersection.push(element) + } + } + + return new lunr.Set (intersection) +} + +/** + * Returns a new set combining the elements of this and the specified set. + * + * @param {lunr.Set} other - set to union with this set. + * @return {lunr.Set} a new set that is the union of this and the specified set. 
+ */ + +lunr.Set.prototype.union = function (other) { + if (other === lunr.Set.complete) { + return lunr.Set.complete + } + + if (other === lunr.Set.empty) { + return this + } + + return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements))) +} +/** + * A function to calculate the inverse document frequency for + * a posting. This is shared between the builder and the index + * + * @private + * @param {object} posting - The posting for a given term + * @param {number} documentCount - The total number of documents. + */ +lunr.idf = function (posting, documentCount) { + var documentsWithTerm = 0 + + for (var fieldName in posting) { + if (fieldName == '_index') continue // Ignore the term index, its not a field + documentsWithTerm += Object.keys(posting[fieldName]).length + } + + var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5) + + return Math.log(1 + Math.abs(x)) +} + +/** + * A token wraps a string representation of a token + * as it is passed through the text processing pipeline. + * + * @constructor + * @param {string} [str=''] - The string token being wrapped. + * @param {object} [metadata={}] - Metadata associated with this token. + */ +lunr.Token = function (str, metadata) { + this.str = str || "" + this.metadata = metadata || {} +} + +/** + * Returns the token string that is being wrapped by this object. + * + * @returns {string} + */ +lunr.Token.prototype.toString = function () { + return this.str +} + +/** + * A token update function is used when updating or optionally + * when cloning a token. + * + * @callback lunr.Token~updateFunction + * @param {string} str - The string representation of the token. + * @param {Object} metadata - All metadata associated with this token. + */ + +/** + * Applies the given function to the wrapped string token. + * + * @example + * token.update(function (str, metadata) { + * return str.toUpperCase() + * }) + * + * @param {lunr.Token~updateFunction} fn - A function to apply to the token string. + * @returns {lunr.Token} + */ +lunr.Token.prototype.update = function (fn) { + this.str = fn(this.str, this.metadata) + return this +} + +/** + * Creates a clone of this token. Optionally a function can be + * applied to the cloned token. + * + * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token. + * @returns {lunr.Token} + */ +lunr.Token.prototype.clone = function (fn) { + fn = fn || function (s) { return s } + return new lunr.Token (fn(this.str, this.metadata), this.metadata) +} +/*! + * lunr.tokenizer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A function for splitting a string into tokens ready to be inserted into + * the search index. Uses `lunr.tokenizer.separator` to split strings, change + * the value of this property to change how strings are split into tokens. + * + * This tokenizer will convert its parameter to a string by calling `toString` and + * then will split this string on the character in `lunr.tokenizer.separator`. + * Arrays will have their elements converted to strings and wrapped in a lunr.Token. + * + * Optional metadata can be passed to the tokenizer, this metadata will be cloned and + * added as metadata to every token that is created from the object to be tokenized. 
+ * + * @static + * @param {?(string|object|object[])} obj - The object to convert into tokens + * @param {?object} metadata - Optional metadata to associate with every token + * @returns {lunr.Token[]} + * @see {@link lunr.Pipeline} + */ +lunr.tokenizer = function (obj, metadata) { + if (obj == null || obj == undefined) { + return [] + } + + if (Array.isArray(obj)) { + return obj.map(function (t) { + return new lunr.Token( + lunr.utils.asString(t).toLowerCase(), + lunr.utils.clone(metadata) + ) + }) + } + + var str = obj.toString().toLowerCase(), + len = str.length, + tokens = [] + + for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) { + var char = str.charAt(sliceEnd), + sliceLength = sliceEnd - sliceStart + + if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) { + + if (sliceLength > 0) { + var tokenMetadata = lunr.utils.clone(metadata) || {} + tokenMetadata["position"] = [sliceStart, sliceLength] + tokenMetadata["index"] = tokens.length + + tokens.push( + new lunr.Token ( + str.slice(sliceStart, sliceEnd), + tokenMetadata + ) + ) + } + + sliceStart = sliceEnd + 1 + } + + } + + return tokens +} + +/** + * The separator used to split a string into tokens. Override this property to change the behaviour of + * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens. + * + * @static + * @see lunr.tokenizer + */ +lunr.tokenizer.separator = /[\s\-]+/ +/*! + * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Pipelines maintain an ordered list of functions to be applied to all + * tokens in documents entering the search index and queries being ran against + * the index. + * + * An instance of lunr.Index created with the lunr shortcut will contain a + * pipeline with a stop word filter and an English language stemmer. Extra + * functions can be added before or after either of these functions or these + * default functions can be removed. + * + * When run the pipeline will call each function in turn, passing a token, the + * index of that token in the original list of all tokens and finally a list of + * all the original tokens. + * + * The output of functions in the pipeline will be passed to the next function + * in the pipeline. To exclude a token from entering the index the function + * should return undefined, the rest of the pipeline will not be called with + * this token. + * + * For serialisation of pipelines to work, all functions used in an instance of + * a pipeline should be registered with lunr.Pipeline. Registered functions can + * then be loaded. If trying to load a serialised pipeline that uses functions + * that are not registered an error will be thrown. + * + * If not planning on serialising the pipeline then registering pipeline functions + * is not necessary. + * + * @constructor + */ +lunr.Pipeline = function () { + this._stack = [] +} + +lunr.Pipeline.registeredFunctions = Object.create(null) + +/** + * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token + * string as well as all known metadata. A pipeline function can mutate the token string + * or mutate (or add) metadata for a given token. + * + * A pipeline function can indicate that the passed token should be discarded by returning + * null, undefined or an empty string. This token will not be passed to any downstream pipeline + * functions and will not be added to the index. + * + * Multiple tokens can be returned by returning an array of tokens. 
Each token will be passed + * to any downstream pipeline functions and all will returned tokens will be added to the index. + * + * Any number of pipeline functions may be chained together using a lunr.Pipeline. + * + * @interface lunr.PipelineFunction + * @param {lunr.Token} token - A token from the document being processed. + * @param {number} i - The index of this token in the complete list of tokens for this document/field. + * @param {lunr.Token[]} tokens - All tokens for this document/field. + * @returns {(?lunr.Token|lunr.Token[])} + */ + +/** + * Register a function with the pipeline. + * + * Functions that are used in the pipeline should be registered if the pipeline + * needs to be serialised, or a serialised pipeline needs to be loaded. + * + * Registering a function does not add it to a pipeline, functions must still be + * added to instances of the pipeline for them to be used when running a pipeline. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @param {String} label - The label to register this function with + */ +lunr.Pipeline.registerFunction = function (fn, label) { + if (label in this.registeredFunctions) { + lunr.utils.warn('Overwriting existing registered function: ' + label) + } + + fn.label = label + lunr.Pipeline.registeredFunctions[fn.label] = fn +} + +/** + * Warns if the function is not registered as a Pipeline function. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @private + */ +lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) { + var isRegistered = fn.label && (fn.label in this.registeredFunctions) + + if (!isRegistered) { + lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn) + } +} + +/** + * Loads a previously serialised pipeline. + * + * All functions to be loaded must already be registered with lunr.Pipeline. + * If any function from the serialised data has not been registered then an + * error will be thrown. + * + * @param {Object} serialised - The serialised pipeline to load. + * @returns {lunr.Pipeline} + */ +lunr.Pipeline.load = function (serialised) { + var pipeline = new lunr.Pipeline + + serialised.forEach(function (fnName) { + var fn = lunr.Pipeline.registeredFunctions[fnName] + + if (fn) { + pipeline.add(fn) + } else { + throw new Error('Cannot load unregistered function: ' + fnName) + } + }) + + return pipeline +} + +/** + * Adds new functions to the end of the pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline. + */ +lunr.Pipeline.prototype.add = function () { + var fns = Array.prototype.slice.call(arguments) + + fns.forEach(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + this._stack.push(fn) + }, this) +} + +/** + * Adds a single function after a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. 
+ */ +lunr.Pipeline.prototype.after = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + pos = pos + 1 + this._stack.splice(pos, 0, newFn) +} + +/** + * Adds a single function before a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. + */ +lunr.Pipeline.prototype.before = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + this._stack.splice(pos, 0, newFn) +} + +/** + * Removes a function from the pipeline. + * + * @param {lunr.PipelineFunction} fn The function to remove from the pipeline. + */ +lunr.Pipeline.prototype.remove = function (fn) { + var pos = this._stack.indexOf(fn) + if (pos == -1) { + return + } + + this._stack.splice(pos, 1) +} + +/** + * Runs the current list of functions that make up the pipeline against the + * passed tokens. + * + * @param {Array} tokens The tokens to run through the pipeline. + * @returns {Array} + */ +lunr.Pipeline.prototype.run = function (tokens) { + var stackLength = this._stack.length + + for (var i = 0; i < stackLength; i++) { + var fn = this._stack[i] + var memo = [] + + for (var j = 0; j < tokens.length; j++) { + var result = fn(tokens[j], j, tokens) + + if (result === null || result === void 0 || result === '') continue + + if (Array.isArray(result)) { + for (var k = 0; k < result.length; k++) { + memo.push(result[k]) + } + } else { + memo.push(result) + } + } + + tokens = memo + } + + return tokens +} + +/** + * Convenience method for passing a string through a pipeline and getting + * strings out. This method takes care of wrapping the passed string in a + * token and mapping the resulting tokens back to strings. + * + * @param {string} str - The string to pass through the pipeline. + * @param {?object} metadata - Optional metadata to associate with the token + * passed to the pipeline. + * @returns {string[]} + */ +lunr.Pipeline.prototype.runString = function (str, metadata) { + var token = new lunr.Token (str, metadata) + + return this.run([token]).map(function (t) { + return t.toString() + }) +} + +/** + * Resets the pipeline by removing any existing processors. + * + */ +lunr.Pipeline.prototype.reset = function () { + this._stack = [] +} + +/** + * Returns a representation of the pipeline ready for serialisation. + * + * Logs a warning if the function has not been registered. + * + * @returns {Array} + */ +lunr.Pipeline.prototype.toJSON = function () { + return this._stack.map(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + + return fn.label + }) +} +/*! + * lunr.Vector + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A vector is used to construct the vector space of documents and queries. These + * vectors support operations to determine the similarity between two documents or + * a document and a query. + * + * Normally no parameters are required for initializing a vector, but in the case of + * loading a previously dumped vector the raw elements can be provided to the constructor. 
+ * + * For performance reasons vectors are implemented with a flat array, where an elements + * index is immediately followed by its value. E.g. [index, value, index, value]. This + * allows the underlying array to be as sparse as possible and still offer decent + * performance when being used for vector calculations. + * + * @constructor + * @param {Number[]} [elements] - The flat list of element index and element value pairs. + */ +lunr.Vector = function (elements) { + this._magnitude = 0 + this.elements = elements || [] +} + + +/** + * Calculates the position within the vector to insert a given index. + * + * This is used internally by insert and upsert. If there are duplicate indexes then + * the position is returned as if the value for that index were to be updated, but it + * is the callers responsibility to check whether there is a duplicate at that index + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @returns {Number} + */ +lunr.Vector.prototype.positionForIndex = function (index) { + // For an empty vector the tuple can be inserted at the beginning + if (this.elements.length == 0) { + return 0 + } + + var start = 0, + end = this.elements.length / 2, + sliceLength = end - start, + pivotPoint = Math.floor(sliceLength / 2), + pivotIndex = this.elements[pivotPoint * 2] + + while (sliceLength > 1) { + if (pivotIndex < index) { + start = pivotPoint + } + + if (pivotIndex > index) { + end = pivotPoint + } + + if (pivotIndex == index) { + break + } + + sliceLength = end - start + pivotPoint = start + Math.floor(sliceLength / 2) + pivotIndex = this.elements[pivotPoint * 2] + } + + if (pivotIndex == index) { + return pivotPoint * 2 + } + + if (pivotIndex > index) { + return pivotPoint * 2 + } + + if (pivotIndex < index) { + return (pivotPoint + 1) * 2 + } +} + +/** + * Inserts an element at an index within the vector. + * + * Does not allow duplicates, will throw an error if there is already an entry + * for this index. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + */ +lunr.Vector.prototype.insert = function (insertIdx, val) { + this.upsert(insertIdx, val, function () { + throw "duplicate index" + }) +} + +/** + * Inserts or updates an existing index within the vector. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + * @param {function} fn - A function that is called for updates, the existing value and the + * requested value are passed as arguments + */ +lunr.Vector.prototype.upsert = function (insertIdx, val, fn) { + this._magnitude = 0 + var position = this.positionForIndex(insertIdx) + + if (this.elements[position] == insertIdx) { + this.elements[position + 1] = fn(this.elements[position + 1], val) + } else { + this.elements.splice(position, 0, insertIdx, val) + } +} + +/** + * Calculates the magnitude of this vector. + * + * @returns {Number} + */ +lunr.Vector.prototype.magnitude = function () { + if (this._magnitude) return this._magnitude + + var sumOfSquares = 0, + elementsLength = this.elements.length + + for (var i = 1; i < elementsLength; i += 2) { + var val = this.elements[i] + sumOfSquares += val * val + } + + return this._magnitude = Math.sqrt(sumOfSquares) +} + +/** + * Calculates the dot product of this vector and another vector. + * + * @param {lunr.Vector} otherVector - The vector to compute the dot product with. 
+ * @returns {Number} + */ +lunr.Vector.prototype.dot = function (otherVector) { + var dotProduct = 0, + a = this.elements, b = otherVector.elements, + aLen = a.length, bLen = b.length, + aVal = 0, bVal = 0, + i = 0, j = 0 + + while (i < aLen && j < bLen) { + aVal = a[i], bVal = b[j] + if (aVal < bVal) { + i += 2 + } else if (aVal > bVal) { + j += 2 + } else if (aVal == bVal) { + dotProduct += a[i + 1] * b[j + 1] + i += 2 + j += 2 + } + } + + return dotProduct +} + +/** + * Calculates the similarity between this vector and another vector. + * + * @param {lunr.Vector} otherVector - The other vector to calculate the + * similarity with. + * @returns {Number} + */ +lunr.Vector.prototype.similarity = function (otherVector) { + return this.dot(otherVector) / this.magnitude() || 0 +} + +/** + * Converts the vector to an array of the elements within the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toArray = function () { + var output = new Array (this.elements.length / 2) + + for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) { + output[j] = this.elements[i] + } + + return output +} + +/** + * A JSON serializable representation of the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toJSON = function () { + return this.elements +} +/* eslint-disable */ +/*! + * lunr.stemmer + * Copyright (C) 2020 Oliver Nightingale + * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt + */ + +/** + * lunr.stemmer is an english language stemmer, this is a JavaScript + * implementation of the PorterStemmer taken from http://tartarus.org/~martin + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token - The string to stem + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + * @function + */ +lunr.stemmer = (function(){ + var step2list = { + "ational" : "ate", + "tional" : "tion", + "enci" : "ence", + "anci" : "ance", + "izer" : "ize", + "bli" : "ble", + "alli" : "al", + "entli" : "ent", + "eli" : "e", + "ousli" : "ous", + "ization" : "ize", + "ation" : "ate", + "ator" : "ate", + "alism" : "al", + "iveness" : "ive", + "fulness" : "ful", + "ousness" : "ous", + "aliti" : "al", + "iviti" : "ive", + "biliti" : "ble", + "logi" : "log" + }, + + step3list = { + "icate" : "ic", + "ative" : "", + "alize" : "al", + "iciti" : "ic", + "ical" : "ic", + "ful" : "", + "ness" : "" + }, + + c = "[^aeiou]", // consonant + v = "[aeiouy]", // vowel + C = c + "[^aeiouy]*", // consonant sequence + V = v + "[aeiou]*", // vowel sequence + + mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0 + meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1 + mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1 + s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + var re_mgr0 = new RegExp(mgr0); + var re_mgr1 = new RegExp(mgr1); + var re_meq1 = new RegExp(meq1); + var re_s_v = new RegExp(s_v); + + var re_1a = /^(.+?)(ss|i)es$/; + var re2_1a = /^(.+?)([^s])s$/; + var re_1b = /^(.+?)eed$/; + var re2_1b = /^(.+?)(ed|ing)$/; + var re_1b_2 = /.$/; + var re2_1b_2 = /(at|bl|iz)$/; + var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$"); + var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var re_1c = /^(.+?[^aeiou])y$/; + var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + + var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + + var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + var re2_4 = /^(.+?)(s|t)(ion)$/; + + var re_5 = /^(.+?)e$/; + var re_5_1 = /ll$/; + var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var porterStemmer = function porterStemmer(w) { + var stem, + suffix, + firstch, + re, + re2, + re3, + re4; + + if (w.length < 3) { return w; } + + firstch = w.substr(0,1); + if (firstch == "y") { + w = firstch.toUpperCase() + w.substr(1); + } + + // Step 1a + re = re_1a + re2 = re2_1a; + + if (re.test(w)) { w = w.replace(re,"$1$2"); } + else if (re2.test(w)) { w = w.replace(re2,"$1$2"); } + + // Step 1b + re = re_1b; + re2 = re2_1b; + if (re.test(w)) { + var fp = re.exec(w); + re = re_mgr0; + if (re.test(fp[1])) { + re = re_1b_2; + w = w.replace(re,""); + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = re_s_v; + if (re2.test(stem)) { + w = stem; + re2 = re2_1b_2; + re3 = re3_1b_2; + re4 = re4_1b_2; + if (re2.test(w)) { w = w + "e"; } + else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); } + else if (re4.test(w)) { w = w + "e"; } + } + } + + // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say) + re = re_1c; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + w = stem + "i"; + } + + // Step 2 + re = re_2; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step2list[suffix]; + } + } + + // Step 3 + re = re_3; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step3list[suffix]; + } + } + + // Step 4 + re = re_4; + re2 = re2_4; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + if (re.test(stem)) { + w = stem; + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = re_mgr1; + if (re2.test(stem)) { + w = stem; + } + } + + // Step 5 + re = re_5; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + re2 = re_meq1; + re3 = re3_5; + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) { + w = stem; + } + } + + re = re_5_1; + re2 = re_mgr1; + if (re.test(w) && re2.test(w)) { + re = re_1b_2; + w = w.replace(re,""); + } + + // and turn initial Y back to y + + if (firstch == "y") { + w = firstch.toLowerCase() + w.substr(1); + } + + return w; + }; + + return function (token) { + return token.update(porterStemmer); + } +})(); + +lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer') +/*! + * lunr.stopWordFilter + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.generateStopWordFilter builds a stopWordFilter function from the provided + * list of stop words. 
+ * + * The built in lunr.stopWordFilter is built using this generator and can be used + * to generate custom stopWordFilters for applications or non English languages. + * + * @function + * @param {Array} token The token to pass through the filter + * @returns {lunr.PipelineFunction} + * @see lunr.Pipeline + * @see lunr.stopWordFilter + */ +lunr.generateStopWordFilter = function (stopWords) { + var words = stopWords.reduce(function (memo, stopWord) { + memo[stopWord] = stopWord + return memo + }, {}) + + return function (token) { + if (token && words[token.toString()] !== token.toString()) return token + } +} + +/** + * lunr.stopWordFilter is an English language stop word list filter, any words + * contained in the list will not be passed through the filter. + * + * This is intended to be used in the Pipeline. If the token does not pass the + * filter then undefined will be returned. + * + * @function + * @implements {lunr.PipelineFunction} + * @params {lunr.Token} token - A token to check for being a stop word. + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + */ +lunr.stopWordFilter = lunr.generateStopWordFilter([ + 'a', + 'able', + 'about', + 'across', + 'after', + 'all', + 'almost', + 'also', + 'am', + 'among', + 'an', + 'and', + 'any', + 'are', + 'as', + 'at', + 'be', + 'because', + 'been', + 'but', + 'by', + 'can', + 'cannot', + 'could', + 'dear', + 'did', + 'do', + 'does', + 'either', + 'else', + 'ever', + 'every', + 'for', + 'from', + 'get', + 'got', + 'had', + 'has', + 'have', + 'he', + 'her', + 'hers', + 'him', + 'his', + 'how', + 'however', + 'i', + 'if', + 'in', + 'into', + 'is', + 'it', + 'its', + 'just', + 'least', + 'let', + 'like', + 'likely', + 'may', + 'me', + 'might', + 'most', + 'must', + 'my', + 'neither', + 'no', + 'nor', + 'not', + 'of', + 'off', + 'often', + 'on', + 'only', + 'or', + 'other', + 'our', + 'own', + 'rather', + 'said', + 'say', + 'says', + 'she', + 'should', + 'since', + 'so', + 'some', + 'than', + 'that', + 'the', + 'their', + 'them', + 'then', + 'there', + 'these', + 'they', + 'this', + 'tis', + 'to', + 'too', + 'twas', + 'us', + 'wants', + 'was', + 'we', + 'were', + 'what', + 'when', + 'where', + 'which', + 'while', + 'who', + 'whom', + 'why', + 'will', + 'with', + 'would', + 'yet', + 'you', + 'your' +]) + +lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter') +/*! + * lunr.trimmer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.trimmer is a pipeline function for trimming non word + * characters from the beginning and end of tokens before they + * enter the index. + * + * This implementation may not work correctly for non latin + * characters and should either be removed or adapted for use + * with languages with non-latin characters. + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token The token to pass through the filter + * @returns {lunr.Token} + * @see lunr.Pipeline + */ +lunr.trimmer = function (token) { + return token.update(function (s) { + return s.replace(/^\W+/, '').replace(/\W+$/, '') + }) +} + +lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer') +/*! + * lunr.TokenSet + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A token set is used to store the unique list of all tokens + * within an index. Token sets are also used to represent an + * incoming query to the index, this query token set and index + * token set are then intersected to find which tokens to look + * up in the inverted index. 
+ * + * A token set can hold multiple tokens, as in the case of the + * index token set, or it can hold a single token as in the + * case of a simple query token set. + * + * Additionally token sets are used to perform wildcard matching. + * Leading, contained and trailing wildcards are supported, and + * from this edit distance matching can also be provided. + * + * Token sets are implemented as a minimal finite state automata, + * where both common prefixes and suffixes are shared between tokens. + * This helps to reduce the space used for storing the token set. + * + * @constructor + */ +lunr.TokenSet = function () { + this.final = false + this.edges = {} + this.id = lunr.TokenSet._nextId + lunr.TokenSet._nextId += 1 +} + +/** + * Keeps track of the next, auto increment, identifier to assign + * to a new tokenSet. + * + * TokenSets require a unique identifier to be correctly minimised. + * + * @private + */ +lunr.TokenSet._nextId = 1 + +/** + * Creates a TokenSet instance from the given sorted array of words. + * + * @param {String[]} arr - A sorted array of strings to create the set from. + * @returns {lunr.TokenSet} + * @throws Will throw an error if the input array is not sorted. + */ +lunr.TokenSet.fromArray = function (arr) { + var builder = new lunr.TokenSet.Builder + + for (var i = 0, len = arr.length; i < len; i++) { + builder.insert(arr[i]) + } + + builder.finish() + return builder.root +} + +/** + * Creates a token set from a query clause. + * + * @private + * @param {Object} clause - A single clause from lunr.Query. + * @param {string} clause.term - The query clause term. + * @param {number} [clause.editDistance] - The optional edit distance for the term. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromClause = function (clause) { + if ('editDistance' in clause) { + return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance) + } else { + return lunr.TokenSet.fromString(clause.term) + } +} + +/** + * Creates a token set representing a single string with a specified + * edit distance. + * + * Insertions, deletions, substitutions and transpositions are each + * treated as an edit distance of 1. + * + * Increasing the allowed edit distance will have a dramatic impact + * on the performance of both creating and intersecting these TokenSets. + * It is advised to keep the edit distance less than 3. + * + * @param {string} str - The string to create the token set from. + * @param {number} editDistance - The allowed edit distance to match. 
+ * @returns {lunr.Vector} + */ +lunr.TokenSet.fromFuzzyString = function (str, editDistance) { + var root = new lunr.TokenSet + + var stack = [{ + node: root, + editsRemaining: editDistance, + str: str + }] + + while (stack.length) { + var frame = stack.pop() + + // no edit + if (frame.str.length > 0) { + var char = frame.str.charAt(0), + noEditNode + + if (char in frame.node.edges) { + noEditNode = frame.node.edges[char] + } else { + noEditNode = new lunr.TokenSet + frame.node.edges[char] = noEditNode + } + + if (frame.str.length == 1) { + noEditNode.final = true + } + + stack.push({ + node: noEditNode, + editsRemaining: frame.editsRemaining, + str: frame.str.slice(1) + }) + } + + if (frame.editsRemaining == 0) { + continue + } + + // insertion + if ("*" in frame.node.edges) { + var insertionNode = frame.node.edges["*"] + } else { + var insertionNode = new lunr.TokenSet + frame.node.edges["*"] = insertionNode + } + + if (frame.str.length == 0) { + insertionNode.final = true + } + + stack.push({ + node: insertionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str + }) + + // deletion + // can only do a deletion if we have enough edits remaining + // and if there are characters left to delete in the string + if (frame.str.length > 1) { + stack.push({ + node: frame.node, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // deletion + // just removing the last character from the str + if (frame.str.length == 1) { + frame.node.final = true + } + + // substitution + // can only do a substitution if we have enough edits remaining + // and if there are characters left to substitute + if (frame.str.length >= 1) { + if ("*" in frame.node.edges) { + var substitutionNode = frame.node.edges["*"] + } else { + var substitutionNode = new lunr.TokenSet + frame.node.edges["*"] = substitutionNode + } + + if (frame.str.length == 1) { + substitutionNode.final = true + } + + stack.push({ + node: substitutionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // transposition + // can only do a transposition if there are edits remaining + // and there are enough characters to transpose + if (frame.str.length > 1) { + var charA = frame.str.charAt(0), + charB = frame.str.charAt(1), + transposeNode + + if (charB in frame.node.edges) { + transposeNode = frame.node.edges[charB] + } else { + transposeNode = new lunr.TokenSet + frame.node.edges[charB] = transposeNode + } + + if (frame.str.length == 1) { + transposeNode.final = true + } + + stack.push({ + node: transposeNode, + editsRemaining: frame.editsRemaining - 1, + str: charA + frame.str.slice(2) + }) + } + } + + return root +} + +/** + * Creates a TokenSet from a string. + * + * The string may contain one or more wildcard characters (*) + * that will allow wildcard matching when intersecting with + * another TokenSet. + * + * @param {string} str - The string to create a TokenSet from. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromString = function (str) { + var node = new lunr.TokenSet, + root = node + + /* + * Iterates through all characters within the passed string + * appending a node for each character. + * + * When a wildcard character is found then a self + * referencing edge is introduced to continually match + * any number of any characters. 
+ */ + for (var i = 0, len = str.length; i < len; i++) { + var char = str[i], + final = (i == len - 1) + + if (char == "*") { + node.edges[char] = node + node.final = final + + } else { + var next = new lunr.TokenSet + next.final = final + + node.edges[char] = next + node = next + } + } + + return root +} + +/** + * Converts this TokenSet into an array of strings + * contained within the TokenSet. + * + * This is not intended to be used on a TokenSet that + * contains wildcards, in these cases the results are + * undefined and are likely to cause an infinite loop. + * + * @returns {string[]} + */ +lunr.TokenSet.prototype.toArray = function () { + var words = [] + + var stack = [{ + prefix: "", + node: this + }] + + while (stack.length) { + var frame = stack.pop(), + edges = Object.keys(frame.node.edges), + len = edges.length + + if (frame.node.final) { + /* In Safari, at this point the prefix is sometimes corrupted, see: + * https://github.com/olivernn/lunr.js/issues/279 Calling any + * String.prototype method forces Safari to "cast" this string to what + * it's supposed to be, fixing the bug. */ + frame.prefix.charAt(0) + words.push(frame.prefix) + } + + for (var i = 0; i < len; i++) { + var edge = edges[i] + + stack.push({ + prefix: frame.prefix.concat(edge), + node: frame.node.edges[edge] + }) + } + } + + return words +} + +/** + * Generates a string representation of a TokenSet. + * + * This is intended to allow TokenSets to be used as keys + * in objects, largely to aid the construction and minimisation + * of a TokenSet. As such it is not designed to be a human + * friendly representation of the TokenSet. + * + * @returns {string} + */ +lunr.TokenSet.prototype.toString = function () { + // NOTE: Using Object.keys here as this.edges is very likely + // to enter 'hash-mode' with many keys being added + // + // avoiding a for-in loop here as it leads to the function + // being de-optimised (at least in V8). From some simple + // benchmarks the performance is comparable, but allowing + // V8 to optimize may mean easy performance wins in the future. + + if (this._str) { + return this._str + } + + var str = this.final ? '1' : '0', + labels = Object.keys(this.edges).sort(), + len = labels.length + + for (var i = 0; i < len; i++) { + var label = labels[i], + node = this.edges[label] + + str = str + label + node.id + } + + return str +} + +/** + * Returns a new TokenSet that is the intersection of + * this TokenSet and the passed TokenSet. + * + * This intersection will take into account any wildcards + * contained within the TokenSet. + * + * @param {lunr.TokenSet} b - An other TokenSet to intersect with. 
+ * @returns {lunr.TokenSet} + */ +lunr.TokenSet.prototype.intersect = function (b) { + var output = new lunr.TokenSet, + frame = undefined + + var stack = [{ + qNode: b, + output: output, + node: this + }] + + while (stack.length) { + frame = stack.pop() + + // NOTE: As with the #toString method, we are using + // Object.keys and a for loop instead of a for-in loop + // as both of these objects enter 'hash' mode, causing + // the function to be de-optimised in V8 + var qEdges = Object.keys(frame.qNode.edges), + qLen = qEdges.length, + nEdges = Object.keys(frame.node.edges), + nLen = nEdges.length + + for (var q = 0; q < qLen; q++) { + var qEdge = qEdges[q] + + for (var n = 0; n < nLen; n++) { + var nEdge = nEdges[n] + + if (nEdge == qEdge || qEdge == '*') { + var node = frame.node.edges[nEdge], + qNode = frame.qNode.edges[qEdge], + final = node.final && qNode.final, + next = undefined + + if (nEdge in frame.output.edges) { + // an edge already exists for this character + // no need to create a new node, just set the finality + // bit unless this node is already final + next = frame.output.edges[nEdge] + next.final = next.final || final + + } else { + // no edge exists yet, must create one + // set the finality bit and insert it + // into the output + next = new lunr.TokenSet + next.final = final + frame.output.edges[nEdge] = next + } + + stack.push({ + qNode: qNode, + output: next, + node: node + }) + } + } + } + } + + return output +} +lunr.TokenSet.Builder = function () { + this.previousWord = "" + this.root = new lunr.TokenSet + this.uncheckedNodes = [] + this.minimizedNodes = {} +} + +lunr.TokenSet.Builder.prototype.insert = function (word) { + var node, + commonPrefix = 0 + + if (word < this.previousWord) { + throw new Error ("Out of order word insertion") + } + + for (var i = 0; i < word.length && i < this.previousWord.length; i++) { + if (word[i] != this.previousWord[i]) break + commonPrefix++ + } + + this.minimize(commonPrefix) + + if (this.uncheckedNodes.length == 0) { + node = this.root + } else { + node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child + } + + for (var i = commonPrefix; i < word.length; i++) { + var nextNode = new lunr.TokenSet, + char = word[i] + + node.edges[char] = nextNode + + this.uncheckedNodes.push({ + parent: node, + char: char, + child: nextNode + }) + + node = nextNode + } + + node.final = true + this.previousWord = word +} + +lunr.TokenSet.Builder.prototype.finish = function () { + this.minimize(0) +} + +lunr.TokenSet.Builder.prototype.minimize = function (downTo) { + for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) { + var node = this.uncheckedNodes[i], + childKey = node.child.toString() + + if (childKey in this.minimizedNodes) { + node.parent.edges[node.char] = this.minimizedNodes[childKey] + } else { + // Cache the key for this node since + // we know it can't change anymore + node.child._str = childKey + + this.minimizedNodes[childKey] = node.child + } + + this.uncheckedNodes.pop() + } +} +/*! + * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * An index contains the built index of all documents and provides a query interface + * to the index. + * + * Usually instances of lunr.Index will not be created using this constructor, instead + * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be + * used to load previously built and serialized indexes. + * + * @constructor + * @param {Object} attrs - The attributes of the built search index. 
+ * @param {Object} attrs.invertedIndex - An index of term/field to document reference. + * @param {Object} attrs.fieldVectors - Field vectors + * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens. + * @param {string[]} attrs.fields - The names of indexed document fields. + * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms. + */ +lunr.Index = function (attrs) { + this.invertedIndex = attrs.invertedIndex + this.fieldVectors = attrs.fieldVectors + this.tokenSet = attrs.tokenSet + this.fields = attrs.fields + this.pipeline = attrs.pipeline +} + +/** + * A result contains details of a document matching a search query. + * @typedef {Object} lunr.Index~Result + * @property {string} ref - The reference of the document this result represents. + * @property {number} score - A number between 0 and 1 representing how similar this document is to the query. + * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match. + */ + +/** + * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple + * query language which itself is parsed into an instance of lunr.Query. + * + * For programmatically building queries it is advised to directly use lunr.Query, the query language + * is best used for human entered text rather than program generated text. + * + * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported + * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello' + * or 'world', though those that contain both will rank higher in the results. + * + * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can + * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding + * wildcards will increase the number of documents that will be found but can also have a negative + * impact on query performance, especially with wildcards at the beginning of a term. + * + * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term + * hello in the title field will match this query. Using a field not present in the index will lead + * to an error being thrown. + * + * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term + * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported + * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2. + * Avoid large values for edit distance to improve query performance. + * + * Each term also supports a presence modifier. By default a term's presence in document is optional, however + * this can be changed to either required or prohibited. For a term's presence to be required in a document the + * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and + * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not + * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'. + * + * To escape special characters the backslash character '\' can be used, this allows searches to include + * characters that would normally be considered modifiers, e.g. 
`foo\~2` will search for a term "foo~2" instead + * of attempting to apply a boost of 2 to the search term "foo". + * + * @typedef {string} lunr.Index~QueryString + * @example Simple single term query + * hello + * @example Multiple term query + * hello world + * @example term scoped to a field + * title:hello + * @example term with a boost of 10 + * hello^10 + * @example term with an edit distance of 2 + * hello~2 + * @example terms with presence modifiers + * -foo +bar baz + */ + +/** + * Performs a search against the index using lunr query syntax. + * + * Results will be returned sorted by their score, the most relevant results + * will be returned first. For details on how the score is calculated, please see + * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}. + * + * For more programmatic querying use lunr.Index#query. + * + * @param {lunr.Index~QueryString} queryString - A string containing a lunr query. + * @throws {lunr.QueryParseError} If the passed query string cannot be parsed. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.search = function (queryString) { + return this.query(function (query) { + var parser = new lunr.QueryParser(queryString, query) + parser.parse() + }) +} + +/** + * A query builder callback provides a query object to be used to express + * the query to perform on the index. + * + * @callback lunr.Index~queryBuilder + * @param {lunr.Query} query - The query object to build up. + * @this lunr.Query + */ + +/** + * Performs a query against the index using the yielded lunr.Query object. + * + * If performing programmatic queries against the index, this method is preferred + * over lunr.Index#search so as to avoid the additional query parsing overhead. + * + * A query object is yielded to the supplied function which should be used to + * express the query to be run against the index. + * + * Note that although this function takes a callback parameter it is _not_ an + * asynchronous operation, the callback is just yielded a query object to be + * customized. + * + * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.query = function (fn) { + // for each query clause + // * process terms + // * expand terms from token set + // * find matching documents and metadata + // * get document vectors + // * score documents + + var query = new lunr.Query(this.fields), + matchingFields = Object.create(null), + queryVectors = Object.create(null), + termFieldCache = Object.create(null), + requiredMatches = Object.create(null), + prohibitedMatches = Object.create(null) + + /* + * To support field level boosts a query vector is created per + * field. An empty vector is eagerly created to support negated + * queries. + */ + for (var i = 0; i < this.fields.length; i++) { + queryVectors[this.fields[i]] = new lunr.Vector + } + + fn.call(query, query) + + for (var i = 0; i < query.clauses.length; i++) { + /* + * Unless the pipeline has been disabled for this term, which is + * the case for terms with wildcards, we need to pass the clause + * term through the search pipeline. A pipeline returns an array + * of processed terms. Pipeline functions may expand the passed + * term, which means we may end up performing multiple index lookups + * for a single query term. 
+ */ + var clause = query.clauses[i], + terms = null, + clauseMatches = lunr.Set.empty + + if (clause.usePipeline) { + terms = this.pipeline.runString(clause.term, { + fields: clause.fields + }) + } else { + terms = [clause.term] + } + + for (var m = 0; m < terms.length; m++) { + var term = terms[m] + + /* + * Each term returned from the pipeline needs to use the same query + * clause object, e.g. the same boost and or edit distance. The + * simplest way to do this is to re-use the clause object but mutate + * its term property. + */ + clause.term = term + + /* + * From the term in the clause we create a token set which will then + * be used to intersect the indexes token set to get a list of terms + * to lookup in the inverted index + */ + var termTokenSet = lunr.TokenSet.fromClause(clause), + expandedTerms = this.tokenSet.intersect(termTokenSet).toArray() + + /* + * If a term marked as required does not exist in the tokenSet it is + * impossible for the search to return any matches. We set all the field + * scoped required matches set to empty and stop examining any further + * clauses. + */ + if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = lunr.Set.empty + } + + break + } + + for (var j = 0; j < expandedTerms.length; j++) { + /* + * For each term get the posting and termIndex, this is required for + * building the query vector. + */ + var expandedTerm = expandedTerms[j], + posting = this.invertedIndex[expandedTerm], + termIndex = posting._index + + for (var k = 0; k < clause.fields.length; k++) { + /* + * For each field that this query term is scoped by (by default + * all fields are in scope) we need to get all the document refs + * that have this term in that field. + * + * The posting is the entry in the invertedIndex for the matching + * term from above. + */ + var field = clause.fields[k], + fieldPosting = posting[field], + matchingDocumentRefs = Object.keys(fieldPosting), + termField = expandedTerm + "/" + field, + matchingDocumentsSet = new lunr.Set(matchingDocumentRefs) + + /* + * if the presence of this term is required ensure that the matching + * documents are added to the set of required matches for this clause. + * + */ + if (clause.presence == lunr.Query.presence.REQUIRED) { + clauseMatches = clauseMatches.union(matchingDocumentsSet) + + if (requiredMatches[field] === undefined) { + requiredMatches[field] = lunr.Set.complete + } + } + + /* + * if the presence of this term is prohibited ensure that the matching + * documents are added to the set of prohibited matches for this field, + * creating that set if it does not yet exist. + */ + if (clause.presence == lunr.Query.presence.PROHIBITED) { + if (prohibitedMatches[field] === undefined) { + prohibitedMatches[field] = lunr.Set.empty + } + + prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet) + + /* + * Prohibited matches should not be part of the query vector used for + * similarity scoring and no metadata should be extracted so we continue + * to the next field + */ + continue + } + + /* + * The query field vector is populated using the termIndex found for + * the term and a unit value with the appropriate boost applied. + * Using upsert because there could already be an entry in the vector + * for the term we are working with. In that case we just add the scores + * together. 
+ */ + queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b }) + + /** + * If we've already seen this term, field combo then we've already collected + * the matching documents and metadata, no need to go through all that again + */ + if (termFieldCache[termField]) { + continue + } + + for (var l = 0; l < matchingDocumentRefs.length; l++) { + /* + * All metadata for this term/field/document triple + * are then extracted and collected into an instance + * of lunr.MatchData ready to be returned in the query + * results + */ + var matchingDocumentRef = matchingDocumentRefs[l], + matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field), + metadata = fieldPosting[matchingDocumentRef], + fieldMatch + + if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) { + matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata) + } else { + fieldMatch.add(expandedTerm, field, metadata) + } + + } + + termFieldCache[termField] = true + } + } + } + + /** + * If the presence was required we need to update the requiredMatches field sets. + * We do this after all fields for the term have collected their matches because + * the clause terms presence is required in _any_ of the fields not _all_ of the + * fields. + */ + if (clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = requiredMatches[field].intersect(clauseMatches) + } + } + } + + /** + * Need to combine the field scoped required and prohibited + * matching documents into a global set of required and prohibited + * matches + */ + var allRequiredMatches = lunr.Set.complete, + allProhibitedMatches = lunr.Set.empty + + for (var i = 0; i < this.fields.length; i++) { + var field = this.fields[i] + + if (requiredMatches[field]) { + allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field]) + } + + if (prohibitedMatches[field]) { + allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field]) + } + } + + var matchingFieldRefs = Object.keys(matchingFields), + results = [], + matches = Object.create(null) + + /* + * If the query is negated (contains only prohibited terms) + * we need to get _all_ fieldRefs currently existing in the + * index. This is only done when we know that the query is + * entirely prohibited terms to avoid any cost of getting all + * fieldRefs unnecessarily. + * + * Additionally, blank MatchData must be created to correctly + * populate the results. + */ + if (query.isNegated()) { + matchingFieldRefs = Object.keys(this.fieldVectors) + + for (var i = 0; i < matchingFieldRefs.length; i++) { + var matchingFieldRef = matchingFieldRefs[i] + var fieldRef = lunr.FieldRef.fromString(matchingFieldRef) + matchingFields[matchingFieldRef] = new lunr.MatchData + } + } + + for (var i = 0; i < matchingFieldRefs.length; i++) { + /* + * Currently we have document fields that match the query, but we + * need to return documents. The matchData and scores are combined + * from multiple fields belonging to the same document. + * + * Scores are calculated by field, using the query vectors created + * above, and combined into a final document score using addition. 
+ */ + var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]), + docRef = fieldRef.docRef + + if (!allRequiredMatches.contains(docRef)) { + continue + } + + if (allProhibitedMatches.contains(docRef)) { + continue + } + + var fieldVector = this.fieldVectors[fieldRef], + score = queryVectors[fieldRef.fieldName].similarity(fieldVector), + docMatch + + if ((docMatch = matches[docRef]) !== undefined) { + docMatch.score += score + docMatch.matchData.combine(matchingFields[fieldRef]) + } else { + var match = { + ref: docRef, + score: score, + matchData: matchingFields[fieldRef] + } + matches[docRef] = match + results.push(match) + } + } + + /* + * Sort the results objects by score, highest first. + */ + return results.sort(function (a, b) { + return b.score - a.score + }) +} + +/** + * Prepares the index for JSON serialization. + * + * The schema for this JSON blob will be described in a + * separate JSON schema file. + * + * @returns {Object} + */ +lunr.Index.prototype.toJSON = function () { + var invertedIndex = Object.keys(this.invertedIndex) + .sort() + .map(function (term) { + return [term, this.invertedIndex[term]] + }, this) + + var fieldVectors = Object.keys(this.fieldVectors) + .map(function (ref) { + return [ref, this.fieldVectors[ref].toJSON()] + }, this) + + return { + version: lunr.version, + fields: this.fields, + fieldVectors: fieldVectors, + invertedIndex: invertedIndex, + pipeline: this.pipeline.toJSON() + } +} + +/** + * Loads a previously serialized lunr.Index + * + * @param {Object} serializedIndex - A previously serialized lunr.Index + * @returns {lunr.Index} + */ +lunr.Index.load = function (serializedIndex) { + var attrs = {}, + fieldVectors = {}, + serializedVectors = serializedIndex.fieldVectors, + invertedIndex = Object.create(null), + serializedInvertedIndex = serializedIndex.invertedIndex, + tokenSetBuilder = new lunr.TokenSet.Builder, + pipeline = lunr.Pipeline.load(serializedIndex.pipeline) + + if (serializedIndex.version != lunr.version) { + lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'") + } + + for (var i = 0; i < serializedVectors.length; i++) { + var tuple = serializedVectors[i], + ref = tuple[0], + elements = tuple[1] + + fieldVectors[ref] = new lunr.Vector(elements) + } + + for (var i = 0; i < serializedInvertedIndex.length; i++) { + var tuple = serializedInvertedIndex[i], + term = tuple[0], + posting = tuple[1] + + tokenSetBuilder.insert(term) + invertedIndex[term] = posting + } + + tokenSetBuilder.finish() + + attrs.fields = serializedIndex.fields + + attrs.fieldVectors = fieldVectors + attrs.invertedIndex = invertedIndex + attrs.tokenSet = tokenSetBuilder.root + attrs.pipeline = pipeline + + return new lunr.Index(attrs) +} +/*! + * lunr.Builder + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Builder performs indexing on a set of documents and + * returns instances of lunr.Index ready for querying. + * + * All configuration of the index is done via the builder, the + * fields to index, the document reference, the text processing + * pipeline and document scoring parameters are all set on the + * builder before indexing. + * + * @constructor + * @property {string} _ref - Internal reference to the document reference field. + * @property {string[]} _fields - Internal reference to the document fields to index. + * @property {object} invertedIndex - The inverted index maps terms to document fields. 
+ * @property {object} documentTermFrequencies - Keeps track of document term frequencies. + * @property {object} documentLengths - Keeps track of the length of documents added to the index. + * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing. + * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing. + * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index. + * @property {number} documentCount - Keeps track of the total number of documents indexed. + * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75. + * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2. + * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space. + * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index. + */ +lunr.Builder = function () { + this._ref = "id" + this._fields = Object.create(null) + this._documents = Object.create(null) + this.invertedIndex = Object.create(null) + this.fieldTermFrequencies = {} + this.fieldLengths = {} + this.tokenizer = lunr.tokenizer + this.pipeline = new lunr.Pipeline + this.searchPipeline = new lunr.Pipeline + this.documentCount = 0 + this._b = 0.75 + this._k1 = 1.2 + this.termIndex = 0 + this.metadataWhitelist = [] +} + +/** + * Sets the document field used as the document reference. Every document must have this field. + * The type of this field in the document should be a string, if it is not a string it will be + * coerced into a string by calling toString. + * + * The default ref is 'id'. + * + * The ref should _not_ be changed during indexing, it should be set before any documents are + * added to the index. Changing it during indexing can lead to inconsistent results. + * + * @param {string} ref - The name of the reference field in the document. + */ +lunr.Builder.prototype.ref = function (ref) { + this._ref = ref +} + +/** + * A function that is used to extract a field from a document. + * + * Lunr expects a field to be at the top level of a document, if however the field + * is deeply nested within a document an extractor function can be used to extract + * the right field for indexing. + * + * @callback fieldExtractor + * @param {object} doc - The document being added to the index. + * @returns {?(string|object|object[])} obj - The object that will be indexed for this field. + * @example Extracting a nested field + * function (doc) { return doc.nested.field } + */ + +/** + * Adds a field to the list of document fields that will be indexed. Every document being + * indexed should have this field. Null values for this field in indexed documents will + * not cause errors but will limit the chance of that document being retrieved by searches. + * + * All fields should be added before adding documents to the index. Adding fields after + * a document has been indexed will have no effect on already indexed documents. + * + * Fields can be boosted at build time. This allows terms within that field to have more + * importance when ranking search results. Use a field boost to specify that matches within + * one field are more important than other fields. 
+ * + * @param {string} fieldName - The name of a field to index in all documents. + * @param {object} attributes - Optional attributes associated with this field. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this field. + * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document. + * @throws {RangeError} fieldName cannot contain unsupported characters '/' + */ +lunr.Builder.prototype.field = function (fieldName, attributes) { + if (/\//.test(fieldName)) { + throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'") + } + + this._fields[fieldName] = attributes || {} +} + +/** + * A parameter to tune the amount of field length normalisation that is applied when + * calculating relevance scores. A value of 0 will completely disable any normalisation + * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b + * will be clamped to the range 0 - 1. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.b = function (number) { + if (number < 0) { + this._b = 0 + } else if (number > 1) { + this._b = 1 + } else { + this._b = number + } +} + +/** + * A parameter that controls the speed at which a rise in term frequency results in term + * frequency saturation. The default value is 1.2. Setting this to a higher value will give + * slower saturation levels, a lower value will result in quicker saturation. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.k1 = function (number) { + this._k1 = number +} + +/** + * Adds a document to the index. + * + * Before adding fields to the index the index should have been fully setup, with the document + * ref and all fields to index already having been specified. + * + * The document must have a field name as specified by the ref (by default this is 'id') and + * it should have all fields defined for indexing, though null or undefined values will not + * cause errors. + * + * Entire documents can be boosted at build time. Applying a boost to a document indicates that + * this document should rank higher in search results than other documents. + * + * @param {object} doc - The document to add to the index. + * @param {object} attributes - Optional attributes associated with this document. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this document. + */ +lunr.Builder.prototype.add = function (doc, attributes) { + var docRef = doc[this._ref], + fields = Object.keys(this._fields) + + this._documents[docRef] = attributes || {} + this.documentCount += 1 + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i], + extractor = this._fields[fieldName].extractor, + field = extractor ? 
extractor(doc) : doc[fieldName], + tokens = this.tokenizer(field, { + fields: [fieldName] + }), + terms = this.pipeline.run(tokens), + fieldRef = new lunr.FieldRef (docRef, fieldName), + fieldTerms = Object.create(null) + + this.fieldTermFrequencies[fieldRef] = fieldTerms + this.fieldLengths[fieldRef] = 0 + + // store the length of this field for this document + this.fieldLengths[fieldRef] += terms.length + + // calculate term frequencies for this field + for (var j = 0; j < terms.length; j++) { + var term = terms[j] + + if (fieldTerms[term] == undefined) { + fieldTerms[term] = 0 + } + + fieldTerms[term] += 1 + + // add to inverted index + // create an initial posting if one doesn't exist + if (this.invertedIndex[term] == undefined) { + var posting = Object.create(null) + posting["_index"] = this.termIndex + this.termIndex += 1 + + for (var k = 0; k < fields.length; k++) { + posting[fields[k]] = Object.create(null) + } + + this.invertedIndex[term] = posting + } + + // add an entry for this term/fieldName/docRef to the invertedIndex + if (this.invertedIndex[term][fieldName][docRef] == undefined) { + this.invertedIndex[term][fieldName][docRef] = Object.create(null) + } + + // store all whitelisted metadata about this token in the + // inverted index + for (var l = 0; l < this.metadataWhitelist.length; l++) { + var metadataKey = this.metadataWhitelist[l], + metadata = term.metadata[metadataKey] + + if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) { + this.invertedIndex[term][fieldName][docRef][metadataKey] = [] + } + + this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata) + } + } + + } +} + +/** + * Calculates the average document length for this index + * + * @private + */ +lunr.Builder.prototype.calculateAverageFieldLengths = function () { + + var fieldRefs = Object.keys(this.fieldLengths), + numberOfFields = fieldRefs.length, + accumulator = {}, + documentsWithField = {} + + for (var i = 0; i < numberOfFields; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + field = fieldRef.fieldName + + documentsWithField[field] || (documentsWithField[field] = 0) + documentsWithField[field] += 1 + + accumulator[field] || (accumulator[field] = 0) + accumulator[field] += this.fieldLengths[fieldRef] + } + + var fields = Object.keys(this._fields) + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i] + accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName] + } + + this.averageFieldLength = accumulator +} + +/** + * Builds a vector space model of every document using lunr.Vector + * + * @private + */ +lunr.Builder.prototype.createFieldVectors = function () { + var fieldVectors = {}, + fieldRefs = Object.keys(this.fieldTermFrequencies), + fieldRefsLength = fieldRefs.length, + termIdfCache = Object.create(null) + + for (var i = 0; i < fieldRefsLength; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + fieldName = fieldRef.fieldName, + fieldLength = this.fieldLengths[fieldRef], + fieldVector = new lunr.Vector, + termFrequencies = this.fieldTermFrequencies[fieldRef], + terms = Object.keys(termFrequencies), + termsLength = terms.length + + + var fieldBoost = this._fields[fieldName].boost || 1, + docBoost = this._documents[fieldRef.docRef].boost || 1 + + for (var j = 0; j < termsLength; j++) { + var term = terms[j], + tf = termFrequencies[term], + termIndex = this.invertedIndex[term]._index, + idf, score, scoreWithPrecision + + if (termIdfCache[term] === undefined) { + idf = 
lunr.idf(this.invertedIndex[term], this.documentCount) + termIdfCache[term] = idf + } else { + idf = termIdfCache[term] + } + + score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf) + score *= fieldBoost + score *= docBoost + scoreWithPrecision = Math.round(score * 1000) / 1000 + // Converts 1.23456789 to 1.234. + // Reducing the precision so that the vectors take up less + // space when serialised. Doing it now so that they behave + // the same before and after serialisation. Also, this is + // the fastest approach to reducing a number's precision in + // JavaScript. + + fieldVector.insert(termIndex, scoreWithPrecision) + } + + fieldVectors[fieldRef] = fieldVector + } + + this.fieldVectors = fieldVectors +} + +/** + * Creates a token set of all tokens in the index using lunr.TokenSet + * + * @private + */ +lunr.Builder.prototype.createTokenSet = function () { + this.tokenSet = lunr.TokenSet.fromArray( + Object.keys(this.invertedIndex).sort() + ) +} + +/** + * Builds the index, creating an instance of lunr.Index. + * + * This completes the indexing process and should only be called + * once all documents have been added to the index. + * + * @returns {lunr.Index} + */ +lunr.Builder.prototype.build = function () { + this.calculateAverageFieldLengths() + this.createFieldVectors() + this.createTokenSet() + + return new lunr.Index({ + invertedIndex: this.invertedIndex, + fieldVectors: this.fieldVectors, + tokenSet: this.tokenSet, + fields: Object.keys(this._fields), + pipeline: this.searchPipeline + }) +} + +/** + * Applies a plugin to the index builder. + * + * A plugin is a function that is called with the index builder as its context. + * Plugins can be used to customise or extend the behaviour of the index + * in some way. A plugin is just a function, that encapsulated the custom + * behaviour that should be applied when building the index. + * + * The plugin function will be called with the index builder as its argument, additional + * arguments can also be passed when calling use. The function will be called + * with the index builder as its context. + * + * @param {Function} plugin The plugin to apply. + */ +lunr.Builder.prototype.use = function (fn) { + var args = Array.prototype.slice.call(arguments, 1) + args.unshift(this) + fn.apply(this, args) +} +/** + * Contains and collects metadata about a matching document. + * A single instance of lunr.MatchData is returned as part of every + * lunr.Index~Result. + * + * @constructor + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + * @property {object} metadata - A cloned collection of metadata associated with this document. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData = function (term, field, metadata) { + var clonedMetadata = Object.create(null), + metadataKeys = Object.keys(metadata || {}) + + // Cloning the metadata to prevent the original + // being mutated during match data combination. 
+ // Metadata is kept in an array within the inverted + // index so cloning the data can be done with + // Array#slice + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + clonedMetadata[key] = metadata[key].slice() + } + + this.metadata = Object.create(null) + + if (term !== undefined) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = clonedMetadata + } +} + +/** + * An instance of lunr.MatchData will be created for every term that matches a + * document. However only one instance is required in a lunr.Index~Result. This + * method combines metadata from another instance of lunr.MatchData with this + * objects metadata. + * + * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData.prototype.combine = function (otherMatchData) { + var terms = Object.keys(otherMatchData.metadata) + + for (var i = 0; i < terms.length; i++) { + var term = terms[i], + fields = Object.keys(otherMatchData.metadata[term]) + + if (this.metadata[term] == undefined) { + this.metadata[term] = Object.create(null) + } + + for (var j = 0; j < fields.length; j++) { + var field = fields[j], + keys = Object.keys(otherMatchData.metadata[term][field]) + + if (this.metadata[term][field] == undefined) { + this.metadata[term][field] = Object.create(null) + } + + for (var k = 0; k < keys.length; k++) { + var key = keys[k] + + if (this.metadata[term][field][key] == undefined) { + this.metadata[term][field][key] = otherMatchData.metadata[term][field][key] + } else { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key]) + } + + } + } + } +} + +/** + * Add metadata for a term/field pair to this instance of match data. + * + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + */ +lunr.MatchData.prototype.add = function (term, field, metadata) { + if (!(term in this.metadata)) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = metadata + return + } + + if (!(field in this.metadata[term])) { + this.metadata[term][field] = metadata + return + } + + var metadataKeys = Object.keys(metadata) + + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + + if (key in this.metadata[term][field]) { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key]) + } else { + this.metadata[term][field][key] = metadata[key] + } + } +} +/** + * A lunr.Query provides a programmatic way of defining queries to be performed + * against a {@link lunr.Index}. + * + * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method + * so the query object is pre-initialized with the right index fields. + * + * @constructor + * @property {lunr.Query~Clause[]} clauses - An array of query clauses. + * @property {string[]} allFields - An array of all available fields in a lunr.Index. + */ +lunr.Query = function (allFields) { + this.clauses = [] + this.allFields = allFields +} + +/** + * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause. + * + * This allows wildcards to be added to the beginning and end of a term without having to manually do any string + * concatenation. 
+ * + * The wildcard constants can be bitwise combined to select both leading and trailing wildcards. + * + * @constant + * @default + * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour + * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists + * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with trailing wildcard + * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING }) + * @example query term with leading and trailing wildcard + * query.term('foo', { + * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING + * }) + */ + +lunr.Query.wildcard = new String ("*") +lunr.Query.wildcard.NONE = 0 +lunr.Query.wildcard.LEADING = 1 +lunr.Query.wildcard.TRAILING = 2 + +/** + * Constants for indicating what kind of presence a term must have in matching documents. + * + * @constant + * @enum {number} + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with required presence + * query.term('foo', { presence: lunr.Query.presence.REQUIRED }) + */ +lunr.Query.presence = { + /** + * Term's presence in a document is optional, this is the default value. + */ + OPTIONAL: 1, + + /** + * Term's presence in a document is required, documents that do not contain + * this term will not be returned. + */ + REQUIRED: 2, + + /** + * Term's presence in a document is prohibited, documents that do contain + * this term will not be returned. + */ + PROHIBITED: 3 +} + +/** + * A single clause in a {@link lunr.Query} contains a term and details on how to + * match that term against a {@link lunr.Index}. + * + * @typedef {Object} lunr.Query~Clause + * @property {string[]} fields - The fields in an index this clause should be matched against. + * @property {number} [boost=1] - Any boost that should be applied when matching this clause. + * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be. + * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline. + * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended. + * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching documents. + */ + +/** + * Adds a {@link lunr.Query~Clause} to this query. + * + * Unless the clause contains the fields to be matched all fields will be matched. In addition + * a default boost of 1 is applied to the clause. + * + * @param {lunr.Query~Clause} clause - The clause to add to this query. 
+ * @see lunr.Query~Clause + * @returns {lunr.Query} + */ +lunr.Query.prototype.clause = function (clause) { + if (!('fields' in clause)) { + clause.fields = this.allFields + } + + if (!('boost' in clause)) { + clause.boost = 1 + } + + if (!('usePipeline' in clause)) { + clause.usePipeline = true + } + + if (!('wildcard' in clause)) { + clause.wildcard = lunr.Query.wildcard.NONE + } + + if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) { + clause.term = "*" + clause.term + } + + if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) { + clause.term = "" + clause.term + "*" + } + + if (!('presence' in clause)) { + clause.presence = lunr.Query.presence.OPTIONAL + } + + this.clauses.push(clause) + + return this +} + +/** + * A negated query is one in which every clause has a presence of + * prohibited. These queries require some special processing to return + * the expected results. + * + * @returns boolean + */ +lunr.Query.prototype.isNegated = function () { + for (var i = 0; i < this.clauses.length; i++) { + if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) { + return false + } + } + + return true +} + +/** + * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause} + * to the list of clauses that make up this query. + * + * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion + * to a token or token-like string should be done before calling this method. + * + * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an + * array, each term in the array will share the same options. + * + * @param {object|object[]} term - The term(s) to add to the query. + * @param {object} [options] - Any additional properties to add to the query clause. 
+ * @returns {lunr.Query} + * @see lunr.Query#clause + * @see lunr.Query~Clause + * @example adding a single term to a query + * query.term("foo") + * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard + * query.term("foo", { + * fields: ["title"], + * boost: 10, + * wildcard: lunr.Query.wildcard.TRAILING + * }) + * @example using lunr.tokenizer to convert a string to tokens before using them as terms + * query.term(lunr.tokenizer("foo bar")) + */ +lunr.Query.prototype.term = function (term, options) { + if (Array.isArray(term)) { + term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this) + return this + } + + var clause = options || {} + clause.term = term.toString() + + this.clause(clause) + + return this +} +lunr.QueryParseError = function (message, start, end) { + this.name = "QueryParseError" + this.message = message + this.start = start + this.end = end +} + +lunr.QueryParseError.prototype = new Error +lunr.QueryLexer = function (str) { + this.lexemes = [] + this.str = str + this.length = str.length + this.pos = 0 + this.start = 0 + this.escapeCharPositions = [] +} + +lunr.QueryLexer.prototype.run = function () { + var state = lunr.QueryLexer.lexText + + while (state) { + state = state(this) + } +} + +lunr.QueryLexer.prototype.sliceString = function () { + var subSlices = [], + sliceStart = this.start, + sliceEnd = this.pos + + for (var i = 0; i < this.escapeCharPositions.length; i++) { + sliceEnd = this.escapeCharPositions[i] + subSlices.push(this.str.slice(sliceStart, sliceEnd)) + sliceStart = sliceEnd + 1 + } + + subSlices.push(this.str.slice(sliceStart, this.pos)) + this.escapeCharPositions.length = 0 + + return subSlices.join('') +} + +lunr.QueryLexer.prototype.emit = function (type) { + this.lexemes.push({ + type: type, + str: this.sliceString(), + start: this.start, + end: this.pos + }) + + this.start = this.pos +} + +lunr.QueryLexer.prototype.escapeCharacter = function () { + this.escapeCharPositions.push(this.pos - 1) + this.pos += 1 +} + +lunr.QueryLexer.prototype.next = function () { + if (this.pos >= this.length) { + return lunr.QueryLexer.EOS + } + + var char = this.str.charAt(this.pos) + this.pos += 1 + return char +} + +lunr.QueryLexer.prototype.width = function () { + return this.pos - this.start +} + +lunr.QueryLexer.prototype.ignore = function () { + if (this.start == this.pos) { + this.pos += 1 + } + + this.start = this.pos +} + +lunr.QueryLexer.prototype.backup = function () { + this.pos -= 1 +} + +lunr.QueryLexer.prototype.acceptDigitRun = function () { + var char, charCode + + do { + char = this.next() + charCode = char.charCodeAt(0) + } while (charCode > 47 && charCode < 58) + + if (char != lunr.QueryLexer.EOS) { + this.backup() + } +} + +lunr.QueryLexer.prototype.more = function () { + return this.pos < this.length +} + +lunr.QueryLexer.EOS = 'EOS' +lunr.QueryLexer.FIELD = 'FIELD' +lunr.QueryLexer.TERM = 'TERM' +lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE' +lunr.QueryLexer.BOOST = 'BOOST' +lunr.QueryLexer.PRESENCE = 'PRESENCE' + +lunr.QueryLexer.lexField = function (lexer) { + lexer.backup() + lexer.emit(lunr.QueryLexer.FIELD) + lexer.ignore() + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexTerm = function (lexer) { + if (lexer.width() > 1) { + lexer.backup() + lexer.emit(lunr.QueryLexer.TERM) + } + + lexer.ignore() + + if (lexer.more()) { + return lunr.QueryLexer.lexText + } +} + +lunr.QueryLexer.lexEditDistance = function (lexer) { + lexer.ignore() + 
lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.EDIT_DISTANCE) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexBoost = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.BOOST) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexEOS = function (lexer) { + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } +} + +// This matches the separator used when tokenising fields +// within a document. These should match otherwise it is +// not possible to search for some tokens within a document. +// +// It is possible for the user to change the separator on the +// tokenizer so it _might_ clash with any other of the special +// characters already used within the search string, e.g. :. +// +// This means that it is possible to change the separator in +// such a way that makes some words unsearchable using a search +// string. +lunr.QueryLexer.termSeparator = lunr.tokenizer.separator + +lunr.QueryLexer.lexText = function (lexer) { + while (true) { + var char = lexer.next() + + if (char == lunr.QueryLexer.EOS) { + return lunr.QueryLexer.lexEOS + } + + // Escape character is '\' + if (char.charCodeAt(0) == 92) { + lexer.escapeCharacter() + continue + } + + if (char == ":") { + return lunr.QueryLexer.lexField + } + + if (char == "~") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexEditDistance + } + + if (char == "^") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexBoost + } + + // "+" indicates term presence is required + // checking for length to ensure that only + // leading "+" are considered + if (char == "+" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + // "-" indicates term presence is prohibited + // checking for length to ensure that only + // leading "-" are considered + if (char == "-" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + if (char.match(lunr.QueryLexer.termSeparator)) { + return lunr.QueryLexer.lexTerm + } + } +} + +lunr.QueryParser = function (str, query) { + this.lexer = new lunr.QueryLexer (str) + this.query = query + this.currentClause = {} + this.lexemeIdx = 0 +} + +lunr.QueryParser.prototype.parse = function () { + this.lexer.run() + this.lexemes = this.lexer.lexemes + + var state = lunr.QueryParser.parseClause + + while (state) { + state = state(this) + } + + return this.query +} + +lunr.QueryParser.prototype.peekLexeme = function () { + return this.lexemes[this.lexemeIdx] +} + +lunr.QueryParser.prototype.consumeLexeme = function () { + var lexeme = this.peekLexeme() + this.lexemeIdx += 1 + return lexeme +} + +lunr.QueryParser.prototype.nextClause = function () { + var completedClause = this.currentClause + this.query.clause(completedClause) + this.currentClause = {} +} + +lunr.QueryParser.parseClause = function (parser) { + var lexeme = parser.peekLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.type) { + case lunr.QueryLexer.PRESENCE: + return lunr.QueryParser.parsePresence + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expected either a field or a term, found " + lexeme.type + + if (lexeme.str.length >= 1) { + errorMessage += " with value '" + lexeme.str + "'" + } + + throw new lunr.QueryParseError (errorMessage, 
lexeme.start, lexeme.end) + } +} + +lunr.QueryParser.parsePresence = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.str) { + case "-": + parser.currentClause.presence = lunr.Query.presence.PROHIBITED + break + case "+": + parser.currentClause.presence = lunr.Query.presence.REQUIRED + break + default: + var errorMessage = "unrecognised presence operator'" + lexeme.str + "'" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term or field, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term or field, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseField = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + if (parser.query.allFields.indexOf(lexeme.str) == -1) { + var possibleFields = parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(', '), + errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.fields = [lexeme.str] + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseTerm = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + parser.currentClause.term = lexeme.str.toLowerCase() + + if (lexeme.str.indexOf("*") != -1) { + parser.currentClause.usePipeline = false + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseEditDistance = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var editDistance = parseInt(lexeme.str, 10) + + if (isNaN(editDistance)) { + var errorMessage = "edit distance must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.editDistance = editDistance + + var nextLexeme = parser.peekLexeme() + + if 
(nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseBoost = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var boost = parseInt(lexeme.str, 10) + + if (isNaN(boost)) { + var errorMessage = "boost must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.boost = boost + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + + /** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ + ;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like enviroments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + root.lunr = factory() + } + }(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return lunr + })) +})(); diff --git a/search/main.js b/search/main.js new file mode 100644 index 00000000..a5e469d7 --- /dev/null +++ b/search/main.js @@ -0,0 +1,109 @@ +function getSearchTermFromLocation() { + var sPageURL = window.location.search.substring(1); + var sURLVariables = sPageURL.split('&'); + for (var i = 0; i < sURLVariables.length; i++) { + var sParameterName = sURLVariables[i].split('='); + if (sParameterName[0] == 'q') { + return decodeURIComponent(sParameterName[1].replace(/\+/g, '%20')); + } + } +} + +function joinUrl (base, path) { + if (path.substring(0, 1) === "/") { + // path starts with `/`. Thus it is absolute. 
+        return path;
+    }
+    if (base.substring(base.length-1) === "/") {
+        // base ends with `/`
+        return base + path;
+    }
+    return base + "/" + path;
+}
+
+function escapeHtml (value) {
+    return value.replace(/&/g, '&amp;')
+        .replace(/"/g, '&quot;')
+        .replace(/</g, '&lt;')
+        .replace(/>/g, '&gt;');
+}
+
+function formatResult (location, title, summary) {
+    return '<article><h3><a href="' + joinUrl(base_url, location) + '">'+ escapeHtml(title) + '</a></h3><p>' + escapeHtml(summary) + '</p></article>';
+}
+
+function displayResults (results) {
+    var search_results = document.getElementById("mkdocs-search-results");
+    while (search_results.firstChild) {
+        search_results.removeChild(search_results.firstChild);
+    }
+    if (results.length > 0){
+        for (var i=0; i < results.length; i++){
+            var result = results[i];
+            var html = formatResult(result.location, result.title, result.summary);
+            search_results.insertAdjacentHTML('beforeend', html);
+        }
+    } else {
+        var noResultsText = search_results.getAttribute('data-no-results-text');
+        if (!noResultsText) {
+            noResultsText = "No results found";
+        }
+        search_results.insertAdjacentHTML('beforeend', '<p>' + noResultsText + '</p
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..da6cda2a --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"index.html","text":"HPE Storage Container Orchestrator Documentation \u00b6 This is an umbrella documentation project for the HPE CSI Driver for Kubernetes and neighboring ecosystems for HPE primary storage including HPE Alletra Storage MP, Alletra 9000, Alletra 5000/6000, Nimble Storage, Primera and 3PAR storage systems. The documentation is tailored for IT Ops, developers and technology partners. Use the navigation on the left-hand side to explore the different topics. Feel free to contribute to this project but please read the contributing guidelines . Use the navigation to the left. Not sure what you're looking for? \u2192 Get started ! Did you know? SCOD is \"docs\" in reverse?","title":"Home"},{"location":"index.html#hpe_storage_container_orchestrator_documentation","text":"This is an umbrella documentation project for the HPE CSI Driver for Kubernetes and neighboring ecosystems for HPE primary storage including HPE Alletra Storage MP, Alletra 9000, Alletra 5000/6000, Nimble Storage, Primera and 3PAR storage systems. The documentation is tailored for IT Ops, developers and technology partners. Use the navigation on the left-hand side to explore the different topics. Feel free to contribute to this project but please read the contributing guidelines . Use the navigation to the left. Not sure what you're looking for? \u2192 Get started ! Did you know? 
SCOD is \"docs\" in reverse?","title":"HPE Storage Container Orchestrator Documentation"},{"location":"container_storage_provider/index.html","text":"Container Storage Providers \u00b6 HPE Alletra 5000/6000 and Nimble Storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR","title":"Container Storage Providers"},{"location":"container_storage_provider/index.html#container_storage_providers","text":"HPE Alletra 5000/6000 and Nimble Storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR","title":"Container Storage Providers"},{"location":"container_storage_provider/hpe_alletra_6000/index.html","text":"Introduction \u00b6 The HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider (\"CSP\") for Kubernetes is the reference implementation for the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important array setup requirements. Important For a successful deployment, it's important to understand the array platform requirements found within the CSI driver (compute node OS and Kubernetes versions) and the CSP. Introduction Platform Requirements Setting Up the Array Single Tenant Deployment Multitenant Deployment Tenant Limitations Limitations StorageClass Parameters Common Parameters for Provisioning and Cloning Provisioning Parameters Cloning Parameters Import Parameters Pod Inline Volume Parameters (Local Ephemeral Volumes) VolumeGroupClass Parameters VolumeSnapshotClass Parameters Static Provisioning Persistent Volume Persistent Volume Claim Seealso There's a brief introduction on how to use HPE Nimble Storage with the HPE CSI Driver in the Video Gallery. It also applies broadly to HPE Alletra 5000/6000. Platform Requirements \u00b6 Always check the corresponding CSI driver version in compatibility and support for the required array Operating System (\"OS\") version for a particular release of the driver. If a certain feature is gated against a certain version of the array OS it will be called out where applicable. Tip The documentation reflected here always corresponds to the latest supported version and may contain references to future features and capabilities. Setting Up the Array \u00b6 How to deploy an HPE storage array is beyond the scope of this document. Please refer to HPE InfoSight for further reading. Important The HPE Nimble Storage Linux Toolkit (NLT) is not compatible with the HPE CSI Driver for Kubernetes. Do not install NLT on Kubernetes compute nodes. It may be installed on Kubernetes control plane nodes if they use iSCSI or FC storage from the array. Single Tenant Deployment \u00b6 The CSP requires access to a user with either poweruser or the administrator role. It's recommended to use the poweruser role for least privilege practices. Tip It's highly recommended to deploy a multitenant setup. Multitenant Deployment \u00b6 In array OS 6.0.0 and newer it's possible to create separate tenants using the tenantadmin CLI to assign folders to a tenant. This creates a secure and logical separation of storage resources between Kubernetes clusters. No special configuration is needed on the Kubernetes cluster when using a tenant account or a regular user account. 
It's important to understand from a provisioning perspective that if the tenant account being used has been assigned multiple folders, the CSP will pick the folder with the most space available. If this is not desirable and a 1:1 StorageClass to Folder mapping is needed, the \"folder\" parameter needs to be called out in the StorageClass . For reference, as of array OS 6.0.0, this is the tenantadmin command synopsis. $ tenantadmin --help Usage: tenantadmin [options] Manage Tenants. Available options are: --help Program help. --list List Tenants. --info name Tenant info. --add tenant_name Add a tenant. --folders folders List of folder paths (comma separated pool_name:fqn) the tenant will be able to access (mandatory). --remove name Remove a tenant. --add_folder tenant_name Add a folder path for tenant access. --name folder_name Name of the folder path (pool_name:fqn) to be added (mandatory). --remove_folder tenant_name Remove a folder path from tenant access. --name folder_name Name of the folder path (pool_name:fqn) to be removed (mandatory). --passwd Change tenant's login password. --tenant name Change a specific tenant's login password (mandatory). Caution The tenantadmin command may only be run by local array OS administrators. LDAP or Active Directory accounts, regardless of role, are not supported. Visit the array admin guide on HPE InfoSight to learn more about how to use the tenantadmin CLI. Tenant Limitations \u00b6 Some features may be limited and restricted in a multitenant deployment, such as arbitrarily import volumes in folders from the array the tenant isn't a user of, here are a few less obvious limitations. CHAP is configured globally for the CSI driver. The CSI driver is contracted to create the CHAP user if it doesn't exist. It's important that the CHAP user does not exist prior when used with a tenant, as tenant may not share CHAP users among themselves or the admin account. Both port 443 and 5392 needs to be exposed to the Kubernetes cluster in multitenant deployments. Seealso An in-depth tutorial on how to use multitenancy and the tenantadmin CLI is available on HPE Developer: Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble Storage . There's also a high level overview of multitenancy available as a lightboard presentation on YouTube . Limitations \u00b6 Consult the compatibility and support table for supported array OS versions. CSI and CSP specific limitations are listed below. Striped volumes on grouped arrays are not supported by the CSI driver. The CSP is not capable of provisioning or importing volumes protected by Peer Persistence. When using an FC only array and provisioning RWX block volumes, the \"multi_initiator\" attribute won't get set properly on the volume. The workaround is to run group --edit --iscsi_enabled yes on the Array OS CLI. StorageClass Parameters \u00b6 A StorageClass is used to provision or clone a persistent volume. It can also be used to import an existing volume or clone a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. Common parameters for provisioning and cloning Provisioning Parameters Cloning Parameters Import Parameters Pod Inline Volume Parameters (Local Ephemeral Volumes) VolumeGroupClass Parameters VolumeSnapshotClass Parameters Backward compatibility with the HPE Nimble Storage FlexVolume driver is being honored to a certain degree. StorageClass API objects needs be rewritten and parameters need to be updated regardless. 
Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflect the current version and may contain unannounced features and capabilities. Note These are optional parameters unless specified. Common Parameters for Provisioning and Cloning \u00b6 These parameters are mutable between a parent volume and creating a clone from a snapshot. Parameter String Description accessProtocol 1 Text The access protocol to use when accessing the persistent volume (\"fc\" or \"iscsi\"). Defaults to \"iscsi\" when unspecified. destroyOnDelete Boolean Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to \"false\", which means volumes need to be pruned manually. limitIops Integer The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). limitMbps Integer The MB/s throughput limit for the volume between 1 and 4294967294, or -1 for unlimited (default). description Text Text to be added to the volume's description on the array. Empty string by default. performancePolicy 2 Text The name of the performance policy to assign to the volume. Default example performance policies include \"Backup Repository\", \"Exchange 2003 data store\", \"Exchange 2007 data store\", \"Exchange 2010 data store\", \"Exchange log\", \"Oracle OLTP\", \"Other Workloads\", \"SharePoint\", \"SQL Server\", \"SQL Server 2012\", \"SQL Server Logs\". Defaults to the \"default\" performance policy. protectionTemplate 4 Text The name of the protection template to assign to the volume. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". folder Text The name of the folder in which to place the volume. Defaults to the root of the \"default\" pool. thick Boolean Indicates that the volume should be thick provisioned. Defaults to \"false\". dedupeEnabled 3 Boolean Indicates that the volume should enable deduplication. Defaults to \"true\" when available. syncOnDetach Boolean Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Defaults to \"false\". Restrictions applicable when using the CSI volume mutator : 1 = Parameter is immutable and can't be altered after provisioning/cloning. 2 = Performance policies may only be mutated between performance policies with the same block size. 3 = Deduplication may only be mutated within the same performance policy application category and block size. 4 = This parameter was removed in HPE CSI Driver 1.4.0 and replaced with VolumeGroupClasses . Note Performance Policies, Folders and Protection Templates are array OS specific constructs that can be created on the array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight . Provisioning Parameters \u00b6 These parameters are immutable for both volumes and clones once created; clones will inherit parent attributes. Parameter String Description encrypted Boolean Indicates that the volume should be encrypted. Defaults to \"false\". pool Text The name of the pool in which to place the volume. Defaults to the \"default\" pool. Cloning Parameters \u00b6 Cloning supports two modes. Either use cloneOf and reference a PVC in the current namespace, or use importVolAsClone and reference an array volume name to clone and import to Kubernetes.
Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. Import Parameters \u00b6 Importing volumes to Kubernetes requires the source array volume to be offline. In case of reverse replication, the upstream volume should be in an offline state. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE CSI Driver. Parameter String Description importVolumeName Text The name of the array volume to import. snapshot Text The name of the array snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. takeover Boolean Indicates that the current group will take over ownership of the array volume and volume collection. This should be performed against a downstream replica. reverseReplication Boolean Reverses the replication direction so that writes to the array volume are replicated back to the group where it was replicated from. forceImport Boolean Forces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead. Seealso In this HPE Developer blog post you'll learn how to use the import parameters to lift and transform applications from traditional infrastructure to Kubernetes using the HPE CSI Driver. Pod Inline Volume Parameters (Local Ephemeral Volumes) \u00b6 These parameters are applicable only for Pod inline volumes and are to be specified within the Pod spec. Parameter String Description csi.storage.k8s.io/ephemeral Boolean Indicates that the request is for an ephemeral inline volume. This is a mandatory parameter and must be set to \"true\". inline-volume-secret-name Text A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. inline-volume-secret-namespace Text The namespace of inline-volume-secret-name for the ephemeral inline volume. size Text The size of the ephemeral volume, specified in MiB or GiB. If unspecified, a default value will be used. accessProtocol Text Storage access protocol to use, \"iscsi\" or \"fc\". Important All parameters are required for inline ephemeral volumes. VolumeGroupClass Parameters \u00b6 If basic data protection is required and performed on the array, VolumeGroups need to be created, even if it's just a single volume that needs data protection using snapshots and replication. Learn more about VolumeGroups in the provisioning concepts documentation . Parameter String Description description Text Text to be added to the volume collection description on the array. Empty by default. protectionTemplate Text The name of the protection template to assign to the volume collection. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". Empty by default, meaning no array snapshots are performed on the VolumeGroups . New feature VolumeGroupClasses were introduced with version 1.4.0 of the CSI driver.
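As a hedged illustration of the parameters above, a VolumeGroupClass for this CSP could look like the following sketch, modeled on the VolumeGroupClass example shown later for the HPE Alletra Storage MP CSP; the class name is hypothetical and the referenced protection template must exist on the array.
apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
  name: my-volume-group-class
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: \"HPE CSI Driver for Kubernetes Volume Group\"
  protectionTemplate: \"Retain-48Hourly-30Daily-52Weekly\"   # one of the documented default protection templates
  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage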
Learn more in the Using section . VolumeSnapshotClass Parameters \u00b6 These parametes are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description description Text Text to be added to the snapshot's description on the array. writable Boolean Indicates if the snapshot is writable on the array. Defaults to \"false\". online Boolean Indicates if the snapshot is set to online on the array. Defaults to \"false\". Static Provisioning \u00b6 Static provisioning of PVs and PVCs may be used when absolute control over physical volumes are required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass . Persistent Volume \u00b6 Create a PV referencing an existing 10GiB volume on the array, replace .spec.csi.volumeHandle with the array volume ID. Warning If a filesystem can't be detected on the device a new filesystem will be created. If the volume contains data, make sure the data reside in a whole device filesystem. apiVersion: v1 kind: PersistentVolume metadata: name: my-static-pv-1 spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi csi: volumeHandle: driver: csi.hpe.com fsType: xfs volumeAttributes: volumeAccessMode: mount fsType: xfs controllerPublishSecretRef: name: hpe-backend namespace: hpe-storage nodePublishSecretRef: name: hpe-backend namespace: hpe-storage controllerExpandSecretRef: name: hpe-backend namespace: hpe-storage persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem Tip Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion. Persistent Volume Claim \u00b6 Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi volumeName: my-static-pv-1 storageClassName: \"\"","title":"HPE Alletra 5000/6000 and Nimble"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#introduction","text":"The HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider (\"CSP\") for Kubernetes is the reference implementation for the HPE CSI Driver for Kubernetes. The CSP abstracts the data management capabilities of the array for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important array setup requirements. Important For a successful deployment, it's important to understand the array platform requirements found within the CSI driver (compute node OS and Kubernetes versions) and the CSP. Introduction Platform Requirements Setting Up the Array Single Tenant Deployment Multitenant Deployment Tenant Limitations Limitations StorageClass Parameters Common Parameters for Provisioning and Cloning Provisioning Parameters Cloning Parameters Import Parameters Pod Inline Volume Parameters (Local Ephemeral Volumes) VolumeGroupClass Parameters VolumeSnapshotClass Parameters Static Provisioning Persistent Volume Persistent Volume Claim Seealso There's a brief introduction on how to use HPE Nimble Storage with the HPE CSI Driver in the Video Gallery. 
It also applies broadly to HPE Alletra 5000/6000.","title":"Introduction"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#platform_requirements","text":"Always check the corresponding CSI driver version in compatibility and support for the required array Operating System (\"OS\") version for a particular release of the driver. If a certain feature is gated against a certain version of the array OS it will be called out where applicable. Tip The documentation reflected here always corresponds to the latest supported version and may contain references to future features and capabilities.","title":"Platform Requirements"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#setting_up_the_array","text":"How to deploy an HPE storage array is beyond the scope of this document. Please refer to HPE InfoSight for further reading. Important The HPE Nimble Storage Linux Toolkit (NLT) is not compatible with the HPE CSI Driver for Kubernetes. Do not install NLT on Kubernetes compute nodes. It may be installed on Kubernetes control plane nodes if they use iSCSI or FC storage from the array.","title":"Setting Up the Array"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#single_tenant_deployment","text":"The CSP requires access to a user with either poweruser or the administrator role. It's recommended to use the poweruser role for least privilege practices. Tip It's highly recommended to deploy a multitenant setup.","title":"Single Tenant Deployment"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#multitenant_deployment","text":"In array OS 6.0.0 and newer it's possible to create separate tenants using the tenantadmin CLI to assign folders to a tenant. This creates a secure and logical separation of storage resources between Kubernetes clusters. No special configuration is needed on the Kubernetes cluster when using a tenant account or a regular user account. It's important to understand from a provisioning perspective that if the tenant account being used has been assigned multiple folders, the CSP will pick the folder with the most space available. If this is not desirable and a 1:1 StorageClass to Folder mapping is needed, the \"folder\" parameter needs to be called out in the StorageClass . For reference, as of array OS 6.0.0, this is the tenantadmin command synopsis. $ tenantadmin --help Usage: tenantadmin [options] Manage Tenants. Available options are: --help Program help. --list List Tenants. --info name Tenant info. --add tenant_name Add a tenant. --folders folders List of folder paths (comma separated pool_name:fqn) the tenant will be able to access (mandatory). --remove name Remove a tenant. --add_folder tenant_name Add a folder path for tenant access. --name folder_name Name of the folder path (pool_name:fqn) to be added (mandatory). --remove_folder tenant_name Remove a folder path from tenant access. --name folder_name Name of the folder path (pool_name:fqn) to be removed (mandatory). --passwd Change tenant's login password. --tenant name Change a specific tenant's login password (mandatory). Caution The tenantadmin command may only be run by local array OS administrators. LDAP or Active Directory accounts, regardless of role, are not supported. 
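To point the CSP at a tenant user rather than the array admin account, the backend Secret simply references the tenant's credentials. The sketch below assumes the Secret layout used by the HPE CSI Driver elsewhere on SCOD (serviceName, servicePort, backend, username and password keys); the service name, array IP and credentials shown are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: nimble-csp-svc   # assumed CSP service name for HPE Alletra 5000/6000 and Nimble
  servicePort: \"8080\"
  backend: 192.168.1.10         # placeholder array management IP
  username: k8s-tenant1         # tenant user created with tenantadmin
  password: example-password    # placeholder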
Visit the array admin guide on HPE InfoSight to learn more about how to use the tenantadmin CLI.","title":"Multitenant Deployment"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#tenant_limitations","text":"Some features may be limited and restricted in a multitenant deployment, such as arbitrarily import volumes in folders from the array the tenant isn't a user of, here are a few less obvious limitations. CHAP is configured globally for the CSI driver. The CSI driver is contracted to create the CHAP user if it doesn't exist. It's important that the CHAP user does not exist prior when used with a tenant, as tenant may not share CHAP users among themselves or the admin account. Both port 443 and 5392 needs to be exposed to the Kubernetes cluster in multitenant deployments. Seealso An in-depth tutorial on how to use multitenancy and the tenantadmin CLI is available on HPE Developer: Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble Storage . There's also a high level overview of multitenancy available as a lightboard presentation on YouTube .","title":"Tenant Limitations"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#limitations","text":"Consult the compatibility and support table for supported array OS versions. CSI and CSP specific limitations are listed below. Striped volumes on grouped arrays are not supported by the CSI driver. The CSP is not capable of provisioning or importing volumes protected by Peer Persistence. When using an FC only array and provisioning RWX block volumes, the \"multi_initiator\" attribute won't get set properly on the volume. The workaround is to run group --edit --iscsi_enabled yes on the Array OS CLI.","title":"Limitations"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#storageclass_parameters","text":"A StorageClass is used to provision or clone a persistent volume. It can also be used to import an existing volume or clone a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. Common parameters for provisioning and cloning Provisioning Parameters Cloning Parameters Import Parameters Pod Inline Volume Parameters (Local Ephemeral Volumes) VolumeGroupClass Parameters VolumeSnapshotClass Parameters Backward compatibility with the HPE Nimble Storage FlexVolume driver is being honored to a certain degree. StorageClass API objects needs be rewritten and parameters need to be updated regardless. Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflects the current version and may contain unannounced features and capabilities. Note These are optional parameters unless specified.","title":"StorageClass Parameters"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#common_parameters_for_provisioning_and_cloning","text":"These parameters are mutable between a parent volume and creating a clone from a snapshot. Parameter String Description accessProtocol 1 Text The access protocol to use when accessing the persistent volume (\"fc\" or \"iscsi\"). Defaults to \"iscsi\" when unspecified. destroyOnDelete Boolean Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to \"false\" which means volumes needs to be pruned manually. limitIops Integer The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). 
limitMbps Integer The MB/s throughput limit for the volume between 1 and 4294967294, or -1 for unlimited (default). description Text Text to be added to the volume's description on the array. Empty string by default. performancePolicy 2 Text The name of the performance policy to assign to the volume. Default example performance policies include \"Backup Repository\", \"Exchange 2003 data store\", \"Exchange 2007 data store\", \"Exchange 2010 data store\", \"Exchange log\", \"Oracle OLTP\", \"Other Workloads\", \"SharePoint\", \"SQL Server\", \"SQL Server 2012\", \"SQL Server Logs\". Defaults to the \"default\" performance policy. protectionTemplate 4 Text The name of the protection template to assign to the volume. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". folder Text The name of the folder in which to place the volume. Defaults to the root of the \"default\" pool. thick Boolean Indicates that the volume should be thick provisioned. Defaults to \"false\" dedupeEnabled 3 Boolean Indicates that the volume should enable deduplication. Defaults to \"true\" when available. syncOnDetach Boolean Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Defaults to \"false\". Restrictions applicable when using the CSI volume mutator : 1 = Parameter is immutable and can't be altered after provisioning/cloning. 2 = Performance policies may only be mutated between performance polices with the same block size. 3 = Deduplication may only be mutated within the same performance policy application category and block size. 4 = This parameter was removed in HPE CSI Driver 1.4.0 and replaced with VolumeGroupClasses . Note Performance Policies, Folders and Protection Templates are array OS specific constructs that can be created on the array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight .","title":"Common Parameters for Provisioning and Cloning"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#provisioning_parameters","text":"These parameters are immutable for both volumes and clones once created, clones will inherit parent attributes. Parameter String Description encrypted Boolean Indicates that the volume should be encrypted. Defaults to \"false\". pool Text The name of the pool in which to place the volume. Defaults to the \"default\" pool.","title":"Provisioning Parameters"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#cloning_parameters","text":"Cloning supports two modes of cloning. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference an array volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. 
If the snapshot parameter is not specified, a default name will be created.","title":"Cloning Parameters"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#import_parameters","text":"Importing volumes to Kubernetes requires the source array volume to be offline. In case of reverse replication, the upstream volume should be in offline state. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE CSI Driver. Parameter String Description importVolumeName Text The name of the array volume to import. snapshot Text The name of the array snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. takeover Boolean Indicates the current group will takeover ownership of the array volume and volume collection. This should be performed against a downstream replica. reverseReplication Boolean Reverses the replication direction so that writes to the array volume are replicated back to the group where it was replicated from. forceImport Boolean Forces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead. Seealso In this HPE Developer blog post you'll learn how to use the import parameters to lift and transform applications from traditional infrastructure to Kubernetes using the HPE CSI Driver.","title":"Import Parameters"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#pod_inline_volume_parameters_local_ephemeral_volumes","text":"These parameters are applicable only for Pod inline volumes and to be specified within Pod spec. Parameter String Description csi.storage.k8s.io/ephemeral Boolean Indicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to \"true\". inline-volume-secret-name Text A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. inline-volume-secret-namespace Text The namespace of inline-volume-secret-name for ephemeral inline volume. size Text The size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used. accessProtocol Text Storage access protocol to use, \"iscsi\" or \"fc\". Important All parameters are required for inline ephemeral volumes.","title":"Pod Inline Volume Parameters (Local Ephemeral Volumes)"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#volumegroupclass_parameters","text":"If basic data protection is required and performed on the array, VolumeGroups needs to be created, even it's just a single volume that needs data protection using snapshots and replication. Learn more about VolumeGroups in the provisioning concepts documentation . Parameter String Description description Text Text to be added to the volume collection description on the array. Empty by default. protectionTemplate Text The name of the protection template to assign to the volume collection. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". Empty by default, meaning no array snapshots are performed on the VolumeGroups . New feature VolumeGroupClasses were introduced with version 1.4.0 of the CSI driver. 
Learn more in the Using section .","title":"VolumeGroupClass Parameters"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#volumesnapshotclass_parameters","text":"These parametes are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description description Text Text to be added to the snapshot's description on the array. writable Boolean Indicates if the snapshot is writable on the array. Defaults to \"false\". online Boolean Indicates if the snapshot is set to online on the array. Defaults to \"false\".","title":"VolumeSnapshotClass Parameters"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#static_provisioning","text":"Static provisioning of PVs and PVCs may be used when absolute control over physical volumes are required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass .","title":"Static Provisioning"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#persistent_volume","text":"Create a PV referencing an existing 10GiB volume on the array, replace .spec.csi.volumeHandle with the array volume ID. Warning If a filesystem can't be detected on the device a new filesystem will be created. If the volume contains data, make sure the data reside in a whole device filesystem. apiVersion: v1 kind: PersistentVolume metadata: name: my-static-pv-1 spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi csi: volumeHandle: driver: csi.hpe.com fsType: xfs volumeAttributes: volumeAccessMode: mount fsType: xfs controllerPublishSecretRef: name: hpe-backend namespace: hpe-storage nodePublishSecretRef: name: hpe-backend namespace: hpe-storage controllerExpandSecretRef: name: hpe-backend namespace: hpe-storage persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem Tip Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion.","title":"Persistent Volume"},{"location":"container_storage_provider/hpe_alletra_6000/index.html#persistent_volume_claim","text":"Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi volumeName: my-static-pv-1 storageClassName: \"\"","title":"Persistent Volume Claim"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html","text":"Introduction \u00b6 The HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage Container Storage Provider (CSP) for Kubernetes is part of the HPE CSI Driver for Kubernetes . The CSP abstract the data management capabilities of the array for use by Kubernetes. Note The HPE CSI Driver for Kubernetes is only compatible with HPE Alletra Storage MP running with block services, such as HPE GreenLake for Block Storage. 
Introduction Platform Requirements Network Port Requirements User Role Requirements Virtual Domains VLUN Templates Change VLUN Template for existing PVCs StorageClass Parameters Common Provisioning Parameters Cloning Parameters Array Snapshot Parameters Import Parameters Remote Copy with Peer Persistence Synchronous Replication Parameters Add Non-Replicated Volume to Remote Copy group VolumeSnapshotClass Parameters VolumeGroupClass Parameters SnapshotGroupClass Parameters Static Provisioning Prerequisites HPEVolumeInfo Persistent Volume Persistent Volume Claim Support Note For help getting started with deploying the HPE CSI Driver using HPE Alletra Storage MP, Alletra 9000, Primera or 3PAR storage, check out the tutorial over at HPE Developer . Platform Requirements \u00b6 Check the corresponding CSI driver version in the compatibility and support table for the latest updates on supported Kubernetes versions, orchestrators and host OS. Network Port Requirements \u00b6 The HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Container Storage Provider requires the following TCP ports to be open inbound to the array from the Kubernetes cluster worker nodes running the HPE CSI Driver for Kubernetes. Port Protocol Description 443 HTTPS WSAPI (HPE Alletra Storage MP, Alletra 9000/Primera) 8080 HTTPS WSAPI (HPE 3PAR) 22 SSH Array communication User Role Requirements \u00b6 The CSP requires access to a local user with either edit or the super role. It's recommended to use the edit role for security best practices. Note LDAP users are not supported by the CSP. Virtual Domains \u00b6 Virtual Domains are not yet fully supported by the CSP. From HPE CSI Driver v2.5.0, it's possible to manually create the Kubernetes hosts connecting to storage within the Virtual Domain. Once the hosts have been created, deploy the CSI driver with the Helm chart using the \"disableHostDeletion\" parameter set to \"true\". The Virtual Domain user may create the hosts through the Virtual Domain if the \"AllowDomainUsersAffectNoDomain\" parameter is set to either \"hostonly\" or \"yes\" on the array. Note Remote Copy Groups managed by the CSP have not been tested with Virtual Domains at this time. VLUN Templates \u00b6 A VLUN template enables the export of a virtual volume as a VLUN to hosts. For more information, see the HPE Primera OS Command Line Interface - Installation and Reference Guide . The CSP supports the following types of VLUN templates: Template Description Matched set The default VLUN template. The VLUN is visible to initiators with the host's WWNs only on the specified port(s). Host sees The VLUN is visible to the initiators with any of the host's WWNs. The boolean string \"hostSeesVLUN\" StorageClass parameter controls which VLUN template to use. Recommendation In most scenarios, \"hostSeesVLUN\" should be set to \"true\". Change VLUN Template for existing PVCs \u00b6 To modify an existing PVC , \"hostSeesVLUN\" needs to be specified with the \"allowMutations\" parameter along with adding the PVC annotation \"csi.hpe.com/hostSeesVLUN\" with a string value of either \"true\" or \"false\". The HPE CSI Driver creates the VLUN template based upon the hostSeesVLUN parameter during the volume publish operation. For the change to take effect, the Pod will need to be scheduled on another node by either deleting the Pod or draining the node. StorageClass Parameters \u00b6 All parameters enumerated reflect the current version and may contain unannounced features and capabilities.
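As a sketch of the Change VLUN Template workflow above, the PVC annotation could look like the following; the PVC and StorageClass names are hypothetical, and the StorageClass the volume was provisioned from is assumed to list hostSeesVLUN under its allowMutations parameter.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-existing-pvc
  annotations:
    csi.hpe.com/hostSeesVLUN: \"true\"   # switches the volume to the \"host sees\" VLUN template on the next publish
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: hpe-standard       # hypothetical StorageClass with allowMutations: hostSeesVLUN
Remember that the new VLUN template only takes effect once the Pod has been rescheduled to another node.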
Common Provisioning Parameters \u00b6 Parameter Option Description accessProtocol ( Required ) fc or iscsi The access protocol to use when attaching the persistent volume. cpg 1 Text The name of existing CPG to be used for volume provisioning. If the cpg parameter is not specified, the CSP will select a CPG available to the array. snapCpg 1 Text The name of the snapshot CPG to be used for volume provisioning. Needs to be set if any kind of VolumeSnapshots or PVC cloning parameters are used. compression 1 Boolean Indicates that the volume should be compressed. (3PAR only) provisioningType 1 tpvv Default. Indicates Thin provisioned volume type. full 3 Indicates Full provisioned volume type. dedup 3 Indicates Thin Deduplication volume type. reduce 4 Indicates Data Reduction volume type. hostSeesVLUN Boolean Enable \"host sees\" VLUN template. importVolumeName Text Name of the volume to import. importVolAsClone Text Name of the volume to clone and import. cloneOf 2 Text Name of the PersistentVolumeClaim to clone. virtualCopyOf 2 Text Name of the PersistentVolumeClaim to snapshot. qosName Text Name of the volume set which has QoS rules applied. remoteCopyGroup 1 Text Name of a new or existing Remote Copy group on the array. replicationDevices Text Indicates name of custom resource of type hpereplicationdeviceinfos . allowBatchReplicatedVolumeCreation Boolean Enable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. During this process, the Remote Copy group is stopped and started once. oneRcgPerPvc Boolean Creates a dedicated Remote Copy group per persistent volume. iscsiPortalIps Text Comma separated list of the array iSCSI port IPs. fcPortsList Text Comma separated list of available FC ports. Example: \"0:5:1,1:4:2,2:4:1,3:4:2\" Default: Use all available ports. Restrictions applicable when using the CSI volume mutator : 1 = Parameters that are editable after provisioning. 2 = Volumes with snapshots/clones can't be modified. 3 = HPE 3PAR only parameter 4 = HPE Primera/Alletra 9000 only parameter Please see using the HPE CSI Driver for additional StorageClass examples like CSI snapshots and clones. Important The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim . Please see Using PVC Overrides for more details. Cloning Parameters \u00b6 Cloning supports two modes of cloning. Either use cloneOf and reference a PersistentVolumeClaim in the current namespace to clone or use importVolAsClone and reference an array volume name to clone and import into the Kubernetes cluster. Volumes with clones are immutable once created. Parameter Option Description cloneOf Text The name of the PersistentVolumeClaim to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. accessProtocol fc or iscsi The access protocol to use when attaching the cloned volume. Important \u2022 No other parameters are required in the StorageClass while cloning outside of those parameters listed in the table above. \u2022 Cloning using above parameters is independent of snapshot CRD availability on Kubernetes and it can be performed on any supported Kubernetes version. \u2022 Support for importVolAsClone and cloneOf is available from HPE CSI Driver 1.3.0+. 
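Tying the common provisioning parameters together, a minimal StorageClass for this CSP might look like the sketch below; the StorageClass name and CPG name are hypothetical, and the Secret reference parameters follow the \"hpe-backend\" and \"hpe-storage\" convention used in the static provisioning examples on this page.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-primera-fc
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  accessProtocol: fc
  cpg: SSD_r6              # hypothetical CPG name; omit to let the CSP pick an available CPG
  provisioningType: tpvv
  hostSeesVLUN: \"true\"
reclaimPolicy: Delete
allowVolumeExpansion: true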
Array Snapshot Parameters \u00b6 During the snapshotting process, any existing PersistentVolumeClaim defined in the virtualCopyOf parameter within a StorageClass , will be snapped as PersistentVolumeClaim and exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Volumes with snapshots are immutable once created. Parameter Option Description accessProtocol fc or iscsi The access protocol to use when attaching the snapshot volume. virtualCopyOf Text The name of existing PersistentVolumeClaim to be snapped Important \u2022 No other parameters are required in the StorageClass when snapshotting a volume outside of those parameters listed in the table above. \u2022 Snapshotting using virtualCopyOf is independent of snapshot CRD availability on Kubernetes and it can be performed on any supported Kubernetes version. \u2022 Support for virtualCopyOf is available from HPE CSI Driver 1.3.0+. Import Parameters \u00b6 During the import volume process, any legacy (non-container volumes) defined in the ImportVol parameter, within a StorageClass , will be renamed to match the PersistentVolumeClaim that leverages the StorageClass . The new volumes will be exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Note: All previous Access Control Records and Initiator Groups will be removed from the volume when it is imported. Parameter Option Description accessProtocol fc or iscsi The access protocol to use when importing the volume. importVolumeName Text The name of the array volume to import. Important \u2022 No other parameters are required in the StorageClass when importing a volume outside of those parameters listed in the table above. \u2022 Support for importVolumeName is available from HPE CSI Driver 1.2.0+. Remote Copy with Peer Persistence Synchronous Replication Parameters \u00b6 To enable replication within the HPE CSI Driver, the following steps must be completed: Create Secrets for both primary and target arrays. Refer to Configuring Additional Storage Backends . Create replication custom resource. Create replication enabled StorageClass . For a tutorial on how to enable replication, check out the blog Enabling Remote Copy using the HPE CSI Driver for Kubernetes on HPE Primera A Custom Resource Definition (CRD) of type hpereplicationdeviceinfos.storage.hpe.com must be created to define the target array information. The CRD object name will be used to define the StorageClass parameter replicationDevices . CRD mandatory parameters: targetCpg , targetName , targetSecret and targetSecretNamespace . HPE CSI Driver v2.1.0 and later apiVersion: storage.hpe.com/v2 kind: HPEReplicationDeviceInfo metadata: name: r1 spec: target_array_details: - targetCpg: targetSnapCpg: #optional. targetName: targetSecret: targetSecretNamespace: hpe-storage HPE CSI Driver v2.0.0 apiVersion: storage.hpe.com/v1 kind: HPEReplicationDeviceInfo metadata: name: r1 spec: target_array_details: - targetCpg: targetSnapCpg: #optional. targetName: targetSecret: targetSecretNamespace: hpe-storage Important The HPE CSI Driver only supports Remote Copy Peer Persistence mode. These parameters are applicable only for replication. Both parameters are mandatory. If the Remote Copy volume group (RCG) name, as defined within the StorageClass , does not exist on the array, then a new RCG will be created. Parameter Option Description remoteCopyGroup Text Name of new or existing Remote Copy group 1 on the array. 
replicationDevices Text Indicates the name of the hpereplicationdeviceinfos Custom Resource Definition (CRD). allowBatchReplicatedVolumeCreation Boolean Enable the batch processing of persistent volumes in 10-second intervals and add them to a single Remote Copy group. (Optional) During this process, the Remote Copy group is stopped and started once. oneRcgPerPvc Boolean Creates a dedicated Remote Copy group per persistent volume. (Optional) Remote Copy additional details: 1 = Existing RCG must have CPG and Copy CPG configured. Link to HPE Primera OS: Configuring data replication using Remote Copy Important Remote Copy groups (RCG) created by the HPE CSI driver 2.1 and later have the Auto synchronize and Auto recover policies applied. To add or remove these policies from RCGs, modify the existing RCG using the SSMC or CLI with the following command: Add setrcopygroup pol auto_recover,auto_synchronize Remove setrcopygroup pol no_auto_recover,no_auto_synchronize Add Non-Replicated Volume to Remote Copy group \u00b6 To add a non-replicated volume to an existing Remote Copy group, allowMutations: description at minimum must be defined within the StorageClass . Refer to Remote Copy with Peer Persistence Replication for more details. Edit the non-replicated PVC and annotate the following parameters: Parameter Option Description remoteCopyGroup Text Name of existing Remote Copy group. oneRcgPerPvc Boolean Creates a dedicated Remote Copy group per persistent volume. (Optional) replicationDevices Text Indicates the name of the hpereplicationdeviceinfos Custom Resource Definition (CRD). Note remoteCopyGroup and oneRcgPerPvc parameters are mutually exclusive and cannot be added together when editing a PVC . VolumeSnapshotClass Parameters \u00b6 These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster; this is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description read_only Boolean Indicates if the snapshot is writable on the array. VolumeGroupClass Parameters \u00b6 In the HPE CSI Driver version 1.4.0+, a volume set with QoS settings can be created dynamically using the QoS parameters for the VolumeGroupClass . The following parameters are available for a VolumeGroup on the array. Learn more about VolumeGroups in the provisioning concepts documentation . Parameter String Description description Text An identifier to describe the VolumeGroupClass . Example: \"My VolumeGroupClass\" priority Text The priority level for the target volume set. Example: \"low\", \"normal\", \"high\" ioMinGoal Text IOPS minimum goal for the target volume set. Example: \"300\" ioMaxLimit Text IOPS maximum limit for the target volume set. Example: \"10000\" bwMinGoalKb Text Bandwidth minimum goal in kilobytes per second for the target volume set. Example: \"300\" bwMaxLimitKb Text Bandwidth maximum limit in kilobytes per second for the target volume set. Example: \"30000\" latencyGoal Text Latency goal in milliseconds (ms) or microseconds (us) for the target volume set. Example: \"300ms\" or \"500us\" domain Text The array Virtual Domain with which the volume group and related objects are associated. Example: \"sample_domain\" Important All QoS parameters are mandatory when creating a VolumeGroupClass on the array.
Example: apiVersion: storage.hpe.com/v1 kind: VolumeGroupClass metadata: name: my-volume-group-class provisioner: csi.hpe.com deletionPolicy: Delete parameters: description: \"HPE CSI Driver for Kubernetes Volume Group\" csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage priority: normal ioMinGoal: \"300\" ioMaxLimit: \"10000\" bwMinGoalKb: \"3000\" bwMaxLimitKb: \"30000\" latencyGoal: \"300ms\" SnapshotGroupClass Parameters \u00b6 These parameters are for SnapshotGroupClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description read_only Boolean Indicates if the snapshot is writable on the array. Static Provisioning \u00b6 Static provisioning of PVs and PVCs may be used when absolute control over physical volumes are required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass . Prerequisites \u00b6 The CSP expects a certain naming convention for PersistentVolumes and Virtual Volumes on the array. Persistent Volume: pvc-00000000-0000-0000-0000-000000000000 Virtual Volume: pvc-00000000-0000-0000-0000-000 Note The zeroes are used as examples. They can be replaced with any hexadecimal from 0 to f . Establishing a scheme may be important if static provisioning is going to be the main method of providing persistent storage to workloads. The following example uses the above scheme as a naming convention. Have a storage administrator rename the existing Virtual Volume on the array: setvv -name pvc-00000000-0000-0000-0000-000 my-existing-virtual-volume HPEVolumeInfo \u00b6 Create a new HPEVolumeInfo resource. apiVersion: storage.hpe.com/v2 kind: HPEVolumeInfo metadata: name: pvc-00000000-0000-0000-0000-000000000000 spec: record: Id: pvc-00000000-0000-0000-0000-000000000000 Name: pvc-00000000-0000-0000-0000-000 uuid: pvc-00000000-0000-0000-0000-000000000000 Persistent Volume \u00b6 Create a PV referencing the HPEVolumeInfo resource. Warning If a filesystem can't be detected on the device a new filesystem will be created. If the volume contains data, make sure the data reside in a whole device filesystem. apiVersion: v1 kind: PersistentVolume metadata: name: pvc-00000000-0000-0000-0000-000000000000 spec: accessModes: - ReadWriteOnce capacity: storage: 16Gi csi: volumeHandle: pvc-00000000-0000-0000-0000-000000000000 driver: csi.hpe.com fsType: xfs volumeAttributes: volumeAccessMode: mount fsType: xfs controllerPublishSecretRef: name: hpe-backend namespace: hpe-storage nodePublishSecretRef: name: hpe-backend namespace: hpe-storage controllerExpandSecretRef: name: hpe-backend namespace: hpe-storage persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem Tip Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion. Persistent Volume Claim \u00b6 Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName . 
apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Gi volumeName: my-static-pv-1 storageClassName: \"\" Support \u00b6 Please refer to the HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage CSP support statement .","title":"HPE Alletra Storage MP and Alletra 9000/Primera/3PAR"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#introduction","text":"The HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage Container Storage Provider (CSP) for Kubernetes is part of the HPE CSI Driver for Kubernetes . The CSP abstract the data management capabilities of the array for use by Kubernetes. Note The HPE CSI Driver for Kubernetes is only compatible with HPE Alletra Storage MP running with block services, such as HPE GreenLake for Block Storage. Introduction Platform Requirements Network Port Requirements User Role Requirements Virtual Domains VLUN Templates Change VLUN Template for existing PVCs StorageClass Parameters Common Provisioning Parameters Cloning Parameters Array Snapshot Parameters Import Parameters Remote Copy with Peer Persistence Synchronous Replication Parameters Add Non-Replicated Volume to Remote Copy group VolumeSnapshotClass Parameters VolumeGroupClass Parameters SnapshotGroupClass Parameters Static Provisioning Prerequisites HPEVolumeInfo Persistent Volume Persistent Volume Claim Support Note For help getting started with deploying the HPE CSI Driver using HPE Alletra Storage MP, Alletra 9000, Primera or 3PAR storage, check out the tutorial over at HPE Developer .","title":"Introduction"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#platform_requirements","text":"Check the corresponding CSI driver version in the compatibility and support table for the latest updates on supported Kubernetes version, orchestrators and host OS.","title":"Platform Requirements"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#network_port_requirements","text":"The HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Container Storage Provider requires the following TCP ports to be open inbound to the array from the Kubernetes cluster worker nodes running the HPE CSI Driver for Kubernetes. Port Protocol Description 443 HTTPS WSAPI (HPE Alletra Storage MP, Alletra 9000/Primera) 8080 HTTPS WSAPI (HPE 3PAR) 22 SSH Array communication","title":"Network Port Requirements"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#user_role_requirements","text":"The CSP requires access to a local user with either edit or the super role. It's recommended to use the edit role for security best practices. Note LDAP users are not supported by the CSP.","title":"User Role Requirements"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#virtual_domains","text":"Virtual Domains are not yet fully supported by the CSP. From HPE CSI Driver v2.5.0, it's possible to manually create the Kubernetes hosts connecting to storage within the Virtual Domain. Once the hosts have been created, deploy the CSI driver with the Helm chart using the \"disableHostDeletion\" parameter set to \"true\". The Virtual Domain user may create the hosts through the Virtual Domain if the \"AllowDomainUsersAffectNoDomain\" parameter is set to either \"hostonly\" or \"yes\" on the array. 
Note Remote Copy Groups managed by the CSP have not been tested with Virtual Domains at this time.","title":"Virtual Domains"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#vlun_templates","text":"A VLUN template enables the export of a virtual volume as a VLUN to hosts. For more information, see the HPE Primera OS Commmand Line Interface - Installation and Reference Guide . The CSP supports the following types of VLUN templates: Template Description Matched set The default VLUN template. The VLUN is visible to initiators with the host's WWNs only on the specified port(s). Host sees The VLUN is visible to the initiators with any of the host's WWNs. The boolean string \"hostSeesVLUN\" StorageClass parameter controls which VLUN template to use. Recommendation In most scenarios, \"hostSeesVLUN\" should be set to \"true\".","title":"VLUN Templates"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#change_vlun_template_for_existing_pvcs","text":"To modify an existing PVC , \"hostSeesVLUN\" needs to be specified with the \"allowMutations\" parameter along with adding the PVC annotation \"csi.hpe.com/hostSeesVLUN\" with the string values of either \"true\" or \"false\". The HPE CSI Driver creates the VLUN template based upon the hostSeesVLUN parameter during the volume publish operation. For the change to take effect, the Pod will need to be scheduled on another node by either deleting the Pod or draining the node.","title":"Change VLUN Template for existing PVCs"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#storageclass_parameters","text":"All parameters enumerated reflects the current version and may contain unannounced features and capabilities.","title":"StorageClass Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#common_provisioning_parameters","text":"Parameter Option Description accessProtocol ( Required ) fc or iscsi The access protocol to use when attaching the persistent volume. cpg 1 Text The name of existing CPG to be used for volume provisioning. If the cpg parameter is not specified, the CSP will select a CPG available to the array. snapCpg 1 Text The name of the snapshot CPG to be used for volume provisioning. Needs to be set if any kind of VolumeSnapshots or PVC cloning parameters are used. compression 1 Boolean Indicates that the volume should be compressed. (3PAR only) provisioningType 1 tpvv Default. Indicates Thin provisioned volume type. full 3 Indicates Full provisioned volume type. dedup 3 Indicates Thin Deduplication volume type. reduce 4 Indicates Data Reduction volume type. hostSeesVLUN Boolean Enable \"host sees\" VLUN template. importVolumeName Text Name of the volume to import. importVolAsClone Text Name of the volume to clone and import. cloneOf 2 Text Name of the PersistentVolumeClaim to clone. virtualCopyOf 2 Text Name of the PersistentVolumeClaim to snapshot. qosName Text Name of the volume set which has QoS rules applied. remoteCopyGroup 1 Text Name of a new or existing Remote Copy group on the array. replicationDevices Text Indicates name of custom resource of type hpereplicationdeviceinfos . allowBatchReplicatedVolumeCreation Boolean Enable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. During this process, the Remote Copy group is stopped and started once. oneRcgPerPvc Boolean Creates a dedicated Remote Copy group per persistent volume. 
iscsiPortalIps Text Comma separated list of the array iSCSI port IPs. fcPortsList Text Comma separated list of available FC ports. Example: \"0:5:1,1:4:2,2:4:1,3:4:2\" Default: Use all available ports. Restrictions applicable when using the CSI volume mutator : 1 = Parameters that are editable after provisioning. 2 = Volumes with snapshots/clones can't be modified. 3 = HPE 3PAR only parameter 4 = HPE Primera/Alletra 9000 only parameter Please see using the HPE CSI Driver for additional StorageClass examples like CSI snapshots and clones. Important The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim . Please see Using PVC Overrides for more details.","title":"Common Provisioning Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#cloning_parameters","text":"Cloning supports two modes of cloning. Either use cloneOf and reference a PersistentVolumeClaim in the current namespace to clone or use importVolAsClone and reference an array volume name to clone and import into the Kubernetes cluster. Volumes with clones are immutable once created. Parameter Option Description cloneOf Text The name of the PersistentVolumeClaim to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the array volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. accessProtocol fc or iscsi The access protocol to use when attaching the cloned volume. Important \u2022 No other parameters are required in the StorageClass while cloning outside of those parameters listed in the table above. \u2022 Cloning using above parameters is independent of snapshot CRD availability on Kubernetes and it can be performed on any supported Kubernetes version. \u2022 Support for importVolAsClone and cloneOf is available from HPE CSI Driver 1.3.0+.","title":"Cloning Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#array_snapshot_parameters","text":"During the snapshotting process, any existing PersistentVolumeClaim defined in the virtualCopyOf parameter within a StorageClass , will be snapped as PersistentVolumeClaim and exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Volumes with snapshots are immutable once created. Parameter Option Description accessProtocol fc or iscsi The access protocol to use when attaching the snapshot volume. virtualCopyOf Text The name of existing PersistentVolumeClaim to be snapped Important \u2022 No other parameters are required in the StorageClass when snapshotting a volume outside of those parameters listed in the table above. \u2022 Snapshotting using virtualCopyOf is independent of snapshot CRD availability on Kubernetes and it can be performed on any supported Kubernetes version. \u2022 Support for virtualCopyOf is available from HPE CSI Driver 1.3.0+.","title":"Array Snapshot Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#import_parameters","text":"During the import volume process, any legacy (non-container volumes) defined in the ImportVol parameter, within a StorageClass , will be renamed to match the PersistentVolumeClaim that leverages the StorageClass . The new volumes will be exposed through the HPE CSI Driver and made available to the Kubernetes cluster. Note: All previous Access Control Records and Initiator Groups will be removed from the volume when it is imported. 
Parameter Option Description accessProtocol fc or iscsi The access protocol to use when importing the volume. importVolumeName Text The name of the array volume to import. Important \u2022 No other parameters are required in the StorageClass when importing a volume outside of those parameters listed in the table above. \u2022 Support for importVolumeName is available from HPE CSI Driver 1.2.0+.","title":"Import Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#remote_copy_with_peer_persistence_synchronous_replication_parameters","text":"To enable replication within the HPE CSI Driver, the following steps must be completed: Create Secrets for both primary and target arrays. Refer to Configuring Additional Storage Backends . Create replication custom resource. Create replication enabled StorageClass . For a tutorial on how to enable replication, check out the blog Enabling Remote Copy using the HPE CSI Driver for Kubernetes on HPE Primera A Custom Resource Definition (CRD) of type hpereplicationdeviceinfos.storage.hpe.com must be created to define the target array information. The CRD object name will be used to define the StorageClass parameter replicationDevices . CRD mandatory parameters: targetCpg , targetName , targetSecret and targetSecretNamespace . HPE CSI Driver v2.1.0 and later apiVersion: storage.hpe.com/v2 kind: HPEReplicationDeviceInfo metadata: name: r1 spec: target_array_details: - targetCpg: targetSnapCpg: #optional. targetName: targetSecret: targetSecretNamespace: hpe-storage HPE CSI Driver v2.0.0 apiVersion: storage.hpe.com/v1 kind: HPEReplicationDeviceInfo metadata: name: r1 spec: target_array_details: - targetCpg: targetSnapCpg: #optional. targetName: targetSecret: targetSecretNamespace: hpe-storage Important The HPE CSI Driver only supports Remote Copy Peer Persistence mode. These parameters are applicable only for replication. Both parameters are mandatory. If the Remote Copy volume group (RCG) name, as defined within the StorageClass , does not exist on the array, then a new RCG will be created. Parameter Option Description remoteCopyGroup Text Name of new or existing Remote Copy group 1 on the array. replicationDevices Text Indicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD). allowBatchReplicatedVolumeCreation Boolean Enable the batch processing of persistent volumes in 10 second intervals and add them to a single Remote Copy group. (Optional) During this process, the Remote Copy group is stopped and started once. oneRcgPerPvc Boolean Creates a dedicated Remote Copy group per persistent volume. (Optional) Remote Copy additional details: 1 = Existing RCG must have CPG and Copy CPG configured. Link to HPE Primera OS: Configuring data replication using Remote Copy Important Remote Copy groups (RCG) created by the HPE CSI driver 2.1 and later have the Auto synchronize and Auto recover policies applied. To add or remove these policies from RCGs, modify the existing RCG using the SSMC or CLI with the following command: Add setrcopygroup pol auto_recover,auto_synchronize Remove setrcopygroup pol no_auto_recover,no_auto_synchronize ","title":"Remote Copy with Peer Persistence Synchronous Replication Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#add_non-replicated_volume_to_remote_copy_group","text":"To add a non-replicated volume to an existing Remote Copy group, allowMutations: description at minimum must be defined within the StorageClass . 
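A replication-enabled StorageClass built from the Remote Copy parameters above could look like the sketch below, reusing the hpereplicationdeviceinfos resource named \"r1\" from the CRD example; the StorageClass name, Remote Copy group name and CPG name are hypothetical, and the remaining csi.storage.k8s.io secret parameters are omitted for brevity.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-replicated
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  # add the controller-publish, node-stage, node-publish and controller-expand
  # secret parameters as shown in the earlier StorageClass sketch
  accessProtocol: fc
  cpg: SSD_r6                 # hypothetical CPG configured on both arrays
  remoteCopyGroup: my-rcg     # hypothetical; created by the CSP if it does not exist on the array
  replicationDevices: r1      # name of the HPEReplicationDeviceInfo resource from the example above
reclaimPolicy: Delete
allowVolumeExpansion: true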
Refer to Remote Copy with Peer Persistence Replication for more details. Edit the non-replicated PVC and annotate the following parameters: Parameter Option Description remoteCopyGroup Text Name of existing Remote Copy group. oneRcgPerPvc Boolean Creates a dedicated Remote Copy group per persistent volume. (Optional) replicationDevices Text Indicates name of hpereplicationdeviceinfos Custom Resource Definition (CRD). Note remoteCopyGroup and oneRcgPerPvc parameters are mutually exclusive and cannot be added together when editing a PVC .","title":"Add Non-Replicated Volume to Remote Copy group"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#volumesnapshotclass_parameters","text":"These parameters are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description read_only Boolean Indicates if the snapshot is writable on the array.","title":"VolumeSnapshotClass Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#volumegroupclass_parameters","text":"In the HPE CSI Driver version 1.4.0+, a volume set with QoS settings can be created dynamically using the QoS parameters for the VolumeGroupClass . The following parameters are available for a VolumeGroup on the array. Learn more about VolumeGroups in the provisioning concepts documentation . Parameter String Description description Text An identifier to describe the VolumeGroupClass . Example: \"My VolumeGroupClass\" priority Text The priority level for the target volume set. Example: \"low\", \"normal\", \"high\" ioMinGoal Text IOPS minimum goal for the target volume set. Example: \"300\" ioMaxLimit Text IOPS maximum limit for the target volume set. Example: \"10000\" bwMinGoalKb Text Bandwidth minimum goal in kilobytes per second for the target volume set. Example: \"300\" bwMaxLimitKb Text Bandwidth maximum limit in kilobytes per second for the target volume set. Example: \"30000\" latencyGoal Text Latency goal in milliseconds (ms) or microseconds(us) for the target volume set. Example: \"300ms\" or \"500us\" domain Text The array Virtual Domain, with which the volume group and related objects are associated with. Example: \"sample_domain\" Important All QoS parameters are mandatory when creating a VolumeGroupClass on the array. Example: apiVersion: storage.hpe.com/v1 kind: VolumeGroupClass metadata: name: my-volume-group-class provisioner: csi.hpe.com deletionPolicy: Delete parameters: description: \"HPE CSI Driver for Kubernetes Volume Group\" csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage priority: normal ioMinGoal: \"300\" ioMaxLimit: \"10000\" bwMinGoalKb: \"3000\" bwMaxLimitKb: \"30000\" latencyGoal: \"300ms\"","title":"VolumeGroupClass Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#snapshotgroupclass_parameters","text":"These parameters are for SnapshotGroupClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. Volumes with snapshots are immutable. 
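For reference, a minimal VolumeSnapshotClass sketch using the read_only parameter described above could look as follows. The Secret name hpe-backend and the hpe-storage namespace are assumptions; the snapshotter secret parameters follow the standard external-snapshotter conventions.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot-class
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  # Keep the array snapshot read-only
  read_only: "true"
  # Backend credentials for the external snapshotter (assumed Secret name and namespace)
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
```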
How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description read_only Boolean Indicates if the snapshot is writable on the array.","title":"SnapshotGroupClass Parameters"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#static_provisioning","text":"Static provisioning of PVs and PVCs may be used when absolute control over physical volumes are required by the storage administrator. This CSP also supports importing volumes and clones of volumes using the import parameters in a StorageClass .","title":"Static Provisioning"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#prerequisites","text":"The CSP expects a certain naming convention for PersistentVolumes and Virtual Volumes on the array. Persistent Volume: pvc-00000000-0000-0000-0000-000000000000 Virtual Volume: pvc-00000000-0000-0000-0000-000 Note The zeroes are used as examples. They can be replaced with any hexadecimal from 0 to f . Establishing a scheme may be important if static provisioning is going to be the main method of providing persistent storage to workloads. The following example uses the above scheme as a naming convention. Have a storage administrator rename the existing Virtual Volume on the array: setvv -name pvc-00000000-0000-0000-0000-000 my-existing-virtual-volume","title":"Prerequisites"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#hpevolumeinfo","text":"Create a new HPEVolumeInfo resource. apiVersion: storage.hpe.com/v2 kind: HPEVolumeInfo metadata: name: pvc-00000000-0000-0000-0000-000000000000 spec: record: Id: pvc-00000000-0000-0000-0000-000000000000 Name: pvc-00000000-0000-0000-0000-000 uuid: pvc-00000000-0000-0000-0000-000000000000","title":"HPEVolumeInfo"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#persistent_volume","text":"Create a PV referencing the HPEVolumeInfo resource. Warning If a filesystem can't be detected on the device a new filesystem will be created. If the volume contains data, make sure the data reside in a whole device filesystem. apiVersion: v1 kind: PersistentVolume metadata: name: pvc-00000000-0000-0000-0000-000000000000 spec: accessModes: - ReadWriteOnce capacity: storage: 16Gi csi: volumeHandle: pvc-00000000-0000-0000-0000-000000000000 driver: csi.hpe.com fsType: xfs volumeAttributes: volumeAccessMode: mount fsType: xfs controllerPublishSecretRef: name: hpe-backend namespace: hpe-storage nodePublishSecretRef: name: hpe-backend namespace: hpe-storage controllerExpandSecretRef: name: hpe-backend namespace: hpe-storage persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem Tip Remove .spec.csi.controllerExpandSecretRef to disallow volume expansion.","title":"Persistent Volume"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#persistent_volume_claim","text":"Now, a user may claim the static PV by creating a PVC referencing the PV name in .spec.volumeName . 
apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Gi volumeName: my-static-pv-1 storageClassName: \"\"","title":"Persistent Volume Claim"},{"location":"container_storage_provider/hpe_alletra_storage_mp/index.html#support","text":"Please refer to the HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage CSP support statement .","title":"Support"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Introduction \u00b6 The HPE Cloud Volumes CSP integrates seamlessly with the HPE Cloud Volumes Block service in the public cloud. The CSP abstracts the data management capabilities of the storage service for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important HPE Cloud Volumes Block configuration details. Important The HPE Cloud Volumes CSP is currently in beta and available as a Tech Preview on Amazon EKS only. Please see the 1.5.0-beta Helm chart . Introduction Cloud requirements Instance metadata Available regions Limitations StorageClass parameters Common parameters for provisioning and cloning Provisioning parameters Pod inline volume parameters (Local Ephemeral Volumes) Cloning parameters Import parameters VolumeSnapshotClass parameters Seealso There's a Tech Preview available in the Video Gallery on how to get started with the HPE Cloud Volumes CSP with the HPE CSI Driver. Cloud requirements \u00b6 Always check the corresponding CSI driver version in compatibility and support for basic requirements (such as supported Kubernetes version and cloud instance OS). If a certain feature is gated against any particular cloud provider it will be called out where applicable. Hyperscaler Managed Kubernetes BYO Kubernetes Status Amazon Web Services Elastic Kubernetes Service (EKS) N/A Tech Preview Microsoft Azure Azure Kubernetes Service (AKS) TBA TBA Google Cloud Google Kubernetes Engine (GKE) TBA TBA Additional hyperscaler support and BYO capabilities may become available in a future release of the CSP. Instance metadata \u00b6 Kubernetes compute nodes will need to have access to the cloud provider's metadata services. This varies by cloud provider and is taken care of automatically by the HPE Cloud Volume CSP. The provided values may be overridden in the StorageClass , see common parameters for more information. Available regions \u00b6 The HPE Cloud Volumes CSP may be deployed in the regions where the managed Kubernetes service control planes intersect with the HPE Cloud Volumes Block service. Region EKS Azure Google Americas us-east-1, us-west-2 TBA TBA Europe eu-west-1, eu-west-2 TBA TBA Asia Pacific ap-northest-1 TBA TBA Consider this table a snapshot of a particular moment in time and consult with the respective hyperscalers and the HPE Cloud Volumes Block service for definitive availability. Note In other regions where HPE Cloud Volumes provide services, such as us-west-1, but cloud providers has no managed Kubernetes service; BYO Kubernetes is the only available option when it becomes available as a supported feature of the CSP. 
Limitations \u00b6 Consult the compatibility and support table for generic limitations and requirements. CSI and CSP specific limitations with HPE Cloud Volumes Block is listed below. The Volume Group Provisioner and Volume Group Snapshotter sidecars are currently not implemented in the HPE Cloud Volumes CSP. The base CSI driver parameter description is ignored by the CSP. In some cases, your a \"regionID\" needs to be supplied in the StorageClass and in conjunction with Ephemeral Inline Volumes. Your \"regionID\" may only be found in the APIs. Join us on Slack if you're hitting this issue (it can be seen in the CSP logs). Tip While not a limitation, iSCSI CHAP is mandatory with HPE Cloud Volumes but does not need to be configured within the CSI driver. The CHAP credentials are queried through the REST APIs from the HPE Cloud Volumes account session and applied automatically during runtime. StorageClass parameters \u00b6 A StorageClass is used to provision or clone an HPE Cloud Volumes Block-backed persistent volume. It can also be used to import an existing Cloud Volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. Common parameters for provisioning and cloning Provisioning parameters Pod inline volume parameters (Local Ephemeral Volumes) Cloning parameters Import parameters VolumeSnapshotClass parameters Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflects the current version and may contain unannounced features and capabilities. Note All parameters are optional unless documented as mandatory for a particular use case. Common parameters for provisioning and cloning \u00b6 These parameters are mutable between a parent volume and creating a clone from a snapshot. Parameter String Description destroyOnDelete Boolean Indicates the backing Cloud Volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to \"false\" which means volumes needs to be pruned manually in the Cloud Volume service. limitIops Integer The IOPS limit of the volume. The IOPS limit should be in the range 300 (default) to 20000. performancePolicy 1 Text The name of the performance policy to assign to the volume. Available performance policies: \"Exchange\", \"Oracle\", \"SharePoint\", \"SQL\", \"Windows File Server\". Defaults to \"Other Workloads\". schedule Text Snapshot schedule to assign to the volumes. Available schedules: \"hourly\", \"daily\", \"twicedaily\", \"weekly\", \"monthly\", \"none\". Defaults to \"daily\". retentionPolicy Integer Retention policy to assign to the schedule . The parameter must be paired properly with the schedule . hourly: 6, 12, 24 daily: 3, 7, 14 twicedaily: 4, 8, 14 weekly: 2, 4, 8 monthly: 3, 6, 12 Defaults to \"3\" paired with the \"daily\" retentionPolicy . privateCloud 1 Text Override the compute instance provided VPC/VNET. existingCloudSubnet 1 Text Override the compute instance provided subnet. automatedConnection 1 Boolean Override the HPE Cloud Volumes configured setting for connection automation. Connections between HPE Cloud Volumes and the desired VPC/VNET needs to be provisioned manually if set to \"false\". Restrictions applicable when using the CSI volume mutator : 1 = Parameter is immutable and can't be altered after provisioning/cloning. Provisioning parameters \u00b6 These parameters are immutable for both volumes and clones once created, clones will inherit parent attributes. 
Parameter String Description volumeType Text Volume type, General Purpose Flash (\"GPF\") or Premium Flash (\"PF\"). Defaults to \"PF\" Pod inline volume parameters (Local Ephemeral Volumes) \u00b6 These parameters are applicable only for Pod inline volumes and to be specified within Pod spec. Parameter String Description csi.storage.k8s.io/ephemeral Boolean Indicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to \"true\". inline-volume-secret-name Text A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. inline-volume-secret-namespace Text The namespace of inline-volume-secret-name for ephemeral inline volume. size Text The size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used. Important All parameters are required for inline ephemeral volumes. Cloning parameters \u00b6 Cloning supports two modes of cloning. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Cloud Volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the Cloud Volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. replStore Text Name of the Cloud Volume Replication Store to look for volumes, defaults to look outside of Replication Stores Import parameters \u00b6 Importing volumes to Kubernetes requires the source Cloud Volume to be disconnected. Parameter String Description importVolumeName Text The name of the Cloud Volume to import. forceImport Boolean Allows import of volumes created on a different Kubernetes cluster other than the one importing the volume to. VolumeSnapshotClass parameters \u00b6 These parametes are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description description Text Text to be added to the snapshot's description in the Cloud Volume service (optional)","title":"Index"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#introduction","text":"The HPE Cloud Volumes CSP integrates seamlessly with the HPE Cloud Volumes Block service in the public cloud. The CSP abstracts the data management capabilities of the storage service for use by Kubernetes. The documentation found herein is mainly geared towards day-2 operations and reference documentation for the StorageClass and VolumeSnapshotClass parameters but also contains important HPE Cloud Volumes Block configuration details. Important The HPE Cloud Volumes CSP is currently in beta and available as a Tech Preview on Amazon EKS only. Please see the 1.5.0-beta Helm chart . 
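Putting the inline volume parameters above into context, the following is a minimal Pod sketch for an ephemeral inline volume. It assumes a Secret named hpe-cloud-volumes in the hpe-storage namespace holding the HPE Cloud Volumes credentials and passes the mandatory attributes from the table as volumeAttributes; names and the size are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-inline
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:
      driver: csi.hpe.com
      volumeAttributes:
        # Mandatory inline ephemeral volume parameters (see the table above)
        csi.storage.k8s.io/ephemeral: "true"
        inline-volume-secret-name: hpe-cloud-volumes
        inline-volume-secret-namespace: hpe-storage
        size: "16Gi"
```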
Introduction Cloud requirements Instance metadata Available regions Limitations StorageClass parameters Common parameters for provisioning and cloning Provisioning parameters Pod inline volume parameters (Local Ephemeral Volumes) Cloning parameters Import parameters VolumeSnapshotClass parameters Seealso There's a Tech Preview available in the Video Gallery on how to get started with the HPE Cloud Volumes CSP with the HPE CSI Driver.","title":"Introduction"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#cloud_requirements","text":"Always check the corresponding CSI driver version in compatibility and support for basic requirements (such as supported Kubernetes version and cloud instance OS). If a certain feature is gated against any particular cloud provider it will be called out where applicable. Hyperscaler Managed Kubernetes BYO Kubernetes Status Amazon Web Services Elastic Kubernetes Service (EKS) N/A Tech Preview Microsoft Azure Azure Kubernetes Service (AKS) TBA TBA Google Cloud Google Kubernetes Engine (GKE) TBA TBA Additional hyperscaler support and BYO capabilities may become available in a future release of the CSP.","title":"Cloud requirements"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#instance_metadata","text":"Kubernetes compute nodes will need to have access to the cloud provider's metadata services. This varies by cloud provider and is taken care of automatically by the HPE Cloud Volume CSP. The provided values may be overridden in the StorageClass , see common parameters for more information.","title":"Instance metadata"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#available_regions","text":"The HPE Cloud Volumes CSP may be deployed in the regions where the managed Kubernetes service control planes intersect with the HPE Cloud Volumes Block service. Region EKS Azure Google Americas us-east-1, us-west-2 TBA TBA Europe eu-west-1, eu-west-2 TBA TBA Asia Pacific ap-northest-1 TBA TBA Consider this table a snapshot of a particular moment in time and consult with the respective hyperscalers and the HPE Cloud Volumes Block service for definitive availability. Note In other regions where HPE Cloud Volumes provide services, such as us-west-1, but cloud providers has no managed Kubernetes service; BYO Kubernetes is the only available option when it becomes available as a supported feature of the CSP.","title":"Available regions"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#limitations","text":"Consult the compatibility and support table for generic limitations and requirements. CSI and CSP specific limitations with HPE Cloud Volumes Block is listed below. The Volume Group Provisioner and Volume Group Snapshotter sidecars are currently not implemented in the HPE Cloud Volumes CSP. The base CSI driver parameter description is ignored by the CSP. In some cases, your a \"regionID\" needs to be supplied in the StorageClass and in conjunction with Ephemeral Inline Volumes. Your \"regionID\" may only be found in the APIs. Join us on Slack if you're hitting this issue (it can be seen in the CSP logs). Tip While not a limitation, iSCSI CHAP is mandatory with HPE Cloud Volumes but does not need to be configured within the CSI driver. 
The CHAP credentials are queried through the REST APIs from the HPE Cloud Volumes account session and applied automatically during runtime.","title":"Limitations"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#storageclass_parameters","text":"A StorageClass is used to provision or clone an HPE Cloud Volumes Block-backed persistent volume. It can also be used to import an existing Cloud Volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. Common parameters for provisioning and cloning Provisioning parameters Pod inline volume parameters (Local Ephemeral Volumes) Cloning parameters Import parameters VolumeSnapshotClass parameters Please see using the HPE CSI Driver for base StorageClass examples. All parameters enumerated reflects the current version and may contain unannounced features and capabilities. Note All parameters are optional unless documented as mandatory for a particular use case.","title":"StorageClass parameters"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#common_parameters_for_provisioning_and_cloning","text":"These parameters are mutable between a parent volume and creating a clone from a snapshot. Parameter String Description destroyOnDelete Boolean Indicates the backing Cloud Volume (including snapshots) should be destroyed when the PVC is deleted. Defaults to \"false\" which means volumes needs to be pruned manually in the Cloud Volume service. limitIops Integer The IOPS limit of the volume. The IOPS limit should be in the range 300 (default) to 20000. performancePolicy 1 Text The name of the performance policy to assign to the volume. Available performance policies: \"Exchange\", \"Oracle\", \"SharePoint\", \"SQL\", \"Windows File Server\". Defaults to \"Other Workloads\". schedule Text Snapshot schedule to assign to the volumes. Available schedules: \"hourly\", \"daily\", \"twicedaily\", \"weekly\", \"monthly\", \"none\". Defaults to \"daily\". retentionPolicy Integer Retention policy to assign to the schedule . The parameter must be paired properly with the schedule . hourly: 6, 12, 24 daily: 3, 7, 14 twicedaily: 4, 8, 14 weekly: 2, 4, 8 monthly: 3, 6, 12 Defaults to \"3\" paired with the \"daily\" retentionPolicy . privateCloud 1 Text Override the compute instance provided VPC/VNET. existingCloudSubnet 1 Text Override the compute instance provided subnet. automatedConnection 1 Boolean Override the HPE Cloud Volumes configured setting for connection automation. Connections between HPE Cloud Volumes and the desired VPC/VNET needs to be provisioned manually if set to \"false\". Restrictions applicable when using the CSI volume mutator : 1 = Parameter is immutable and can't be altered after provisioning/cloning.","title":"Common parameters for provisioning and cloning"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#provisioning_parameters","text":"These parameters are immutable for both volumes and clones once created, clones will inherit parent attributes. Parameter String Description volumeType Text Volume type, General Purpose Flash (\"GPF\") or Premium Flash (\"PF\"). Defaults to \"PF\"","title":"Provisioning parameters"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#pod_inline_volume_parameters_local_ephemeral_volumes","text":"These parameters are applicable only for Pod inline volumes and to be specified within Pod spec. 
Parameter String Description csi.storage.k8s.io/ephemeral Boolean Indicates that the request is for ephemeral inline volume. This is a mandatory parameter and must be set to \"true\". inline-volume-secret-name Text A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. inline-volume-secret-namespace Text The namespace of inline-volume-secret-name for ephemeral inline volume. size Text The size of ephemeral volume specified in MiB or GiB. If unspecified, a default value will be used. Important All parameters are required for inline ephemeral volumes.","title":"Pod inline volume parameters (Local Ephemeral Volumes)"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#cloning_parameters","text":"Cloning supports two modes of cloning. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Cloud Volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the Cloud Volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. replStore Text Name of the Cloud Volume Replication Store to look for volumes, defaults to look outside of Replication Stores","title":"Cloning parameters"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#import_parameters","text":"Importing volumes to Kubernetes requires the source Cloud Volume to be disconnected. Parameter String Description importVolumeName Text The name of the Cloud Volume to import. forceImport Boolean Allows import of volumes created on a different Kubernetes cluster other than the one importing the volume to.","title":"Import parameters"},{"location":"container_storage_provider/hpe_cloud_volumes/index.html#volumesnapshotclass_parameters","text":"These parametes are for VolumeSnapshotClass objects when using CSI snapshots. The external snapshotter needs to be deployed on the Kubernetes cluster and is usually performed by the Kubernetes vendor. Check enabling CSI snapshots for more information. How to use VolumeSnapshotClass and VolumeSnapshot objects is elaborated on in using CSI snapshots . Parameter String Description description Text Text to be added to the snapshot's description in the Cloud Volume service (optional)","title":"VolumeSnapshotClass parameters"},{"location":"csi_driver/index.html","text":"Introduction \u00b6 A Container Storage Interface ( CSI ) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider (CSP) to perform data management operations on storage resources. The architecture of the CSI driver allows block storage vendors to implement a CSP that follows the specification (a browser friendly version ). The CSI driver architecture allows a complete separation of concerns between upstream Kubernetes core, SIG Storage (CSI owners), CSI driver author (HPE) and the backend CSP developer. Tip The HPE CSI Driver for Kubernetes is vendor agnostic. 
Any entity may leverage the driver and provide their own Container Storage Provider. Table of Contents \u00b6 Introduction Table of Contents Features and Capabilities Compatibility and Support HPE CSI Driver for Kubernetes 2.5.0 HPE CSI Driver for Kubernetes 2.4.2 HPE CSI Driver for Kubernetes 2.4.1 HPE CSI Driver for Kubernetes 2.4.0 Release Archive Known Limitations iSCSI CHAP Considerations Existing PVs and iSCSI sessions CSI driver 2.5.0 and Above Upgrade Considerations Enable iSCSI CHAP CSI driver 1.3.0 to 2.4.2 CSI driver 1.2.1 and Below Kubernetes Feature Gates Kubernetes 1.13 Kubernetes 1.14 Kubernetes 1.15 Kubernetes 1.19 Features and Capabilities \u00b6 CSI gradually mature features and capabilities in the specification at the pace of the community. HPE keep a close watch on differentiating features the primary storage family of products may be suitable for implementing in CSI and Kubernetes. HPE experiment early and often. That's why it's sometimes possible to observe a certain feature being available in the CSI driver although it hasn't been announced or isn't documented. Below is the official table for CSI features we track and deem readily available for use after we've officially tested and validated it in the platform matrix . Feature K8s maturity Since K8s version HPE CSI Driver Dynamic Provisioning Stable 1.13 1.0.0 Volume Expansion Stable 1.24 1.1.0 Volume Snapshots Stable 1.20 1.1.0 PVC Data Source Stable 1.18 1.1.0 Raw Block Volume Stable 1.18 1.2.0 Inline Ephemeral Volumes Beta 1.16 1.2.0 Volume Limits Stable 1.17 1.2.0 Volume Mutator 1 N/A 1.15 1.3.0 Generic Ephemeral Volumes GA 1.23 1.3.0 Volume Groups 1 N/A 1.17 1.4.0 Snapshot Groups 1 N/A 1.17 1.4.0 NFS Server Provisioner 1 N/A 1.17 1.4.0 Volume Encryption 1 N/A 1.18 2.0.0 Basic Topology 3 Stable 1.17 2.5.0 Advanced Topology 3 Stable 1.17 Future Storage Capacity Tracking Stable 1.24 Future Volume Expansion From Source Stable 1.27 Future ReadWriteOncePod Stable 1.29 Future Volume Populator Beta 1.24 Future Volume Health Alpha 1.21 Future Cross Namespace Snapshots Alpha 1.26 Future Upstream Volume Group Snapshot Alpha 1.27 Future Volume Attribute Classes Alpha 1.29 Future 1 = HPE CSI Driver for Kubernetes specific CSI sidecar. CSP support may vary. 2 = Alpha features are enabled by Kubernetes feature gates and are not formally supported by HPE. 3 = Topology information can only be used to describe accessibility relationships between a set of nodes and a single backend using a StorageClass . Depending on the CSP, it may support a number of different snapshotting, cloning and restoring operations by taking advantage of StorageClass parameter overloading. Please see the respective CSP for additional functionality. Refer to the official table of feature gates in the Kubernetes docs to find availability of beta and alpha features. HPE provide limited support on non-GA CSI features. Please file any issues, questions or feature requests here . You may also join our Slack community to chat with HPE folks close to this project. We hang out in #Alletra , #NimbleStorage , #3par-primera and #Kubernetes , sign up at slack.hpedev.io and login at hpedev.slack.com . Tip Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a Helm chart or an Operator . 
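For orientation, a typical Helm based installation looks roughly like the sketch below. The repository URL, release name and namespace are assumptions based on the publicly published co-deployments chart repository; follow the Helm chart instructions linked above for the authoritative procedure.

```text
# Add the HPE storage chart repository (URL assumed from the public co-deployments repo)
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
helm repo update

# Install the CSI driver into its own namespace
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \
  --namespace hpe-storage --create-namespace
```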
Compatibility and Support \u00b6 These are the combinations HPE has tested and can provide official support services around for each of the CSI driver releases. Each Container Storage Provider has it's own requirements in terms of storage platform OS and may have other constraints not listed here. Note For Kubernetes 1.12 and earlier please see legacy FlexVolume drivers , do note that the FlexVolume drivers are being deprecated. HPE CSI Driver for Kubernetes 2.5.0 \u00b6 Release highlights: Support for Kubernetes 1.30 and OpenShift 4.16 Introducing CSI Topology support for StorageClasses A \"Node Monitor\" has been added to improve device management Support for attempting automatic filesystem repairs in the event of failed mounts (\"fsRepair\" StorageClass parameter) Improved handling of iSCSI CHAP credentials Added \"nfsNodeSelector\", \"nfsResourceRequestsCpuM\" and \"nfsResourceRequestsMemoryMi\" StorageClass parameters New Helm Chart parameters to control resource requests and limits for node, controller and CSP containers Reworked image handling in the Helm Chart to improve supportability Various improvements in accessMode handling Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Current users of CHAP needs to review the iSCSI CHAP Considerations The importVol parameter has been renamed importVolumeName for HPE Alletra Storage MP and Alletra 9000/Primera/3PAR note HPE CSI Driver v2.5.0 is deployed with v2.5.1 of the Helm chart and Operator Kubernetes 1.27-1.30 1 Helm Chart v2.5.1 on ArtifactHub Operators v2.5.1 on OperatorHub v2.5.1 via OpenShift console Worker OS Red Hat Enterprise Linux 2 7.x, 8.x, 9.x, Red Hat CoreOS 4.14-4.16 Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04 SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro 4 equivalents Platforms 3 Alletra Storage MP 5 10.2.x - 10.4.x Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.2.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.5.0 on GitHub Blogs HPE CSI Driver for Kubernetes 2.5.0: Improved stateful workload resilience and robustness * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims for volumeMode: Filesystem . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. While RHEL 7 and its derives will work, the host OS have been EOL'd and support is limited. 3 = Learn about each data platform's team support commitment . 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start. 5 = The HPE CSI Driver for Kubernetes only support HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage. 
HPE CSI Driver for Kubernetes 2.4.2 \u00b6 Release highlights: Patch release Kubernetes 1.26-1.29 1 Helm Chart v2.4.2 on ArtifactHub Operators v2.4.2 on OperatorHub v2.4.2 via OpenShift console Worker OS Red Hat Enterprise Linux 2 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15 Ubuntu 16.04, 18.04, 20.04, 22.04 SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro 4 equivalents Platforms 3 Alletra Storage MP 5 10.2.x - 10.4.x Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.2.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.4.2 on GitHub * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 3 = Learn about each data platform's team support commitment . 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start. 5 = The HPE CSI Driver for Kubernetes only support HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage. HPE CSI Driver for Kubernetes 2.4.1 \u00b6 Release highlights: HPE Alletra Storage MP support Kubernetes 1.29 support Full KubeVirt, OpenShift Virtualization and SUSE Harvester support for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Full ARM64 support for HPE Alletra 5000/6000 and Nimble Storage Support for foreign StorageClasses with the NFS Server Provisioner SUSE Linux Enterprise Micro OS (SLE Micro) support Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.26-1.29 1 Helm Chart v2.4.1 on ArtifactHub Operators v2.4.1 on OperatorHub v2.4.1 via OpenShift console Worker OS Red Hat Enterprise Linux 2 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15 Ubuntu 16.04, 18.04, 20.04, 22.04 SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro 4 equivalents Platforms 3 Alletra Storage MP 5 10.2.x - 10.3.x Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.2.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.4.1 on GitHub Blogs Introducing HPE Alletra Storage MP to HPE CSI Driver for Kubernetes * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 
2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 3 = Learn about each data platform's team support commitment . 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start. 5 = The HPE CSI Driver for Kubernetes only support HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage. HPE CSI Driver for Kubernetes 2.4.0 \u00b6 Release highlights: Kubernetes 1.27 and 1.28 support KubeVirt and OpenShift Virtualization support for Nimble/Alletra 5000/6000 Enhanced scheduling for the NFS Server Provisioner Multiarch images (Linux ARM64/AMD64) for the CSI driver components and Alletra 9000 CSP Major updates to SIG Storage images Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.25-1.28 1 Helm Chart v2.4.0 on ArtifactHub Operators v2.4.0 on OperatorHub v2.4.0 via OpenShift console Worker OS RHEL 2 7.x, 8.x, 9.x, RHCOS 4.12-4.14 Ubuntu 16.04, 18.04, 20.04, 22.04 SLES 15 SP3, SP4, SP5 Platforms 3 Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.1.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.4.0 on GitHub Blogs Introduction to new workload paradigms with HPE CSI Driver for Kubernetes * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 3 = Learn about each data platform's team support commitment . Release Archive \u00b6 HPE currently supports up to three minor releases of the HPE CSI Driver for Kubernetes. Unsupported releases Known Limitations \u00b6 Always check with the Kubernetes vendor distribution which CSI features are available for use and supported by the vendor. When using Kubernetes in virtual machines on VMware vSphere, OpenStack or similiar, iSCSI is the only supported data protocol for the HPE CSI Driver when using block storage. The CSI driver does not support NPIV. Ephemeral, transient or non-persistent Kubernetes nodes are not supported unless the /etc/hpe-storage directory persists across node upgrades or reboots. The path is relocatable using a custom Helm chart or deployment manifest by altering the mountPath parameter for the directory. The CSI driver support a fixed number of volumes per node. Inspect the current limitation by running kubectl get csinodes -o yaml and inspect .spec.drivers.allocatable for \"csi.hpe.com\". The \"count\" element contains how many volumes the node can attach from the HPE CSI Driver (default is 100). The HPE CSI Driver uses host networking for the node driver. Some CNIs have flaky implementations which prevents the CSI driver components to communicate properly. Especially notorious is Flannel on K3s. 
Use Calico if possible for the widest compatibility. The NFS Server Provisioner and each of the CSPs have known limitations listed separately. iSCSI CHAP Considerations \u00b6 If iSCSI CHAP is being used in the environment, consider the following. Existing PVs and iSCSI sessions \u00b6 It's not recommended to retro fit CHAP into an existing environment where PersistentVolumes are already provisioned and attached. If necessary, all iSCSI sessions needs to be logged out from and the CSI driver Helm chart needs to be installed with cluster-wide iSCSI CHAP credentials for iSCSI CHAP to be effective, otherwise existing non-authenticated sessions will be reused. CSI driver 2.5.0 and Above \u00b6 In 2.5.0 and later the CHAP credentials must be supplied by a separate Secret . The Secret may be supplied when installing the Helm Chart (the Secret must exist prior) or referened in the StorageClass . Upgrade Considerations \u00b6 When using CHAP with 2.4.2 or older the CHAP credentials were provided in clear text in the Helm Chart. To continue to use CHAP for those existing PersistentVolumes , a CHAP Secret needs to be created and referenced in the Helm Chart install. New StorageClasses may reference the same Secret , it's recommended to use a different Secret to distinguish legacy and new PersistentVolumes . Enable iSCSI CHAP \u00b6 How to enable iSCSI CHAP in the current version of the HPE CSI Driver is available in the user documentation . CSI driver 1.3.0 to 2.4.2 \u00b6 CHAP is an optional part of the initial deployment of the driver with parameters passed to Helm or the Operator. For object definitions, the CHAP_USER and CHAP_PASSWORD needs to be supplied to the csi-node-driver . The CHAP username and secret is picked up in the hpenodeinfo Custom Resource Definition (CRD). The CSP is under contract to create the user if it doesn't exist on the backend. CHAP is a good measure to prevent unauthorized access to iSCSI targets, it does not encrypt data on the wire. CHAP secrets should be at least twelve charcters in length. CSI driver 1.2.1 and Below \u00b6 In version 1.2.1 and below, the CSI driver did not support CHAP natively. CHAP must be enabled manually on the worker nodes before deploying the CSI driver on the cluster. This also needs to be applied to new worker nodes before they join the cluster. Kubernetes Feature Gates \u00b6 Different features mature at different rates. Refer to the official table of feature gates in the Kubernetes docs. The following guidelines appliy to which feature gates got introduced as alphas for the corresponding version of Kubernetes. For example, ExpandCSIVolumes got introduced in 1.14 but is still an alpha in 1.15, hence you need to enable that feature gate in 1.15 as well if you want to use it. 
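Circling back to the iSCSI CHAP handling described above for 2.5.0 and later, a CHAP Secret might be sketched as follows before it is referenced at Helm Chart install time or in a StorageClass. The key names chapUser and chapPassword are assumptions for illustration; consult the user documentation linked under Enable iSCSI CHAP for the exact format.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-chap-secret
  namespace: hpe-storage
stringData:
  # Key names are illustrative; verify against the Enable iSCSI CHAP documentation
  chapUser: my-chap-user
  chapPassword: my-chap-secret12   # at least twelve characters
```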
Kubernetes 1.13 \u00b6 --allow-privileged flag must be set to true for the API server Kubernetes 1.14 \u00b6 --allow-privileged flag must be set to true for the API server --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true feature gate flags must be set to true for both the API server and kubelet for resize support Kubernetes 1.15 \u00b6 --allow-privileged flag must be set to true for the API server --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true feature gate flags must be set to true for both the API server and kubelet for resize support --feature-gates=CSIInlineVolume=true feature gate flag must be set to true for both the API server and kubelet for pod inline volumes (Ephemeral Local Volumes) support --feature-gates=VolumePVCDataSource=true feature gate flag must be set to true for both the API server and kubelet for Volume cloning support Kubernetes 1.19 \u00b6 --feature-gates=GenericEphemeralVolume=true feature gate flags needs to be passed to api-server, scheduler, controller-manager and kubelet to enable Generic Ephemeral Volumes","title":"Overview"},{"location":"csi_driver/index.html#introduction","text":"A Container Storage Interface ( CSI ) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider (CSP) to perform data management operations on storage resources. The architecture of the CSI driver allows block storage vendors to implement a CSP that follows the specification (a browser friendly version ). The CSI driver architecture allows a complete separation of concerns between upstream Kubernetes core, SIG Storage (CSI owners), CSI driver author (HPE) and the backend CSP developer. Tip The HPE CSI Driver for Kubernetes is vendor agnostic. Any entity may leverage the driver and provide their own Container Storage Provider.","title":"Introduction"},{"location":"csi_driver/index.html#table_of_contents","text":"Introduction Table of Contents Features and Capabilities Compatibility and Support HPE CSI Driver for Kubernetes 2.5.0 HPE CSI Driver for Kubernetes 2.4.2 HPE CSI Driver for Kubernetes 2.4.1 HPE CSI Driver for Kubernetes 2.4.0 Release Archive Known Limitations iSCSI CHAP Considerations Existing PVs and iSCSI sessions CSI driver 2.5.0 and Above Upgrade Considerations Enable iSCSI CHAP CSI driver 1.3.0 to 2.4.2 CSI driver 1.2.1 and Below Kubernetes Feature Gates Kubernetes 1.13 Kubernetes 1.14 Kubernetes 1.15 Kubernetes 1.19","title":"Table of Contents"},{"location":"csi_driver/index.html#features_and_capabilities","text":"CSI gradually mature features and capabilities in the specification at the pace of the community. HPE keep a close watch on differentiating features the primary storage family of products may be suitable for implementing in CSI and Kubernetes. HPE experiment early and often. That's why it's sometimes possible to observe a certain feature being available in the CSI driver although it hasn't been announced or isn't documented. Below is the official table for CSI features we track and deem readily available for use after we've officially tested and validated it in the platform matrix . 
Feature K8s maturity Since K8s version HPE CSI Driver Dynamic Provisioning Stable 1.13 1.0.0 Volume Expansion Stable 1.24 1.1.0 Volume Snapshots Stable 1.20 1.1.0 PVC Data Source Stable 1.18 1.1.0 Raw Block Volume Stable 1.18 1.2.0 Inline Ephemeral Volumes Beta 1.16 1.2.0 Volume Limits Stable 1.17 1.2.0 Volume Mutator 1 N/A 1.15 1.3.0 Generic Ephemeral Volumes GA 1.23 1.3.0 Volume Groups 1 N/A 1.17 1.4.0 Snapshot Groups 1 N/A 1.17 1.4.0 NFS Server Provisioner 1 N/A 1.17 1.4.0 Volume Encryption 1 N/A 1.18 2.0.0 Basic Topology 3 Stable 1.17 2.5.0 Advanced Topology 3 Stable 1.17 Future Storage Capacity Tracking Stable 1.24 Future Volume Expansion From Source Stable 1.27 Future ReadWriteOncePod Stable 1.29 Future Volume Populator Beta 1.24 Future Volume Health Alpha 1.21 Future Cross Namespace Snapshots Alpha 1.26 Future Upstream Volume Group Snapshot Alpha 1.27 Future Volume Attribute Classes Alpha 1.29 Future 1 = HPE CSI Driver for Kubernetes specific CSI sidecar. CSP support may vary. 2 = Alpha features are enabled by Kubernetes feature gates and are not formally supported by HPE. 3 = Topology information can only be used to describe accessibility relationships between a set of nodes and a single backend using a StorageClass . Depending on the CSP, it may support a number of different snapshotting, cloning and restoring operations by taking advantage of StorageClass parameter overloading. Please see the respective CSP for additional functionality. Refer to the official table of feature gates in the Kubernetes docs to find availability of beta and alpha features. HPE provide limited support on non-GA CSI features. Please file any issues, questions or feature requests here . You may also join our Slack community to chat with HPE folks close to this project. We hang out in #Alletra , #NimbleStorage , #3par-primera and #Kubernetes , sign up at slack.hpedev.io and login at hpedev.slack.com . Tip Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a Helm chart or an Operator .","title":"Features and Capabilities"},{"location":"csi_driver/index.html#compatibility_and_support","text":"These are the combinations HPE has tested and can provide official support services around for each of the CSI driver releases. Each Container Storage Provider has it's own requirements in terms of storage platform OS and may have other constraints not listed here. Note For Kubernetes 1.12 and earlier please see legacy FlexVolume drivers , do note that the FlexVolume drivers are being deprecated.","title":"Compatibility and Support"},{"location":"csi_driver/index.html#hpe_csi_driver_for_kubernetes_250","text":"Release highlights: Support for Kubernetes 1.30 and OpenShift 4.16 Introducing CSI Topology support for StorageClasses A \"Node Monitor\" has been added to improve device management Support for attempting automatic filesystem repairs in the event of failed mounts (\"fsRepair\" StorageClass parameter) Improved handling of iSCSI CHAP credentials Added \"nfsNodeSelector\", \"nfsResourceRequestsCpuM\" and \"nfsResourceRequestsMemoryMi\" StorageClass parameters New Helm Chart parameters to control resource requests and limits for node, controller and CSP containers Reworked image handling in the Helm Chart to improve supportability Various improvements in accessMode handling Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . 
Current users of CHAP needs to review the iSCSI CHAP Considerations The importVol parameter has been renamed importVolumeName for HPE Alletra Storage MP and Alletra 9000/Primera/3PAR note HPE CSI Driver v2.5.0 is deployed with v2.5.1 of the Helm chart and Operator Kubernetes 1.27-1.30 1 Helm Chart v2.5.1 on ArtifactHub Operators v2.5.1 on OperatorHub v2.5.1 via OpenShift console Worker OS Red Hat Enterprise Linux 2 7.x, 8.x, 9.x, Red Hat CoreOS 4.14-4.16 Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04 SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro 4 equivalents Platforms 3 Alletra Storage MP 5 10.2.x - 10.4.x Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.2.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.5.0 on GitHub Blogs HPE CSI Driver for Kubernetes 2.5.0: Improved stateful workload resilience and robustness * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims for volumeMode: Filesystem . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. While RHEL 7 and its derives will work, the host OS have been EOL'd and support is limited. 3 = Learn about each data platform's team support commitment . 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start. 5 = The HPE CSI Driver for Kubernetes only support HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.","title":"HPE CSI Driver for Kubernetes 2.5.0"},{"location":"csi_driver/index.html#hpe_csi_driver_for_kubernetes_242","text":"Release highlights: Patch release Kubernetes 1.26-1.29 1 Helm Chart v2.4.2 on ArtifactHub Operators v2.4.2 on OperatorHub v2.4.2 via OpenShift console Worker OS Red Hat Enterprise Linux 2 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15 Ubuntu 16.04, 18.04, 20.04, 22.04 SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro 4 equivalents Platforms 3 Alletra Storage MP 5 10.2.x - 10.4.x Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.2.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.4.2 on GitHub * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 
3 = Learn about each data platform's team support commitment . 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start. 5 = The HPE CSI Driver for Kubernetes only support HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.","title":"HPE CSI Driver for Kubernetes 2.4.2"},{"location":"csi_driver/index.html#hpe_csi_driver_for_kubernetes_241","text":"Release highlights: HPE Alletra Storage MP support Kubernetes 1.29 support Full KubeVirt, OpenShift Virtualization and SUSE Harvester support for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Full ARM64 support for HPE Alletra 5000/6000 and Nimble Storage Support for foreign StorageClasses with the NFS Server Provisioner SUSE Linux Enterprise Micro OS (SLE Micro) support Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.26-1.29 1 Helm Chart v2.4.1 on ArtifactHub Operators v2.4.1 on OperatorHub v2.4.1 via OpenShift console Worker OS Red Hat Enterprise Linux 2 7.x, 8.x, 9.x, Red Hat CoreOS 4.12-4.15 Ubuntu 16.04, 18.04, 20.04, 22.04 SUSE Linux Enterprise Server 15 SP3, SP4, SP5 and SLE Micro 4 equivalents Platforms 3 Alletra Storage MP 5 10.2.x - 10.3.x Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.2.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.4.1 on GitHub Blogs Introducing HPE Alletra Storage MP to HPE CSI Driver for Kubernetes * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 3 = Learn about each data platform's team support commitment . 4 = SLE Micro nodes may need to be conformed manually, run transactional-update -n pkg install multipath-tools open-iscsi nfs-client sg3_utils and reboot if the CSI node driver doesn't start. 5 = The HPE CSI Driver for Kubernetes only support HPE Alletra Storage MP when used with HPE GreenLake for Block Storage. Please see the VAST CSI Driver for HPE GreenLake for File Storage.","title":"HPE CSI Driver for Kubernetes 2.4.1"},{"location":"csi_driver/index.html#hpe_csi_driver_for_kubernetes_240","text":"Release highlights: Kubernetes 1.27 and 1.28 support KubeVirt and OpenShift Virtualization support for Nimble/Alletra 5000/6000 Enhanced scheduling for the NFS Server Provisioner Multiarch images (Linux ARM64/AMD64) for the CSI driver components and Alletra 9000 CSP Major updates to SIG Storage images Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . 
Kubernetes 1.25-1.28 1 Helm Chart v2.4.0 on ArtifactHub Operators v2.4.0 on OperatorHub v2.4.0 via OpenShift console Worker OS RHEL 2 7.x, 8.x, 9.x, RHCOS 4.12-4.14 Ubuntu 16.04, 18.04, 20.04, 22.04 SLES 15 SP3, SP4, SP5 Platforms 3 Alletra OS 9000 9.3.x - 9.5.x Alletra OS 5000/6000 6.0.0.x - 6.1.1.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.2.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocols Fibre Channel, iSCSI Filesystems XFS, ext3/ext4, btrfs, NFSv4 * Release notes v2.4.0 on GitHub Blogs Introduction to new workload paradigms with HPE CSI Driver for Kubernetes * = The HPE CSI Driver for Kubernetes is a block storage driver primarily. It includes an NFS Server Provisioner that allows \"ReadWriteMany\" PersistentVolumeClaims . 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE. 3 = Learn about each data platform's team support commitment .","title":"HPE CSI Driver for Kubernetes 2.4.0"},{"location":"csi_driver/index.html#release_archive","text":"HPE currently supports up to three minor releases of the HPE CSI Driver for Kubernetes. Unsupported releases","title":"Release Archive"},{"location":"csi_driver/index.html#known_limitations","text":"Always check with the Kubernetes vendor distribution which CSI features are available for use and supported by the vendor. When using Kubernetes in virtual machines on VMware vSphere, OpenStack or similar, iSCSI is the only supported data protocol for the HPE CSI Driver when using block storage. The CSI driver does not support NPIV. Ephemeral, transient or non-persistent Kubernetes nodes are not supported unless the /etc/hpe-storage directory persists across node upgrades or reboots. The path is relocatable using a custom Helm chart or deployment manifest by altering the mountPath parameter for the directory. The CSI driver supports a fixed number of volumes per node. Inspect the current limit by running kubectl get csinodes -o yaml and inspecting .spec.drivers.allocatable for \"csi.hpe.com\". The \"count\" element contains how many volumes the node can attach from the HPE CSI Driver (default is 100). The HPE CSI Driver uses host networking for the node driver. Some CNIs have flaky implementations which prevent the CSI driver components from communicating properly. Especially notorious is Flannel on K3s. Use Calico if possible for the widest compatibility. The NFS Server Provisioner and each of the CSPs have known limitations listed separately.","title":"Known Limitations"},{"location":"csi_driver/index.html#iscsi_chap_considerations","text":"If iSCSI CHAP is being used in the environment, consider the following.","title":"iSCSI CHAP Considerations"},{"location":"csi_driver/index.html#existing_pvs_and_iscsi_sessions","text":"It's not recommended to retrofit CHAP into an existing environment where PersistentVolumes are already provisioned and attached. 
If necessary, all iSCSI sessions need to be logged out of, and the CSI driver Helm chart needs to be installed with cluster-wide iSCSI CHAP credentials for iSCSI CHAP to be effective; otherwise existing non-authenticated sessions will be reused.","title":"Existing PVs and iSCSI sessions"},{"location":"csi_driver/index.html#csi_driver_250_and_above","text":"In 2.5.0 and later the CHAP credentials must be supplied by a separate Secret . The Secret may be supplied when installing the Helm Chart (the Secret must exist prior to the install) or referenced in the StorageClass .","title":"CSI driver 2.5.0 and Above"},{"location":"csi_driver/index.html#upgrade_considerations","text":"When using CHAP with 2.4.2 or older, the CHAP credentials were provided in clear text in the Helm Chart. To continue to use CHAP for those existing PersistentVolumes , a CHAP Secret needs to be created and referenced in the Helm Chart install. New StorageClasses may reference the same Secret , but it's recommended to use a different Secret to distinguish legacy and new PersistentVolumes .","title":"Upgrade Considerations"},{"location":"csi_driver/index.html#enable_iscsi_chap","text":"How to enable iSCSI CHAP in the current version of the HPE CSI Driver is available in the user documentation .","title":"Enable iSCSI CHAP"},{"location":"csi_driver/index.html#csi_driver_130_to_242","text":"CHAP is an optional part of the initial deployment of the driver with parameters passed to Helm or the Operator. For object definitions, the CHAP_USER and CHAP_PASSWORD need to be supplied to the csi-node-driver . The CHAP username and secret are picked up in the hpenodeinfo Custom Resource Definition (CRD). The CSP is under contract to create the user if it doesn't exist on the backend. CHAP is a good measure to prevent unauthorized access to iSCSI targets, but it does not encrypt data on the wire. CHAP secrets should be at least twelve characters in length.","title":"CSI driver 1.3.0 to 2.4.2"},{"location":"csi_driver/index.html#csi_driver_121_and_below","text":"In version 1.2.1 and below, the CSI driver did not support CHAP natively. CHAP must be enabled manually on the worker nodes before deploying the CSI driver on the cluster. This also needs to be applied to new worker nodes before they join the cluster.","title":"CSI driver 1.2.1 and Below"},{"location":"csi_driver/index.html#kubernetes_feature_gates","text":"Different features mature at different rates. Refer to the official table of feature gates in the Kubernetes docs. The following guidelines apply to the feature gates introduced as alpha in the corresponding version of Kubernetes. 
For example, ExpandCSIVolumes got introduced in 1.14 but is still an alpha in 1.15, hence you need to enable that feature gate in 1.15 as well if you want to use it.","title":"Kubernetes Feature Gates"},{"location":"csi_driver/index.html#kubernetes_113","text":"--allow-privileged flag must be set to true for the API server","title":"Kubernetes 1.13"},{"location":"csi_driver/index.html#kubernetes_114","text":"--allow-privileged flag must be set to true for the API server --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true feature gate flags must be set to true for both the API server and kubelet for resize support","title":"Kubernetes 1.14"},{"location":"csi_driver/index.html#kubernetes_115","text":"--allow-privileged flag must be set to true for the API server --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true feature gate flags must be set to true for both the API server and kubelet for resize support --feature-gates=CSIInlineVolume=true feature gate flag must be set to true for both the API server and kubelet for pod inline volumes (Ephemeral Local Volumes) support --feature-gates=VolumePVCDataSource=true feature gate flag must be set to true for both the API server and kubelet for Volume cloning support","title":"Kubernetes 1.15"},{"location":"csi_driver/index.html#kubernetes_119","text":"--feature-gates=GenericEphemeralVolume=true feature gate flags needs to be passed to api-server, scheduler, controller-manager and kubelet to enable Generic Ephemeral Volumes","title":"Kubernetes 1.19"},{"location":"csi_driver/archive.html","text":"Unsupported Releases \u00b6 HPE supports up to three minor releases. These release are kept here for historic purposes. Unsupported Releases HPE CSI Driver for Kubernetes 2.3.0 HPE CSI Driver for Kubernetes 2.2.0 HPE CSI Driver for Kubernetes 2.1.1 HPE CSI Driver for Kubernetes 2.1.0 HPE CSI Driver for Kubernetes 2.0.0 HPE CSI Driver for Kubernetes 1.4.0 HPE CSI Driver/Operator for Kubernetes 1.3.0 HPE CSI Driver for Kubernetes 1.2.0 HPE CSI Driver for Kubernetes 1.1.1 HPE CSI Driver for Kubernetes 1.1.0 HPE CSI Driver for Kubernetes 1.0.0 HPE CSI Driver for Kubernetes 2.3.0 \u00b6 Release highlights: Introducing HPE Alletra 5000 Security updates Support for Kubernetes 1.25-1.26 and Red Hat OpenShift 4.11-4.12 Support for SLES 15 SP4, RHEL 9.1 and Ubuntu 22.04 Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.23-1.26 1 Helm Chart v2.3.0 on ArtifactHub Operators v2.3.0 on OperatorHub v2.3.0 via OpenShift console Worker OS RHEL 2 7.x, 8.x, 9.x, RHCOS 4.10-4.12 Ubuntu 16.04, 18.04, 20.04, 22.04 SLES 15 SP2, SP3, SP4 Platforms 3 Alletra OS 5000/6000 6.0.0.x - 6.1.1.x Alletra OS 9000 9.3.x - 9.5.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.1.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocol Fibre Channel, iSCSI Release notes v2.3.0 on GitHub Blogs Support and security updates for HPE CSI Driver for Kubernetes (release blog) 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 3 = Learn about each data platform's team support commitment . 
HPE CSI Driver for Kubernetes 2.2.0 \u00b6 Release highlights: Support for Kubernetes 1.24 and Red Hat OpenShift 4.10 Added Tolerations, Affinity, Labels and Node Selectors to Helm chart Improved automatic recovery for the NFS Server Provisioner Added multipath handling for Alletra 9000, Primera and 3PAR Volume expansion of encrypted volumes Upgrade considerations: Existing encrypted volumes needs to be migrated to allow expansion Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.21-1.24 1 Helm Chart v2.2.0 on ArtifactHub Operators v2.2.1 on OperatorHub v2.2.1 via OpenShift console Worker OS RHEL 2 7.x & 8.x, RHCOS 4.8 & 4.10 Ubuntu 16.04, 18.04 & 20.04 SLES 15 SP2 Platforms Alletra OS 6000 6.0.0.x - 6.1.0.x Alletra OS 9000 9.3.x - 9.5.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.0.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocol Fibre Channel, iSCSI Release notes v2.2.0 on GitHub Blogs Updates and Improvements to HPE CSI Driver for Kubernetes (release blog) 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. HPE CSI Driver for Kubernetes 2.1.1 \u00b6 Release highlights: Support for Kubernetes 1.23 Upstream CSI sidecar updates Improved LUN discoverability in certain environments Kubernetes 1.20-1.23 1 Worker OS CentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 Data protocol Fibre Channel, iSCSI Platforms Alletra OS 6000 6.0.0.x Alletra OS 9000 9.4.x Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x Primera OS 4.3.x, 4.4.x 3PAR OS 3.3.2 Release notes v2.1.1 on GitHub 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. HPE CSI Driver for Kubernetes 2.1.0 \u00b6 Release highlights: Prometheus exporters Support for Red Hat OCP 4.8 Support for Kubernetes 1.22 Reliability/Stability enhancements Peer Persistence Remote Copy enhancements Volume Mutator enhancements Logging enhancements Kubernetes 1.20-1.22 1 Worker OS CentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 Data protocol Fibre Channel, iSCSI Platforms Alletra OS 6000 6.0.0.x Alletra OS 9000 9.3.x, 9.4.x Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x, 4.4.x 3PAR OS 3.3.1, 3.3.2 Release notes v2.1.0 on GitHub Blogs HPE CSI Driver enhancements with monitoring and alerting (release blog) Get started with Prometheus and Grafana and HPE Storage Array Exporter (tutorial) 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. 
HPE CSI Driver for Kubernetes 2.0.0 \u00b6 Release highlights: Support for HPE Alletra 5000/6000 and 9000 Host-based volume encryption Multitenancy for HPE Alletra 5000/6000 and Nimble Storage Kubernetes 1.18-1.21 1 Worker OS CentOS and RHEL 7.x & 8.x, RHCOS 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP2 Data protocol Fibre Channel, iSCSI Platforms Alletra OS 6000 6.0.0.x Alletra OS 9000 9.3.0 Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x 3PAR OS 3.3.1, 3.3.2 Release notes v2.0.0 on GitHub Blogs HPE CSI Driver for Kubernetes now available for HPE Alletra (release blog) Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble (tutorial) Host-based Volume Encryption with HPE CSI Driver for Kubernetes (tutorial) 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. HPE CSI Driver for Kubernetes 1.4.0 \u00b6 Release highlights: Kubernetes CSI Sidecars: Volume Group Provisioner and Volume Group Snapshotter NFS Server Provisioner GA HPE Primera Remote Copy Peer Persistence support Air-gap support for the Helm chart Kubernetes 1.17-1.20 1 Worker OS CentOS and RHEL 7.7 & 8.1, RHCOS 4.4 & 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP1 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.10.0-x, 5.1.4.200-x, 5.2.1.0-x, 5.3.0.0-x, 5.3.1.0-x 3PAR OS 3.3.1+ Primera OS 4.0+ Release notes v1.4.0 on GitHub Blogs HPE CSI Driver for Kubernetes v1.4.0 now available! (release blog) Synchronized Volume Snapshots for Distributed Workloads on Kubernetes (tutorial) 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. HPE CSI Driver/Operator for Kubernetes 1.3.0 \u00b6 Release highlights: Kubernetes CSI Sidecar: Volume Mutator Broader ecosystem support Native iSCSI CHAP configuration Kubernetes 1.15-1.18 1 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.3-4.4, Ubuntu 18.04, Ubuntu 20.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x 3PAR OS 3.3.1 Primera OS 4.0.0, 4.1.0, 4.2.0 2 Release notes v1.3.0 on GitHub Blogs Around The Storage Block (release) HPE DEV (Remote copy peer persistence tutorial) HPE DEV (Introducing the volume mutator) 1 = For HPE Ezmeral Container Platform and Rancher; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. 2 = Only FC is supported on Primera OS prior to 4.2.0. HPE CSI Driver for Kubernetes 1.2.0 \u00b6 Release highlights: Support for raw block volumes and inline ephemeral volumes. NFS Server Provisioner in Tech Preview (beta). 
Kubernetes 1.14-1.18 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.10.x, 5.1.3.1000-x, 5.1.4.200-x, 5.2.1.x 3PAR OS 3.3.1 Primera OS 4.0.0, 4.1.0 (FC only) Release notes v1.2.0 on GitHub Blogs Around The Storage Block (release) HPE DEV (tutorial for raw block and inline volumes) Around The Storage Block (NFS Server Provisioner) HPE DEV (tutorial for NFS) HPE CSI Driver for Kubernetes 1.1.1 \u00b6 Release highlights: Support for HPE 3PAR and Primera Container Storage Provider. Kubernetes 1.13-1.17 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x 3PAR OS 3.3.1 Primera OS 4.0.0, 4.1.0 (FC only) Release notes N/A Blogs HPE Storage Tech Insiders (release), HPE DEV (tutorial for \"primera3par\" CSP) HPE CSI Driver for Kubernetes 1.1.0 \u00b6 Release highlights: Broader ecosystem support, official support for CSI snapshots and volume resize. Kubernetes 1.13-1.17 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x Release notes v1.1.0 on GitHub Blogs HPE Storage Tech Insiders (release), HPE DEV (snapshots, clones, resize) HPE CSI Driver for Kubernetes 1.0.0 \u00b6 Release highlights: Initial GA release with support for Dynamic Provisioning. Kubernetes 1.13-1.17 Worker OS CentOS 7.6, RHEL 7.6, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x Release notes v1.0.0 on GitHub Blogs HPE Storage Tech Insiders (release), HPE DEV (architecture and introduction)","title":"Unsupported Releases"},{"location":"csi_driver/archive.html#unsupported_releases","text":"HPE supports up to three minor releases. These release are kept here for historic purposes. Unsupported Releases HPE CSI Driver for Kubernetes 2.3.0 HPE CSI Driver for Kubernetes 2.2.0 HPE CSI Driver for Kubernetes 2.1.1 HPE CSI Driver for Kubernetes 2.1.0 HPE CSI Driver for Kubernetes 2.0.0 HPE CSI Driver for Kubernetes 1.4.0 HPE CSI Driver/Operator for Kubernetes 1.3.0 HPE CSI Driver for Kubernetes 1.2.0 HPE CSI Driver for Kubernetes 1.1.1 HPE CSI Driver for Kubernetes 1.1.0 HPE CSI Driver for Kubernetes 1.0.0","title":"Unsupported Releases"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_230","text":"Release highlights: Introducing HPE Alletra 5000 Security updates Support for Kubernetes 1.25-1.26 and Red Hat OpenShift 4.11-4.12 Support for SLES 15 SP4, RHEL 9.1 and Ubuntu 22.04 Upgrade considerations: Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.23-1.26 1 Helm Chart v2.3.0 on ArtifactHub Operators v2.3.0 on OperatorHub v2.3.0 via OpenShift console Worker OS RHEL 2 7.x, 8.x, 9.x, RHCOS 4.10-4.12 Ubuntu 16.04, 18.04, 20.04, 22.04 SLES 15 SP2, SP3, SP4 Platforms 3 Alletra OS 5000/6000 6.0.0.x - 6.1.1.x Alletra OS 9000 9.3.x - 9.5.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.1.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocol Fibre Channel, iSCSI Release notes v2.3.0 on GitHub Blogs Support and security updates for HPE CSI Driver for Kubernetes (release blog) 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. 
Lowest tested and known working version is Kubernetes 1.21. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE. 3 = Learn about each data platform's team support commitment .","title":"HPE CSI Driver for Kubernetes 2.3.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_220","text":"Release highlights: Support for Kubernetes 1.24 and Red Hat OpenShift 4.10 Added Tolerations, Affinity, Labels and Node Selectors to Helm chart Improved automatic recovery for the NFS Server Provisioner Added multipath handling for Alletra 9000, Primera and 3PAR Volume expansion of encrypted volumes Upgrade considerations: Existing encrypted volumes needs to be migrated to allow expansion Existing claims provisioned with the NFS Server Provisioner needs to be upgraded . Kubernetes 1.21-1.24 1 Helm Chart v2.2.0 on ArtifactHub Operators v2.2.1 on OperatorHub v2.2.1 via OpenShift console Worker OS RHEL 2 7.x & 8.x, RHCOS 4.8 & 4.10 Ubuntu 16.04, 18.04 & 20.04 SLES 15 SP2 Platforms Alletra OS 6000 6.0.0.x - 6.1.0.x Alletra OS 9000 9.3.x - 9.5.x Nimble OS 5.0.10.x, 5.2.1.x, 6.0.0.x, 6.1.0.x Primera OS 4.3.x - 4.5.x 3PAR OS 3.3.x Data protocol Fibre Channel, iSCSI Release notes v2.2.0 on GitHub Blogs Updates and Improvements to HPE CSI Driver for Kubernetes (release blog) 1 = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. 2 = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derives and they are supported by HPE.","title":"HPE CSI Driver for Kubernetes 2.2.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_211","text":"Release highlights: Support for Kubernetes 1.23 Upstream CSI sidecar updates Improved LUN discoverability in certain environments Kubernetes 1.20-1.23 1 Worker OS CentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 Data protocol Fibre Channel, iSCSI Platforms Alletra OS 6000 6.0.0.x Alletra OS 9000 9.4.x Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x Primera OS 4.3.x, 4.4.x 3PAR OS 3.3.2 Release notes v2.1.1 on GitHub 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. 
See partner ecosystems for other variations.","title":"HPE CSI Driver for Kubernetes 2.1.1"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_210","text":"Release highlights: Prometheus exporters Support for Red Hat OCP 4.8 Support for Kubernetes 1.22 Reliability/Stability enhancements Peer Persistence Remote Copy enhancements Volume Mutator enhancements Logging enhancements Kubernetes 1.20-1.22 1 Worker OS CentOS and RHEL 7.x & 8.x, RHCOS 4.6 & 4.8, Ubuntu 18.04 & 20.04, SLES 15 SP2 Data protocol Fibre Channel, iSCSI Platforms Alletra OS 6000 6.0.0.x Alletra OS 9000 9.3.x, 9.4.x Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x, 4.4.x 3PAR OS 3.3.1, 3.3.2 Release notes v2.1.0 on GitHub Blogs HPE CSI Driver enhancements with monitoring and alerting (release blog) Get started with Prometheus and Grafana and HPE Storage Array Exporter (tutorial) 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations.","title":"HPE CSI Driver for Kubernetes 2.1.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_200","text":"Release highlights: Support for HPE Alletra 5000/6000 and 9000 Host-based volume encryption Multitenancy for HPE Alletra 5000/6000 and Nimble Storage Kubernetes 1.18-1.21 1 Worker OS CentOS and RHEL 7.x & 8.x, RHCOS 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP2 Data protocol Fibre Channel, iSCSI Platforms Alletra OS 6000 6.0.0.x Alletra OS 9000 9.3.0 Nimble OS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x, 5.3.1.x, 6.0.0.x Primera OS 4.0.x, 4.1.x, 4.2.x, 4.3.x 3PAR OS 3.3.1, 3.3.2 Release notes v2.0.0 on GitHub Blogs HPE CSI Driver for Kubernetes now available for HPE Alletra (release blog) Multitenancy for Kubernetes clusters using HPE Alletra 5000/6000 and Nimble (tutorial) Host-based Volume Encryption with HPE CSI Driver for Kubernetes (tutorial) 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations.","title":"HPE CSI Driver for Kubernetes 2.0.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_140","text":"Release highlights: Kubernetes CSI Sidecars: Volume Group Provisioner and Volume Group Snapshotter NFS Server Provisioner GA HPE Primera Remote Copy Peer Persistence support Air-gap support for the Helm chart Kubernetes 1.17-1.20 1 Worker OS CentOS and RHEL 7.7 & 8.1, RHCOS 4.4 & 4.6, Ubuntu 18.04 & 20.04, SLES 15 SP1 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.10.0-x, 5.1.4.200-x, 5.2.1.0-x, 5.3.0.0-x, 5.3.1.0-x 3PAR OS 3.3.1+ Primera OS 4.0+ Release notes v1.4.0 on GitHub Blogs HPE CSI Driver for Kubernetes v1.4.0 now available! (release blog) Synchronized Volume Snapshots for Distributed Workloads on Kubernetes (tutorial) 1 = For HPE Ezmeral Runtime Enterprise, Rancher and Mirantis Kubernetes Engine; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. 
See partner ecosystems for other variations.","title":"HPE CSI Driver for Kubernetes 1.4.0"},{"location":"csi_driver/archive.html#hpe_csi_driveroperator_for_kubernetes_130","text":"Release highlights: Kubernetes CSI Sidecar: Volume Mutator Broader ecosystem support Native iSCSI CHAP configuration Kubernetes 1.15-1.18 1 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.3-4.4, Ubuntu 18.04, Ubuntu 20.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.10.x, 5.1.4.200-x, 5.2.1.x, 5.3.0.x 3PAR OS 3.3.1 Primera OS 4.0.0, 4.1.0, 4.2.0 2 Release notes v1.3.0 on GitHub Blogs Around The Storage Block (release) HPE DEV (Remote copy peer persistence tutorial) HPE DEV (Introducing the volume mutator) 1 = For HPE Ezmeral Container Platform and Rancher; Kubernetes clusters must be deployed within the currently supported range of \"Worker OS\" platforms listed in the above table. See partner ecosystems for other variations. 2 = Only FC is supported on Primera OS prior to 4.2.0.","title":"HPE CSI Driver/Operator for Kubernetes 1.3.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_120","text":"Release highlights: Support for raw block volumes and inline ephemeral volumes. NFS Server Provisioner in Tech Preview (beta). Kubernetes 1.14-1.18 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.10.x, 5.1.3.1000-x, 5.1.4.200-x, 5.2.1.x 3PAR OS 3.3.1 Primera OS 4.0.0, 4.1.0 (FC only) Release notes v1.2.0 on GitHub Blogs Around The Storage Block (release) HPE DEV (tutorial for raw block and inline volumes) Around The Storage Block (NFS Server Provisioner) HPE DEV (tutorial for NFS)","title":"HPE CSI Driver for Kubernetes 1.2.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_111","text":"Release highlights: Support for HPE 3PAR and Primera Container Storage Provider. Kubernetes 1.13-1.17 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x 3PAR OS 3.3.1 Primera OS 4.0.0, 4.1.0 (FC only) Release notes N/A Blogs HPE Storage Tech Insiders (release), HPE DEV (tutorial for \"primera3par\" CSP)","title":"HPE CSI Driver for Kubernetes 1.1.1"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_110","text":"Release highlights: Broader ecosystem support, official support for CSI snapshots and volume resize. Kubernetes 1.13-1.17 Worker OS CentOS 7.6, RHEL 7.6, RHCOS 4.2-4.3, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x Release notes v1.1.0 on GitHub Blogs HPE Storage Tech Insiders (release), HPE DEV (snapshots, clones, resize)","title":"HPE CSI Driver for Kubernetes 1.1.0"},{"location":"csi_driver/archive.html#hpe_csi_driver_for_kubernetes_100","text":"Release highlights: Initial GA release with support for Dynamic Provisioning. Kubernetes 1.13-1.17 Worker OS CentOS 7.6, RHEL 7.6, Ubuntu 16.04, Ubuntu 18.04 Data protocol Fibre Channel, iSCSI Platforms NimbleOS 5.0.8.x, 5.1.3.x, 5.1.4.x Release notes v1.0.0 on GitHub Blogs HPE Storage Tech Insiders (release), HPE DEV (architecture and introduction)","title":"HPE CSI Driver for Kubernetes 1.0.0"},{"location":"csi_driver/deployment.html","text":"Overview \u00b6 The HPE CSI Driver is deployed by using industry standard means, either a Helm chart or an Operator. 
An \"advanced install\" from object configuration files is provided as reference for partners, OEMs and users wanting to perform customizations and their own packaging or deployment methodologies. Overview Delivery Vehicles Need Help Deciding? Helm Helm for Air-gapped Environments Version 2.5.0 and newer Operator Red Hat OpenShift Container Platform Upstream Kubernetes and Others Add an HPE Storage Backend Secret Parameters Configuring Additional Storage Backends Create a StorageClass with the Custom Secret Advanced Install Manual CSI Driver Install Common Advanced Uninstall Downgrading the CSI driver Delivery Vehicles \u00b6 As different methods of installation are provided, it might not be too obvious which delivery vehicle is the right one. Need Help Deciding? \u00b6 I have a... Then you need... Vanilla upstream Kubernetes cluster on a supported host OS. The Helm chart Red Hat OpenShift 4.x cluster. The certified CSI operator for OpenShift Supported environment with multiple backends. Helm chart with additional Secrets and StorageClasses HPE Ezmeral Runtime Enterprise environment. The Helm chart Operator Life-cycle Manager (OLM) environment. The CSI operator Unsupported host OS/Kubernetes cluster and like to tinker. The advanced install Supported platform in an air-gapped environment The Helm chart using the air-gapped procedure Undecided? If it's not clear what you should use for your environment, the Helm chart is most likely the correct answer. Helm \u00b6 Helm is the package manager for Kubernetes. Software is being delivered in a format designated as a \"chart\". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file. The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub . The chart only supports Helm 3 from version 1.3.0 of the HPE CSI Driver. In an effort to avoid duplicate documentation, please see the chart for instructions on how to deploy the CSI driver using Helm. Go to the chart on Artifact Hub . Helm for Air-gapped Environments \u00b6 In the event of deploying the HPE CSI Driver in a secure air-gapped environment, Helm is the recommended method. For sake of completeness, it's also possible to follow the advanced install procedures and replace \"quay.io\" in the deployment manifests with the internal private registry location. Establish a working directory on a bastion Linux host that has HTTP access to the Internet, the private registry and the Kubernetes cluster where the CSI driver needs to be installed. The bastion host is assumed to have the docker , helm and curl command installed. It's also assumed throughout that the user executing docker has logged in to the private registry and that pulling images from the private registry is allowed anonymously by the Kubernetes compute nodes. Note Only the HPE CSI Driver 1.4.0 and later is supported using this methodology. Create a working directory and set environment variables referenced throughout the procedure. In this example, we'll use HPE CSI Driver v2.5.0 on Kubernetes 1.30. Available versions are found in the co-deployments GitHub repo . mkdir hpe-csi-driver cd hpe-csi-driver export MY_REGISTRY=registry.enterprise.example.com export MY_CSI_DRIVER=2.5.0 export MY_K8S=1.30 Next, create a list with the CSI driver images. Copy and paste the entire text blob in one chunk. 
curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/hpe-csi-k8s-${MY_K8S}.yaml \\ https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/nimble-csp.yaml \\ https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/3par-primera-csp.yaml \\ | grep image: | awk '{print $2}' | sort | uniq > images echo quay.io/hpestorage/nfs-provisioner:v3.0.5 >> images Important In HPE CSI Driver 2.4.2 and earlier the NFS Server Provisioner image is not automatically pulled from the private registry once installed. Use the \"nfsProvisionerImage\" parameter in the StorageClass . The above command should not output anything. A list of images should be in the file \"images\". Pull, tag and push the images to the private registry. cat images | xargs -n 1 docker pull awk '{ print $1\" \"$1 }' images | sed -E -e \"s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/\" | xargs -n 2 docker tag sed -E -e \"s/quay.io|registry.k8s.io/${MY_REGISTRY}/\" images | xargs -n 1 docker push Tip Depending on what kind of private registry being used, the base repositories hpestorage and sig-storage might need to be created and given write access to the user pushing the images. Next, install the chart as normal with the additional registry parameter. This is an example, please refer to the Helm chart documentation on ArtifactHub. helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/ kubectl create ns hpe-storage Version 2.4.2 or earlier. helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage --version ${MY_CSI_DRIVER} --set registry=${MY_REGISTRY} Version 2.5.0 or newer, skip to \u2192 Version 2.5.0 and newer . Note If the client running helm is in the air-gapped environment as well, the docs directory needs to be hosted on a web server in the air-gapped environment, and then use helm repo add hpe-storage https://my-web-server.internal/docs above instead. Version 2.5.0 and newer \u00b6 In version 2.5.0 and onwards, all images used by the HPE CSI Driver for Kubernetes Helm Chart are parameterized individually with the fully qualified URL. Use the procedure above to mirror the images to an internal registry. Once mirrored, replace the registry names in the reference values.yaml file. curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/values/csi-driver/v${MY_CSI_DRIVER}/values.yaml | sed -E -e \"s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/g\" > my-values.yaml Use the my-values.yaml file to install the Helm Chart. helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \\ -n hpe-storage --version ${MY_CSI_DRIVER} \\ -f my-values.yaml Operator \u00b6 The Operator pattern is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes. The official HPE CSI Operator for Kubernetes is hosted on OperatorHub.io . The CSI Operator images are hosted both on quay.io and officially certified containers in the Red Hat Ecosystem Catalog. Red Hat OpenShift Container Platform \u00b6 The HPE CSI Operator for Kubernetes is a fully certified Operator for OpenShift. There are a few tweaks needed and there's a separate section for OpenShift. See Red Hat OpenShift in the partner ecosystem section Upstream Kubernetes and Others \u00b6 Follow the documentation from the respective upstream distributions on how to deploy an Operator. 
In most cases, the Operator Lifecyle Manager (OLM) needs to be installed separately (does NOT apply to OpenShift 4 and later). Visit the documentation in the OLM GitHub repo to learn how to install OLM. Once OLM is operational, install the HPE CSI Operator. kubectl create -f https://operatorhub.io/install/hpe-csi-operator.yaml The Operator will be installed in my-hpe-csi-operator namespace. Watch it come up by inspecting the ClusterServiceVersion (CSV). kubectl get csv -n my-hpe-csi-operator Next, a HPECSIDriver object needs to be instantiated. Create a file named hpe-csi-operator.yaml , edit and apply (or copy the command from the top of the content). HPE CSI Operator v2.5.1 # kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableHostDeletion: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false disableNodeMonitor: false imagePullPolicy: IfNotPresent images: csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1 csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7 csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1 csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6 csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6 csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6 nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5 nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0 primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0 iscsi: chapSecretName: \"\" kubeletRootDir: /var/lib/kubelet logLevel: info node: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] v2.4.2 # kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} tolerations: [] registry: quay.io v2.4.1 # kubectl apply -f 
https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} tolerations: [] registry: quay.io Tip The contents depends on which version of the CSI driver is installed. Please visit OperatorHub or ArtifactHub for more details. The CSI driver is now ready for use. Proceed to the next section to learn about adding an HPE storage backend . Add an HPE Storage Backend \u00b6 Once the CSI driver is deployed, two additional objects needs to be created to get started with dynamic provisioning of persistent storage, a Secret and a StorageClass . Tip Naming the Secret and StorageClass is entirely up to the user, however, to keep up with the examples on SCOD, it's highly recommended to use the names illustrated here. Secret Parameters \u00b6 All parameters are mandatory and described below. Parameter Description serviceName This hostname or IP address where the Container Storage Provider (CSP) is running, usually a Kubernetes Service , such as \"alletra6000-csp-svc\" or \"alletra9000-csp-svc\" servicePort This is port the serviceName is listening to. backend This is the management hostname or IP address of the actual backend storage system, such as an Alletra 5000/6000 or 9000 array. username Backend storage system username with the correct privileges to perform storage management. password Backend storage system password. Example: HPE Alletra Storage MP apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: alletrastoragemp-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Alletra 5000/6000 apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: alletra6000-csp-svc servicePort: \"8080\" backend: 192.168.1.110 username: admin password: admin HPE Alletra 9000 apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: alletra9000-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Nimble Storage apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: nimble-csp-svc servicePort: \"8080\" backend: 192.168.1.2 username: admin password: admin HPE Primera and 3PAR apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: 3paradm password: 3pardata Create the Secret using kubectl : kubectl create -f secret.yaml Tip In a real world scenario it's more practical to name the Secret something that makes sense for the organization. It could be the hostname of the backend or the role it carries, i.e \"hpe-alletra-sanjose-prod\". Next step involves creating a default StorageClass . 
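As a minimal sketch of that default StorageClass, assuming the \"hpe-backend\" Secret and \"hpe-storage\" Namespace created above, the example below mirrors the Secret reference parameters used by the custom StorageClass shown later in this section. The name \"hpe-standard\" and the is-default-class annotation are illustrative choices rather than required values, and the parameters should be adjusted per the requirements of the respective CSP.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
  annotations:
    # Marks this StorageClass as the cluster default so PVCs without a storageClassName use it
    storageclass.kubernetes.io/is-default-class: \"true\"
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: \"Volume created with the default StorageClass and the HPE CSI Driver for Kubernetes\"
reclaimPolicy: Delete
allowVolumeExpansion: true

Create it with kubectl create -f storageclass.yaml ; any PersistentVolumeClaim that omits storageClassName will then be provisioned from this StorageClass.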
Configuring Additional Storage Backends \u00b6 It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass and Secret API objects to represent an environment with multiple systems. There's a brief tutorial available in the Video Gallery that walks through these steps. Note Make note of the Kubernetes Namespace or OpenShift project name used during the deployment. In the following examples, we will be using the \"hpe-storage\" Namespace . To view the current Secrets in the \"hpe-storage\" Namespace (assuming default names): kubectl -n hpe-storage get secret/hpe-backend NAME TYPE DATA AGE hpe-backend Opaque 5 2m This Secret is used by the CSI sidecars in the StorageClass to authenticate to a specific backend for CSI operations. In order to add a new Secret or manage access to multiple backends, additional Secrets will need to be created per backend. Secret Requirements Each Secret name must be unique. servicePort should be set to 8080 . To create a new Secret , specify the name, Namespace , backend username, backend password and the backend IP address to be used by the CSP and save it as custom-secret.yaml (a detailed description of the parameters are available above ). HPE Alletra Storage MP apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: alletrastoragemp-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Alletra 5000/6000 apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: alletra6000-csp-svc servicePort: \"8080\" backend: 192.168.1.110 username: admin password: admin HPE Alletra 9000 apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: alletra9000-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Nimble Storage apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: nimble-csp-svc servicePort: \"8080\" backend: 192.168.1.2 username: admin password: admin HPE Primera and 3PAR apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: 3paradm password: 3pardata Create the Secret using kubectl : kubectl create -f custom-secret.yaml You should now see the Secret in the \"hpe-storage\" Namespace : kubectl -n hpe-storage get secret/custom-secret NAME TYPE DATA AGE custom-secret Opaque 5 1m Create a StorageClass with the Custom Secret \u00b6 To use the new Secret \"custom-secret\", create a new StorageClass using the Secret and the necessary StorageClass parameters. Please see the requirements section of the respective CSP . 
K8s 1.15+ apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-custom provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: custom-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes\" reclaimPolicy: Delete allowVolumeExpansion: true K8s 1.14 apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-custom provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/resizer-secret-name: custom-secret csi.storage.k8s.io/resizer-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes\" reclaimPolicy: Delete allowVolumeExpansion: true Note Don't forget to call out the StorageClass explicitly when creating PVCs from non-default StorageClasses . Next, Create a PersistentVolumeClaim from a StorageClass . Advanced Install \u00b6 This guide is primarily written to accommodate a highly manual installation on upstream Kubernetes or partner OEMs engaged with HPE to bundle the HPE CSI Driver in a custom distribution. Installation steps may vary for different vendors and flavors of Kubernetes. The following example walks through deployment of the latest CSI driver. Critical It's highly recommended to use either the Helm chart or Operator to install the HPE CSI Driver for Kubernetes and the associated Container Storage Providers. Only venture down manual installation if your requirements can't be met by the Helm chart or Operator . Manual CSI Driver Install \u00b6 Deploy the CSI driver and sidecars for the relevant Kubernetes version. Uninstalling the CSI driver when installed manually The manifests below create a number of objects, including CustomResourceDefinitions (CRDs) which may hold critical information about storage resources. Simply deleting the below manifests in order to uninstall the CSI driver may render PersistentVolumes unusable. Common \u00b6 These object configuration files are common for all versions of Kubernetes. All components below are deployed in the \"hpe-storage\" Namespace . 
kubectl create ns hpe-storage Worker node IO settings and common CRDs : kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-volumegroup-snapshotgroup-crds.yaml Container Storage Provider: HPE Alletra 5000/6000 and Nimble Storage kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml HPE Alletra Storage MP, HPE Alletra 9000, Primera and 3PAR kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-crd.yaml Important The above instructions assumes you have an array with a supported platform OS installed. Please see the requirements section of the respective CSP . Install the CSI driver: Kubernetes 1.29 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml Kubernetes 1.28 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml Kubernetes 1.27 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml Kubernetes 1.26 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml Seealso Older and unsupported versions of Kubernetes and the CSI driver are archived on this page . Depending on which version is being deployed, different API objects gets created. Next step: Add an HPE Storage Backend . Advanced Uninstall \u00b6 The following steps outline how to uninstall the CSI driver that has been deployed using the Advanced Install above. Uninstall Worker node settings: kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml Uninstall relevant Container Storage Provider: HPE Alletra 5000/6000 and Nimble Storage kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR users If you are reinstalling the HPE CSI Driver, DO NOT remove the crd/hpevolumeinfos.storage.hpe.com resource. This CustomResourceDefinition contains important volume metadata used by the HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR CSP. HPE CSI Driver v2.0.0 and below share the same YAML file for crds and CSP and would require a manual removal of the individual Service and Deployment in the \"hpe-storage\" Namespace . 
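As a quick sanity check around the warning above (assuming nothing beyond kubectl access to the cluster), the volume metadata CustomResourceDefinition can be verified to still be present before and after removing the CSP:

# List the HPE CRDs; hpevolumeinfos.storage.hpe.com must remain if the driver will be reinstalled
kubectl get crd | grep hpe.com

# Or check the specific CRD called out in the warning
kubectl get crd/hpevolumeinfos.storage.hpe.com

If the CRD has been removed on a cluster that previously ran the driver, the volume metadata it held is not restored by simply recreating an empty CRD.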
Uninstall the CSI driver: Kubernetes 1.29 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml Kubernetes 1.28 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml Kubernetes 1.27 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml Kubernetes 1.26 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml If no longer needed, delete the \"hpe-storage\" Namespace . kubectl delete ns hpe-storage Downgrading the CSI driver \u00b6 Downgrading the CSI driver is currently not supported. It will work between certain minor versions. HPE does not test or document procedures to downgrade between incompatible versions.","title":"Deployment"},{"location":"csi_driver/deployment.html#overview","text":"The HPE CSI Driver is deployed by using industry standard means, either a Helm chart or an Operator. An \"advanced install\" from object configuration files is provided as reference for partners, OEMs and users wanting to perform customizations and their own packaging or deployment methodologies. Overview Delivery Vehicles Need Help Deciding? Helm Helm for Air-gapped Environments Version 2.5.0 and newer Operator Red Hat OpenShift Container Platform Upstream Kubernetes and Others Add an HPE Storage Backend Secret Parameters Configuring Additional Storage Backends Create a StorageClass with the Custom Secret Advanced Install Manual CSI Driver Install Common Advanced Uninstall Downgrading the CSI driver","title":"Overview"},{"location":"csi_driver/deployment.html#delivery_vehicles","text":"As different methods of installation are provided, it might not be too obvious which delivery vehicle is the right one.","title":"Delivery Vehicles"},{"location":"csi_driver/deployment.html#need_help_deciding","text":"I have a... Then you need... Vanilla upstream Kubernetes cluster on a supported host OS. The Helm chart Red Hat OpenShift 4.x cluster. The certified CSI operator for OpenShift Supported environment with multiple backends. Helm chart with additional Secrets and StorageClasses HPE Ezmeral Runtime Enterprise environment. The Helm chart Operator Life-cycle Manager (OLM) environment. The CSI operator Unsupported host OS/Kubernetes cluster and like to tinker. The advanced install Supported platform in an air-gapped environment The Helm chart using the air-gapped procedure Undecided? If it's not clear what you should use for your environment, the Helm chart is most likely the correct answer.","title":"Need Help Deciding?"},{"location":"csi_driver/deployment.html#helm","text":"Helm is the package manager for Kubernetes. Software is being delivered in a format designated as a \"chart\". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file. The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub . The chart only supports Helm 3 from version 1.3.0 of the HPE CSI Driver. In an effort to avoid duplicate documentation, please see the chart for instructions on how to deploy the CSI driver using Helm. Go to the chart on Artifact Hub .","title":"Helm"},{"location":"csi_driver/deployment.html#helm_for_air-gapped_environments","text":"In the event of deploying the HPE CSI Driver in a secure air-gapped environment, Helm is the recommended method. 
For sake of completeness, it's also possible to follow the advanced install procedures and replace \"quay.io\" in the deployment manifests with the internal private registry location. Establish a working directory on a bastion Linux host that has HTTP access to the Internet, the private registry and the Kubernetes cluster where the CSI driver needs to be installed. The bastion host is assumed to have the docker , helm and curl command installed. It's also assumed throughout that the user executing docker has logged in to the private registry and that pulling images from the private registry is allowed anonymously by the Kubernetes compute nodes. Note Only the HPE CSI Driver 1.4.0 and later is supported using this methodology. Create a working directory and set environment variables referenced throughout the procedure. In this example, we'll use HPE CSI Driver v2.5.0 on Kubernetes 1.30. Available versions are found in the co-deployments GitHub repo . mkdir hpe-csi-driver cd hpe-csi-driver export MY_REGISTRY=registry.enterprise.example.com export MY_CSI_DRIVER=2.5.0 export MY_K8S=1.30 Next, create a list with the CSI driver images. Copy and paste the entire text blob in one chunk. curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/hpe-csi-k8s-${MY_K8S}.yaml \\ https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/nimble-csp.yaml \\ https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v${MY_CSI_DRIVER}/3par-primera-csp.yaml \\ | grep image: | awk '{print $2}' | sort | uniq > images echo quay.io/hpestorage/nfs-provisioner:v3.0.5 >> images Important In HPE CSI Driver 2.4.2 and earlier the NFS Server Provisioner image is not automatically pulled from the private registry once installed. Use the \"nfsProvisionerImage\" parameter in the StorageClass . The above command should not output anything. A list of images should be in the file \"images\". Pull, tag and push the images to the private registry. cat images | xargs -n 1 docker pull awk '{ print $1\" \"$1 }' images | sed -E -e \"s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/\" | xargs -n 2 docker tag sed -E -e \"s/quay.io|registry.k8s.io/${MY_REGISTRY}/\" images | xargs -n 1 docker push Tip Depending on what kind of private registry being used, the base repositories hpestorage and sig-storage might need to be created and given write access to the user pushing the images. Next, install the chart as normal with the additional registry parameter. This is an example, please refer to the Helm chart documentation on ArtifactHub. helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/ kubectl create ns hpe-storage Version 2.4.2 or earlier. helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage --version ${MY_CSI_DRIVER} --set registry=${MY_REGISTRY} Version 2.5.0 or newer, skip to \u2192 Version 2.5.0 and newer . Note If the client running helm is in the air-gapped environment as well, the docs directory needs to be hosted on a web server in the air-gapped environment, and then use helm repo add hpe-storage https://my-web-server.internal/docs above instead.","title":"Helm for Air-gapped Environments"},{"location":"csi_driver/deployment.html#version_250_and_newer","text":"In version 2.5.0 and onwards, all images used by the HPE CSI Driver for Kubernetes Helm Chart are parameterized individually with the fully qualified URL. 
Use the procedure above to mirror the images to an internal registry. Once mirrored, replace the registry names in the reference values.yaml file. curl -s https://raw.githubusercontent.com/hpe-storage/co-deployments/master/helm/values/csi-driver/v${MY_CSI_DRIVER}/values.yaml | sed -E -e \"s/ quay.io| registry.k8s.io/ ${MY_REGISTRY}/g\" > my-values.yaml Use the my-values.yaml file to install the Helm Chart. helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver \\ -n hpe-storage --version ${MY_CSI_DRIVER} \\ -f my-values.yaml","title":"Version 2.5.0 and newer"},{"location":"csi_driver/deployment.html#operator","text":"The Operator pattern is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes. The official HPE CSI Operator for Kubernetes is hosted on OperatorHub.io . The CSI Operator images are hosted both on quay.io and officially certified containers in the Red Hat Ecosystem Catalog.","title":"Operator"},{"location":"csi_driver/deployment.html#red_hat_openshift_container_platform","text":"The HPE CSI Operator for Kubernetes is a fully certified Operator for OpenShift. There are a few tweaks needed and there's a separate section for OpenShift. See Red Hat OpenShift in the partner ecosystem section","title":"Red Hat OpenShift Container Platform"},{"location":"csi_driver/deployment.html#upstream_kubernetes_and_others","text":"Follow the documentation from the respective upstream distributions on how to deploy an Operator. In most cases, the Operator Lifecyle Manager (OLM) needs to be installed separately (does NOT apply to OpenShift 4 and later). Visit the documentation in the OLM GitHub repo to learn how to install OLM. Once OLM is operational, install the HPE CSI Operator. kubectl create -f https://operatorhub.io/install/hpe-csi-operator.yaml The Operator will be installed in my-hpe-csi-operator namespace. Watch it come up by inspecting the ClusterServiceVersion (CSV). kubectl get csv -n my-hpe-csi-operator Next, a HPECSIDriver object needs to be instantiated. Create a file named hpe-csi-operator.yaml , edit and apply (or copy the command from the top of the content). 
HPE CSI Operator v2.5.1 # kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableHostDeletion: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false disableNodeMonitor: false imagePullPolicy: IfNotPresent images: csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1 csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7 csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1 csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6 csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6 csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6 nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5 nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0 primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0 iscsi: chapSecretName: \"\" kubeletRootDir: /var/lib/kubelet logLevel: info node: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] v2.4.2 # kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} tolerations: [] registry: quay.io v2.4.1 # kubectl apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} 
nodeSelector: {} tolerations: [] registry: quay.io Tip The contents depends on which version of the CSI driver is installed. Please visit OperatorHub or ArtifactHub for more details. The CSI driver is now ready for use. Proceed to the next section to learn about adding an HPE storage backend .","title":"Upstream Kubernetes and Others"},{"location":"csi_driver/deployment.html#add_an_hpe_storage_backend","text":"Once the CSI driver is deployed, two additional objects needs to be created to get started with dynamic provisioning of persistent storage, a Secret and a StorageClass . Tip Naming the Secret and StorageClass is entirely up to the user, however, to keep up with the examples on SCOD, it's highly recommended to use the names illustrated here.","title":"Add an HPE Storage Backend"},{"location":"csi_driver/deployment.html#secret_parameters","text":"All parameters are mandatory and described below. Parameter Description serviceName This hostname or IP address where the Container Storage Provider (CSP) is running, usually a Kubernetes Service , such as \"alletra6000-csp-svc\" or \"alletra9000-csp-svc\" servicePort This is port the serviceName is listening to. backend This is the management hostname or IP address of the actual backend storage system, such as an Alletra 5000/6000 or 9000 array. username Backend storage system username with the correct privileges to perform storage management. password Backend storage system password. Example: HPE Alletra Storage MP apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: alletrastoragemp-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Alletra 5000/6000 apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: alletra6000-csp-svc servicePort: \"8080\" backend: 192.168.1.110 username: admin password: admin HPE Alletra 9000 apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: alletra9000-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Nimble Storage apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: nimble-csp-svc servicePort: \"8080\" backend: 192.168.1.2 username: admin password: admin HPE Primera and 3PAR apiVersion: v1 kind: Secret metadata: name: hpe-backend namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: 3paradm password: 3pardata Create the Secret using kubectl : kubectl create -f secret.yaml Tip In a real world scenario it's more practical to name the Secret something that makes sense for the organization. It could be the hostname of the backend or the role it carries, i.e \"hpe-alletra-sanjose-prod\". Next step involves creating a default StorageClass .","title":"Secret Parameters"},{"location":"csi_driver/deployment.html#configuring_additional_storage_backends","text":"It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass and Secret API objects to represent an environment with multiple systems. There's a brief tutorial available in the Video Gallery that walks through these steps. Note Make note of the Kubernetes Namespace or OpenShift project name used during the deployment. 
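If the Namespace isn't known, it can usually be found by listing where the CSI controller Pod runs, for example (the app label is the same one used by the sanity checks in the diagnostics section):

kubectl get pods --all-namespaces -l app=hpe-csi-controller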
In the following examples, we will be using the \"hpe-storage\" Namespace . To view the current Secrets in the \"hpe-storage\" Namespace (assuming default names): kubectl -n hpe-storage get secret/hpe-backend NAME TYPE DATA AGE hpe-backend Opaque 5 2m This Secret is used by the CSI sidecars in the StorageClass to authenticate to a specific backend for CSI operations. In order to add a new Secret or manage access to multiple backends, additional Secrets will need to be created per backend. Secret Requirements Each Secret name must be unique. servicePort should be set to 8080 . To create a new Secret , specify the name, Namespace , backend username, backend password and the backend IP address to be used by the CSP and save it as custom-secret.yaml (a detailed description of the parameters are available above ). HPE Alletra Storage MP apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: alletrastoragemp-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Alletra 5000/6000 apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: alletra6000-csp-svc servicePort: \"8080\" backend: 192.168.1.110 username: admin password: admin HPE Alletra 9000 apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: alletra9000-csp-svc servicePort: \"8080\" backend: 10.10.0.20 username: 3paradm password: 3pardata HPE Nimble Storage apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: nimble-csp-svc servicePort: \"8080\" backend: 192.168.1.2 username: admin password: admin HPE Primera and 3PAR apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: 3paradm password: 3pardata Create the Secret using kubectl : kubectl create -f custom-secret.yaml You should now see the Secret in the \"hpe-storage\" Namespace : kubectl -n hpe-storage get secret/custom-secret NAME TYPE DATA AGE custom-secret Opaque 5 1m","title":"Configuring Additional Storage Backends"},{"location":"csi_driver/deployment.html#create_a_storageclass_with_the_custom_secret","text":"To use the new Secret \"custom-secret\", create a new StorageClass using the Secret and the necessary StorageClass parameters. Please see the requirements section of the respective CSP . 
K8s 1.15+ apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-custom provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: custom-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes\" reclaimPolicy: Delete allowVolumeExpansion: true K8s 1.14 apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-custom provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/resizer-secret-name: custom-secret csi.storage.k8s.io/resizer-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by using a custom Secret with the HPE CSI Driver for Kubernetes\" reclaimPolicy: Delete allowVolumeExpansion: true Note Don't forget to call out the StorageClass explicitly when creating PVCs from non-default StorageClasses . Next, Create a PersistentVolumeClaim from a StorageClass .","title":"Create a StorageClass with the Custom Secret"},{"location":"csi_driver/deployment.html#advanced_install","text":"This guide is primarily written to accommodate a highly manual installation on upstream Kubernetes or partner OEMs engaged with HPE to bundle the HPE CSI Driver in a custom distribution. Installation steps may vary for different vendors and flavors of Kubernetes. The following example walks through deployment of the latest CSI driver. Critical It's highly recommended to use either the Helm chart or Operator to install the HPE CSI Driver for Kubernetes and the associated Container Storage Providers. Only venture down manual installation if your requirements can't be met by the Helm chart or Operator .","title":"Advanced Install"},{"location":"csi_driver/deployment.html#manual_csi_driver_install","text":"Deploy the CSI driver and sidecars for the relevant Kubernetes version. Uninstalling the CSI driver when installed manually The manifests below create a number of objects, including CustomResourceDefinitions (CRDs) which may hold critical information about storage resources. Simply deleting the below manifests in order to uninstall the CSI driver may render PersistentVolumes unusable.","title":"Manual CSI Driver Install"},{"location":"csi_driver/deployment.html#common","text":"These object configuration files are common for all versions of Kubernetes. All components below are deployed in the \"hpe-storage\" Namespace . 
kubectl create ns hpe-storage Worker node IO settings and common CRDs : kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-volumegroup-snapshotgroup-crds.yaml Container Storage Provider: HPE Alletra 5000/6000 and Nimble Storage kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml HPE Alletra Storage MP, HPE Alletra 9000, Primera and 3PAR kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-crd.yaml Important The above instructions assumes you have an array with a supported platform OS installed. Please see the requirements section of the respective CSP . Install the CSI driver: Kubernetes 1.29 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml Kubernetes 1.28 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml Kubernetes 1.27 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml Kubernetes 1.26 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml Seealso Older and unsupported versions of Kubernetes and the CSI driver are archived on this page . Depending on which version is being deployed, different API objects gets created. Next step: Add an HPE Storage Backend .","title":"Common"},{"location":"csi_driver/deployment.html#advanced_uninstall","text":"The following steps outline how to uninstall the CSI driver that has been deployed using the Advanced Install above. Uninstall Worker node settings: kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-linux-config.yaml Uninstall relevant Container Storage Provider: HPE Alletra 5000/6000 and Nimble Storage kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/nimble-csp.yaml HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/3par-primera-csp.yaml HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR users If you are reinstalling the HPE CSI Driver, DO NOT remove the crd/hpevolumeinfos.storage.hpe.com resource. This CustomResourceDefinition contains important volume metadata used by the HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR CSP. HPE CSI Driver v2.0.0 and below share the same YAML file for crds and CSP and would require a manual removal of the individual Service and Deployment in the \"hpe-storage\" Namespace . 
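As an illustration only (the object names are assumptions matching the defaults used elsewhere in this guide), such a manual removal would look similar to:

kubectl delete -n hpe-storage svc/primera3par-csp-svc
kubectl delete -n hpe-storage deploy/primera3par-csp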
Uninstall the CSI driver: Kubernetes 1.29 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.29.yaml Kubernetes 1.28 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.28.yaml Kubernetes 1.27 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.27.yaml Kubernetes 1.26 kubectl delete -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.2/hpe-csi-k8s-1.26.yaml If no longer needed, delete the \"hpe-storage\" Namespace . kubectl delete ns hpe-storage","title":"Advanced Uninstall"},{"location":"csi_driver/deployment.html#downgrading_the_csi_driver","text":"Downgrading the CSI driver is currently not supported. It will work between certain minor versions. HPE does not test or document procedures to downgrade between incompatible versions.","title":"Downgrading the CSI driver"},{"location":"csi_driver/diagnostics.html","text":"Introduction \u00b6 It's recommended to familiarize yourself with inspecting workloads on Kubernetes. This particular cheat sheet is very useful to have readily available. Sanity Checks \u00b6 Once the CSI driver has been deployed either through object configuration files, Helm or an Operator. This view should be representative of what a healthy system should look like after install. If any of the workload deployments lists anything but Running , proceed to inspect the logs of the problematic workload. HPE Alletra 5000/6000 and Nimble Storage kubectl get pods --all-namespaces -l 'app in (nimble-csp, hpe-csi-node, hpe-csi-controller)' NAMESPACE NAME READY STATUS RESTARTS AGE hpe-storage hpe-csi-controller-7d9cd6b855-zzmd9 9/9 Running 0 15s hpe-storage hpe-csi-node-dk5t4 2/2 Running 0 15s hpe-storage hpe-csi-node-pwq2d 2/2 Running 0 15s hpe-storage nimble-csp-546c9c4dd4-5lsdt 1/1 Running 0 15s HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR kubectl get pods --all-namespaces -l 'app in (primera3par-csp, hpe-csi-node, hpe-csi-controller)' NAMESPACE NAME READY STATUS RESTARTS AGE hpe-storage hpe-csi-controller-7d9cd6b855-fqppd 9/9 Running 0 14s hpe-storage hpe-csi-node-86kh6 2/2 Running 0 14s hpe-storage hpe-csi-node-k8p4p 2/2 Running 0 14s hpe-storage hpe-csi-node-r2mg8 2/2 Running 0 14s hpe-storage hpe-csi-node-vwb5r 2/2 Running 0 14s hpe-storage primera3par-csp-546c9c4dd4-bcwc6 1/1 Running 0 14s A Custom Resource Definition (CRD) named hpenodeinfos.storage.hpe.com holds important network and host initiator information. Retrieve list of nodes. kubectl get hpenodeinfos $ kubectl get hpenodeinfos NAME AGE tme-lnx-worker1 57m tme-lnx-worker3 57m tme-lnx-worker2 57m tme-lnx-worker4 57m Inspect a node. 
kubectl get hpenodeinfos/tme-lnx-worker1 -o yaml apiVersion: storage.hpe.com/v1 kind: HPENodeInfo metadata: creationTimestamp: \"2020-08-24T23:50:09Z\" generation: 1 managedFields: - apiVersion: storage.hpe.com/v1 fieldsType: FieldsV1 fieldsV1: f:spec: .: {} f:chap_password: {} f:chap_user: {} f:iqns: {} f:networks: {} f:uuid: {} manager: csi-driver operation: Update time: \"2020-08-24T23:50:09Z\" name: tme-lnx-worker1 resourceVersion: \"30337986\" selfLink: /apis/storage.hpe.com/v1/hpenodeinfos/tme-lnx-worker1 uid: 3984752b-29ac-48de-8ca0-8381532cbf06 spec: chap_password: RGlkIHlvdSByZWFsbHkgZGVjb2RlIHRoaXM/ chap_user: chap-user iqns: - iqn.1994-05.com.redhat:828e7a4eef40 networks: - 10.2.2.2/16 - 172.16.6.115/24 - 172.16.8.115/24 - 172.17.0.1/16 - 10.1.1.0/12 uuid: 0242f811-3995-746d-652d-6c6e78352d77 NFS Server Provisioner Resources \u00b6 The NFS Server Provisioner consists of a number of Kubernetes resources per PVC. The default Namespace where the resources are deployed is \"hpe-nfs\" but is configurable in the StorageClass . See base StorageClass parameters for more details. Object Name Purpose ConfigMap hpe-nfs-config This ConfigMap holds the configuration file for the NFS server. Local tweaks may be wanted. Please see the config file reference for more details. Deployment hpe-nfs-UID The Deployment that is running the NFS Pod . Service hpe-nfs-UID The Service the NFS clients perform mounts against. PVC hpe-nfs-UID The RWO claim serving the NFS workload. Tip The UID stems from the user request RWX PVC for easy tracking. Use kubectl get pvc/my-pvc -o jsonpath='{.metadata.uid}{\"\\n\"}' to retrieve it. Tracing NFS resources \u00b6 When troubleshooting NFS deployments it's common that only the source RWX PVC and Namespace is known. The next few steps explains how resources can be easily traced. Retrieve the \"hpe-nfs-UID\" from the NFS Pod by specifying PVC and Namespace of the RWX PVC : kubectl get pods -l provisioned-by=my-pvc,provisioned-from=my-namespace -A -o jsonpath='{.items[].metadata.labels.app}{\"\\n\"}' Next, enumerate the resources from the \"hpe-nfs-UID\": kubectl get pvc,svc,deploy -A -o name --field-selector metadata.name=hpe-nfs-UID Example output: persistentvolumeclaim/hpe-nfs-UID service/hpe-nfs-UID deployment.apps/hpe-nfs-UID If only the PV name is known, looking from the backend storage perspective, the PV name (and .spec.claimRef.uid ) contains the UID, for example: \"pvc-UID\". Clarification The hpe-nfs-UID is abbreviated, it will contain a real UID added on, for example \"hpe-nfs-98ce7c80-13f9-45d0-9609-089227bf97f1\". Volume and Snapshot Groups \u00b6 If there's issues with VolumeSnapshots not being created when performing SnapshotGroup snapshots, checking the logs of the \"csi-volume-group-provisioner\" and \"csi-volume-group-snapshotter\" in the \"hpe-csi-controller\" Deployment . kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-provisioner kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-snapshotter Logging \u00b6 Log files associated with the HPE CSI Driver logs data to the standard output stream. If the logs need to be retained for long term, use a standard logging solution for Kubernetes such as Fluentd. Some of the logs on the host are persisted which follow standard logrotate policies. 
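If a driver container has restarted, the logs of the previous container instance can usually be retrieved by adding the standard kubectl --previous flag to the commands below, for example:

kubectl logs --previous daemonset.apps/hpe-csi-node hpe-csi-driver -n hpe-storage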
CSI Driver Logs \u00b6 Node driver: kubectl logs -f daemonset.apps/hpe-csi-node hpe-csi-driver -n hpe-storage Controller driver: kubectl logs -f deployment.apps/hpe-csi-controller hpe-csi-driver -n hpe-storage Tip The logs for both node and controller drivers are persisted at /var/log/hpe-csi.log Log Level \u00b6 Log levels for both CSI Controller and Node driver can be controlled using LOG_LEVEL environment variable. Possible values are info , warn , error , debug , and trace . Apply the changes using kubectl apply -f command after adding this to CSI controller and node container spec as below. For Helm charts this is controlled through logLevel variable in values.yaml . env: - name: LOG_LEVEL value: trace CSP Logs \u00b6 CSP logs can be accessed from their respective services. HPE Alletra 5000/6000 and Nimble Storage kubectl logs -f deploy/nimble-csp -n hpe-storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR kubectl logs -f deploy/primera3par-csp -n hpe-storage Log Collector \u00b6 Log collector script hpe-logcollector.sh can be used to collect the logs from any node which has kubectl access to the cluster. curl -O https://raw.githubusercontent.com/hpe-storage/csi-driver/master/hpe-logcollector.sh chmod 555 hpe-logcollector.sh Usage: ./hpe-logcollector.sh -h Collect HPE storage diagnostic logs using kubectl. Usage: hpe-logcollector.sh [-h|--help] [--node-name NODE_NAME] \\ [-n|--namespace NAMESPACE] [-a|--all] Options: -h|--help Print this usage text --node-name NODE_NAME Collect logs only for Kubernetes node NODE_NAME -n|--namespace NAMESPACE Collect logs from HPE CSI deployment in namespace NAMESPACE (default: kube-system) -a|--all Collect logs from all nodes (the default) Tuning \u00b6 HPE provides a set of well tested defaults for the CSI driver and all the supported CSPs. In certain case it may be necessary to fine tune the CSI driver to accommodate a certain workload or behavior. Data Path Configuration \u00b6 The HPE CSI Driver for Kubernetes automatically configures Linux iSCSI/multipath settings based on config.json . In order to tune these values, edit the config map with kubectl edit configmap hpe-linux-config -n hpe-storage and restart node plugin using kubectl delete pod -l app=hpe-csi-node to apply. Important HPE provide a set of general purpose default values for the IO paths, tuning is only required if prescribed by HPE.","title":"Diagnostics"},{"location":"csi_driver/diagnostics.html#introduction","text":"It's recommended to familiarize yourself with inspecting workloads on Kubernetes. This particular cheat sheet is very useful to have readily available.","title":"Introduction"},{"location":"csi_driver/diagnostics.html#sanity_checks","text":"Once the CSI driver has been deployed either through object configuration files, Helm or an Operator. This view should be representative of what a healthy system should look like after install. If any of the workload deployments lists anything but Running , proceed to inspect the logs of the problematic workload. 
HPE Alletra 5000/6000 and Nimble Storage kubectl get pods --all-namespaces -l 'app in (nimble-csp, hpe-csi-node, hpe-csi-controller)' NAMESPACE NAME READY STATUS RESTARTS AGE hpe-storage hpe-csi-controller-7d9cd6b855-zzmd9 9/9 Running 0 15s hpe-storage hpe-csi-node-dk5t4 2/2 Running 0 15s hpe-storage hpe-csi-node-pwq2d 2/2 Running 0 15s hpe-storage nimble-csp-546c9c4dd4-5lsdt 1/1 Running 0 15s HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR kubectl get pods --all-namespaces -l 'app in (primera3par-csp, hpe-csi-node, hpe-csi-controller)' NAMESPACE NAME READY STATUS RESTARTS AGE hpe-storage hpe-csi-controller-7d9cd6b855-fqppd 9/9 Running 0 14s hpe-storage hpe-csi-node-86kh6 2/2 Running 0 14s hpe-storage hpe-csi-node-k8p4p 2/2 Running 0 14s hpe-storage hpe-csi-node-r2mg8 2/2 Running 0 14s hpe-storage hpe-csi-node-vwb5r 2/2 Running 0 14s hpe-storage primera3par-csp-546c9c4dd4-bcwc6 1/1 Running 0 14s A Custom Resource Definition (CRD) named hpenodeinfos.storage.hpe.com holds important network and host initiator information. Retrieve list of nodes. kubectl get hpenodeinfos $ kubectl get hpenodeinfos NAME AGE tme-lnx-worker1 57m tme-lnx-worker3 57m tme-lnx-worker2 57m tme-lnx-worker4 57m Inspect a node. kubectl get hpenodeinfos/tme-lnx-worker1 -o yaml apiVersion: storage.hpe.com/v1 kind: HPENodeInfo metadata: creationTimestamp: \"2020-08-24T23:50:09Z\" generation: 1 managedFields: - apiVersion: storage.hpe.com/v1 fieldsType: FieldsV1 fieldsV1: f:spec: .: {} f:chap_password: {} f:chap_user: {} f:iqns: {} f:networks: {} f:uuid: {} manager: csi-driver operation: Update time: \"2020-08-24T23:50:09Z\" name: tme-lnx-worker1 resourceVersion: \"30337986\" selfLink: /apis/storage.hpe.com/v1/hpenodeinfos/tme-lnx-worker1 uid: 3984752b-29ac-48de-8ca0-8381532cbf06 spec: chap_password: RGlkIHlvdSByZWFsbHkgZGVjb2RlIHRoaXM/ chap_user: chap-user iqns: - iqn.1994-05.com.redhat:828e7a4eef40 networks: - 10.2.2.2/16 - 172.16.6.115/24 - 172.16.8.115/24 - 172.17.0.1/16 - 10.1.1.0/12 uuid: 0242f811-3995-746d-652d-6c6e78352d77","title":"Sanity Checks"},{"location":"csi_driver/diagnostics.html#nfs_server_provisioner_resources","text":"The NFS Server Provisioner consists of a number of Kubernetes resources per PVC. The default Namespace where the resources are deployed is \"hpe-nfs\" but is configurable in the StorageClass . See base StorageClass parameters for more details. Object Name Purpose ConfigMap hpe-nfs-config This ConfigMap holds the configuration file for the NFS server. Local tweaks may be wanted. Please see the config file reference for more details. Deployment hpe-nfs-UID The Deployment that is running the NFS Pod . Service hpe-nfs-UID The Service the NFS clients perform mounts against. PVC hpe-nfs-UID The RWO claim serving the NFS workload. Tip The UID stems from the user request RWX PVC for easy tracking. Use kubectl get pvc/my-pvc -o jsonpath='{.metadata.uid}{\"\\n\"}' to retrieve it.","title":"NFS Server Provisioner Resources"},{"location":"csi_driver/diagnostics.html#tracing_nfs_resources","text":"When troubleshooting NFS deployments it's common that only the source RWX PVC and Namespace is known. The next few steps explains how resources can be easily traced. 
Retrieve the \"hpe-nfs-UID\" from the NFS Pod by specifying PVC and Namespace of the RWX PVC : kubectl get pods -l provisioned-by=my-pvc,provisioned-from=my-namespace -A -o jsonpath='{.items[].metadata.labels.app}{\"\\n\"}' Next, enumerate the resources from the \"hpe-nfs-UID\": kubectl get pvc,svc,deploy -A -o name --field-selector metadata.name=hpe-nfs-UID Example output: persistentvolumeclaim/hpe-nfs-UID service/hpe-nfs-UID deployment.apps/hpe-nfs-UID If only the PV name is known, looking from the backend storage perspective, the PV name (and .spec.claimRef.uid ) contains the UID, for example: \"pvc-UID\". Clarification The hpe-nfs-UID is abbreviated, it will contain a real UID added on, for example \"hpe-nfs-98ce7c80-13f9-45d0-9609-089227bf97f1\".","title":"Tracing NFS resources"},{"location":"csi_driver/diagnostics.html#volume_and_snapshot_groups","text":"If there's issues with VolumeSnapshots not being created when performing SnapshotGroup snapshots, checking the logs of the \"csi-volume-group-provisioner\" and \"csi-volume-group-snapshotter\" in the \"hpe-csi-controller\" Deployment . kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-provisioner kubectl logs -n hpe-storage deploy/hpe-csi-controller csi-volume-group-snapshotter","title":"Volume and Snapshot Groups"},{"location":"csi_driver/diagnostics.html#logging","text":"Log files associated with the HPE CSI Driver logs data to the standard output stream. If the logs need to be retained for long term, use a standard logging solution for Kubernetes such as Fluentd. Some of the logs on the host are persisted which follow standard logrotate policies.","title":"Logging"},{"location":"csi_driver/diagnostics.html#csi_driver_logs","text":"Node driver: kubectl logs -f daemonset.apps/hpe-csi-node hpe-csi-driver -n hpe-storage Controller driver: kubectl logs -f deployment.apps/hpe-csi-controller hpe-csi-driver -n hpe-storage Tip The logs for both node and controller drivers are persisted at /var/log/hpe-csi.log","title":"CSI Driver Logs"},{"location":"csi_driver/diagnostics.html#log_level","text":"Log levels for both CSI Controller and Node driver can be controlled using LOG_LEVEL environment variable. Possible values are info , warn , error , debug , and trace . Apply the changes using kubectl apply -f command after adding this to CSI controller and node container spec as below. For Helm charts this is controlled through logLevel variable in values.yaml . env: - name: LOG_LEVEL value: trace","title":"Log Level"},{"location":"csi_driver/diagnostics.html#csp_logs","text":"CSP logs can be accessed from their respective services. HPE Alletra 5000/6000 and Nimble Storage kubectl logs -f deploy/nimble-csp -n hpe-storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR kubectl logs -f deploy/primera3par-csp -n hpe-storage","title":"CSP Logs"},{"location":"csi_driver/diagnostics.html#log_collector","text":"Log collector script hpe-logcollector.sh can be used to collect the logs from any node which has kubectl access to the cluster. curl -O https://raw.githubusercontent.com/hpe-storage/csi-driver/master/hpe-logcollector.sh chmod 555 hpe-logcollector.sh Usage: ./hpe-logcollector.sh -h Collect HPE storage diagnostic logs using kubectl. 
Usage: hpe-logcollector.sh [-h|--help] [--node-name NODE_NAME] \\ [-n|--namespace NAMESPACE] [-a|--all] Options: -h|--help Print this usage text --node-name NODE_NAME Collect logs only for Kubernetes node NODE_NAME -n|--namespace NAMESPACE Collect logs from HPE CSI deployment in namespace NAMESPACE (default: kube-system) -a|--all Collect logs from all nodes (the default)","title":"Log Collector"},{"location":"csi_driver/diagnostics.html#tuning","text":"HPE provides a set of well tested defaults for the CSI driver and all the supported CSPs. In certain case it may be necessary to fine tune the CSI driver to accommodate a certain workload or behavior.","title":"Tuning"},{"location":"csi_driver/diagnostics.html#data_path_configuration","text":"The HPE CSI Driver for Kubernetes automatically configures Linux iSCSI/multipath settings based on config.json . In order to tune these values, edit the config map with kubectl edit configmap hpe-linux-config -n hpe-storage and restart node plugin using kubectl delete pod -l app=hpe-csi-node to apply. Important HPE provide a set of general purpose default values for the IO paths, tuning is only required if prescribed by HPE.","title":"Data Path Configuration"},{"location":"csi_driver/install_legacy.html","text":"Legacy Versions \u00b6 Older versions of the HPE CSI Driver for Kubernetes are kept here for reference. Check the CSI driver GitHub repo for the appropriate YAML files to declare on the cluster for the respective version of Kubernetes. Important The resources for CSPs, CRDs and ConfigMaps are available in each respective CSI driver version directory here . Use the below version mappings as reference. Kubernetes 1.25 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.0/hpe-csi-k8s-1.25.yaml Note Latest supported CSI driver version is 2.4.0 for Kubernetes 1.25. Kubernetes 1.24 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.24.yaml Note Latest supported CSI driver version is 2.3.0 for Kubernetes 1.24. Kubernetes 1.23 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.23.yaml Note Latest supported CSI driver version is 2.3.0 for Kubernetes 1.23. Kubernetes 1.22 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.22.yaml Note Latest supported CSI driver version is 2.2.0 for Kubernetes 1.22. Kubernetes 1.21 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.21.yaml Note Latest supported CSI driver version is 2.2.0 for Kubernetes 1.21. Kubernetes 1.20 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.1.1/hpe-csi-k8s-1.20.yaml Note Latest supported CSI driver version is 2.1.1 for Kubernetes 1.20. Kubernetes 1.19 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.19.yaml Note Latest supported CSI driver version is 2.0.0 for Kubernetes 1.19. Kubernetes 1.18 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.18.yaml Note Latest supported CSI driver version is 2.0.0 for Kubernetes 1.18. 
Kubernetes 1.17 \u00b6 kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v1.4.0/hpe-csi-k8s-1.17.yaml Note Latest supported CSI driver version is 1.4.0 for Kubernetes 1.17. Kubernetes 1.16 \u00b6 Object definitons for HPE CSI Driver for Kubernetes v1.3.0 Note Latest supported CSI driver version is 1.3.0 for Kubernetes 1.16. Kubernetes 1.15 \u00b6 Object definitons for HPE CSI Driver for Kubernetes v1.3.0 Note Latest supported CSI driver version is 1.3.0 for Kubernetes 1.15. Kubernetes 1.14 \u00b6 Object definitons for HPE CSI Driver for Kubernetes v1.2.0 Note Latest supported CSI driver version is 1.2.0 for Kubernetes 1.14. Kubernetes 1.13 \u00b6 Object definitons for HPE CSI Driver for Kubernetes v1.1.0 Note Latest supported CSI driver version is 1.1.0 for Kubernetes 1.13.","title":"Install legacy"},{"location":"csi_driver/install_legacy.html#legacy_versions","text":"Older versions of the HPE CSI Driver for Kubernetes are kept here for reference. Check the CSI driver GitHub repo for the appropriate YAML files to declare on the cluster for the respective version of Kubernetes. Important The resources for CSPs, CRDs and ConfigMaps are available in each respective CSI driver version directory here . Use the below version mappings as reference.","title":"Legacy Versions"},{"location":"csi_driver/install_legacy.html#kubernetes_125","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.4.0/hpe-csi-k8s-1.25.yaml Note Latest supported CSI driver version is 2.4.0 for Kubernetes 1.25.","title":"Kubernetes 1.25"},{"location":"csi_driver/install_legacy.html#kubernetes_124","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.24.yaml Note Latest supported CSI driver version is 2.3.0 for Kubernetes 1.24.","title":"Kubernetes 1.24"},{"location":"csi_driver/install_legacy.html#kubernetes_123","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.3.0/hpe-csi-k8s-1.23.yaml Note Latest supported CSI driver version is 2.3.0 for Kubernetes 1.23.","title":"Kubernetes 1.23"},{"location":"csi_driver/install_legacy.html#kubernetes_122","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.22.yaml Note Latest supported CSI driver version is 2.2.0 for Kubernetes 1.22.","title":"Kubernetes 1.22"},{"location":"csi_driver/install_legacy.html#kubernetes_121","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.2.0/hpe-csi-k8s-1.21.yaml Note Latest supported CSI driver version is 2.2.0 for Kubernetes 1.21.","title":"Kubernetes 1.21"},{"location":"csi_driver/install_legacy.html#kubernetes_120","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.1.1/hpe-csi-k8s-1.20.yaml Note Latest supported CSI driver version is 2.1.1 for Kubernetes 1.20.","title":"Kubernetes 1.20"},{"location":"csi_driver/install_legacy.html#kubernetes_119","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.19.yaml Note Latest supported CSI driver version is 2.0.0 for Kubernetes 1.19.","title":"Kubernetes 1.19"},{"location":"csi_driver/install_legacy.html#kubernetes_118","text":"kubectl apply -f 
https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v2.0.0/hpe-csi-k8s-1.18.yaml Note Latest supported CSI driver version is 2.0.0 for Kubernetes 1.18.","title":"Kubernetes 1.18"},{"location":"csi_driver/install_legacy.html#kubernetes_117","text":"kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/v1.4.0/hpe-csi-k8s-1.17.yaml Note Latest supported CSI driver version is 1.4.0 for Kubernetes 1.17.","title":"Kubernetes 1.17"},{"location":"csi_driver/install_legacy.html#kubernetes_116","text":"Object definitons for HPE CSI Driver for Kubernetes v1.3.0 Note Latest supported CSI driver version is 1.3.0 for Kubernetes 1.16.","title":"Kubernetes 1.16"},{"location":"csi_driver/install_legacy.html#kubernetes_115","text":"Object definitons for HPE CSI Driver for Kubernetes v1.3.0 Note Latest supported CSI driver version is 1.3.0 for Kubernetes 1.15.","title":"Kubernetes 1.15"},{"location":"csi_driver/install_legacy.html#kubernetes_114","text":"Object definitons for HPE CSI Driver for Kubernetes v1.2.0 Note Latest supported CSI driver version is 1.2.0 for Kubernetes 1.14.","title":"Kubernetes 1.14"},{"location":"csi_driver/install_legacy.html#kubernetes_113","text":"Object definitons for HPE CSI Driver for Kubernetes v1.1.0 Note Latest supported CSI driver version is 1.1.0 for Kubernetes 1.13.","title":"Kubernetes 1.13"},{"location":"csi_driver/metrics.html","text":"HPE CSI Info Metrics Provider for Prometheus \u00b6 The HPE CSI Driver for Kubernetes may be accompanied by a Prometheus metrics endpoint to provide metadata about the volumes provisioned by the CSI driver and supporting backends. It's conventionally deployed with HPE Storage Array Exporter for Prometheus to provide a richer set of metrics from the backend storage systems. HPE CSI Info Metrics Provider for Prometheus Metrics Provided Volume Info Backend Info Deployment Helm Rancher Advanced Install Version 1.0.3 Configuring Advanced Install Grafana Dashboards Metrics Provided \u00b6 The exporter provides two metrics, \"hpestoragecsi_volume_info\" and \"hpestoragecsi_backend_info\". Volume Info \u00b6 Metric Type Description Value hpestoragecsi_volume_info Gauge Indicates a volume whose provisioner is the HPE CSI Driver. 1 This metric includes the following labels. Label Description backend Backend hostname or IP address as defined in the Secret . pv PersistentVolume name. pvc PersistentVolumeClaim name. pvc_namespace PersistentVolumeClaim Namespace . storage_class StorageClass used to provision the PersistentVolume . volume Volume handle used by the backend storage system. Backend Info \u00b6 Metric Type Description Value hpestoragecsi_backend_info Gauge Indicates a storage system for which the HPE CSI driver is a provisioner. 1 This metric includes the following labels. Label Description backend Backend hostname or IP address as defined in the Secret . Deployment \u00b6 The exporter may be installed either via Helm or through YAML manifests with the object definitions. It's recommended to use Helm as it's more convenient to manage the configuration of the deployment. Note It's recommended to add a \"cluster\" target label to the deployment. The label is used in the provided Grafana dashboards . Helm \u00b6 The Helm chart is available on Artifact Hub. Instructions on how to manage and install the chart is available within the chart documentation. 
HPE CSI Info Metrics Provider for Prometheus Helm chart Note It's highly recommended to install the CSI Info Metrics Provider with Helm. Rancher \u00b6 Since Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 it's possible to install the HPE CSI Info Metrics Provider through the Apps interface in Rancher to use with Rancher Monitoring. Please see the Rancher partner page for more information. Advanced Install \u00b6 Before beginning an advanced install, determine how Prometheus will be deployed on the Kubernetes cluster as it will dictate how the scrape target will be configured with either a Service annotation or a ServiceMonitor CRD. Start by downloading the manifest, which needs to be modified before applying to the cluster. Version 1.0.3 \u00b6 Supports HPE CSI Driver for Kubernetes 2.0.0 and later. wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics.yaml Optional ServiceMonitor definition: wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics-service-monitor.yaml Configuring Advanced Install \u00b6 Update the main container parameters and optionally add service labels and annotations. In the \"hpe-csi-info-metrics\" Deployment at .spec.template.spec.containers[0].args in \"hpe-csi-info-metrics.yaml\": args: - \"--telemetry.addr=:9099\" - \"--telemetry.path=/metrics\" # IMPORTANT: Uncomment this argument to confirm your # acceptance of the HPE End User License Agreement at # https://www.hpe.com/us/en/software/licensing.html #- \"--accept-eula\" Remove the # in front of --accept-eula to accept the HPE license restrictions . In the \"hpe-csi-info-metrics-service\" Service : metadata: name: hpe-csi-info-metrics-service namespace: hpe-storage labels: app: hpe-csi-info-metrics # Optionally add labels, for example to be included in Prometheus # metrics via a targetLabels setting in a ServiceMonitor spec #cluster: my-cluster # Optionally add annotations, for example to configure it as a # scrape target when using the Prometheus Helm chart's default # configuration. #annotations: # \"prometheus.io/scrape\": \"true\" Apply and uncomment any custom labels. It's recommended to use a \"cluster\" label to use the provided Grafana dashboards . If Prometheus has been deployed without the Operator, uncomment the annotation. Apply the manifest: kubectl apply -f hpe-csi-info-metrics.yaml Optionally, if using the Prometheus Operator, add any additional labels in \"hpe-csi-info-metrics-service-monitor.yaml\": # Corresponding labels on the CSI Info Metrics service are added to # the scraped metrics #targetLabels: # - cluster Apply the manifest: kubectl apply -f hpe-csi-info-metrics-service-monitor.yaml Pro Tip! Avoid hand editing manifests by using the Helm chart. Grafana Dashboards \u00b6 Example Grafana dashboards, provided as is, are hosted on grafana.com .","title":"Metrics"},{"location":"csi_driver/metrics.html#hpe_csi_info_metrics_provider_for_prometheus","text":"The HPE CSI Driver for Kubernetes may be accompanied by a Prometheus metrics endpoint to provide metadata about the volumes provisioned by the CSI driver and supporting backends. It's conventionally deployed with HPE Storage Array Exporter for Prometheus to provide a richer set of metrics from the backend storage systems. 
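As an illustrative example (not taken from the chart documentation), the metric and label names described above can be queried directly in PromQL once scraping is configured, for instance to count provisioned volumes per backend and StorageClass:

count by (backend, storage_class) (hpestoragecsi_volume_info)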
HPE CSI Info Metrics Provider for Prometheus Metrics Provided Volume Info Backend Info Deployment Helm Rancher Advanced Install Version 1.0.3 Configuring Advanced Install Grafana Dashboards","title":"HPE CSI Info Metrics Provider for Prometheus"},{"location":"csi_driver/metrics.html#metrics_provided","text":"The exporter provides two metrics, \"hpestoragecsi_volume_info\" and \"hpestoragecsi_backend_info\".","title":"Metrics Provided"},{"location":"csi_driver/metrics.html#volume_info","text":"Metric Type Description Value hpestoragecsi_volume_info Gauge Indicates a volume whose provisioner is the HPE CSI Driver. 1 This metric includes the following labels. Label Description backend Backend hostname or IP address as defined in the Secret . pv PersistentVolume name. pvc PersistentVolumeClaim name. pvc_namespace PersistentVolumeClaim Namespace . storage_class StorageClass used to provision the PersistentVolume . volume Volume handle used by the backend storage system.","title":"Volume Info"},{"location":"csi_driver/metrics.html#backend_info","text":"Metric Type Description Value hpestoragecsi_backend_info Gauge Indicates a storage system for which the HPE CSI driver is a provisioner. 1 This metric includes the following labels. Label Description backend Backend hostname or IP address as defined in the Secret .","title":"Backend Info"},{"location":"csi_driver/metrics.html#deployment","text":"The exporter may be installed either via Helm or through YAML manifests with the object definitions. It's recommended to use Helm as it's more convenient to manage the configuration of the deployment. Note It's recommended to add a \"cluster\" target label to the deployment. The label is used in the provided Grafana dashboards .","title":"Deployment"},{"location":"csi_driver/metrics.html#helm","text":"The Helm chart is available on Artifact Hub. Instructions on how to manage and install the chart is available within the chart documentation. HPE CSI Info Metrics Provider for Prometheus Helm chart Note It's highly recommended to install the CSI Info Metrics Provider with Helm.","title":"Helm"},{"location":"csi_driver/metrics.html#rancher","text":"Since Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 it's possible to install the HPE CSI Info Metrics Provider through the Apps interface in Rancher to use with Rancher Monitoring. Please see the Rancher partner page for more information.","title":"Rancher"},{"location":"csi_driver/metrics.html#advanced_install","text":"Before beginning an advanced install, determine how Prometheus will be deployed on the Kubernetes cluster as it will dictate how the scrape target will be configured with either a Service annotation or a ServiceMonitor CRD. Start by downloading the manifest, which needs to be modified before applying to the cluster.","title":"Advanced Install"},{"location":"csi_driver/metrics.html#version_103","text":"Supports HPE CSI Driver for Kubernetes 2.0.0 and later. wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics.yaml Optional ServiceMonitor definition: wget https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-info-metrics/v1.0.3/hpe-csi-info-metrics-service-monitor.yaml","title":"Version 1.0.3"},{"location":"csi_driver/metrics.html#configuring_advanced_install","text":"Update the main container parameters and optionally add service labels and annotations. 
In the \"hpe-csi-info-metrics\" Deployment at .spec.template.spec.containers[0].args in \"hpe-csi-info-metrics.yaml\": args: - \"--telemetry.addr=:9099\" - \"--telemetry.path=/metrics\" # IMPORTANT: Uncomment this argument to confirm your # acceptance of the HPE End User License Agreement at # https://www.hpe.com/us/en/software/licensing.html #- \"--accept-eula\" Remove the # in front of --accept-eula to accept the HPE license restrictions . In the \"hpe-csi-info-metrics-service\" Service : metadata: name: hpe-csi-info-metrics-service namespace: hpe-storage labels: app: hpe-csi-info-metrics # Optionally add labels, for example to be included in Prometheus # metrics via a targetLabels setting in a ServiceMonitor spec #cluster: my-cluster # Optionally add annotations, for example to configure it as a # scrape target when using the Prometheus Helm chart's default # configuration. #annotations: # \"prometheus.io/scrape\": \"true\" Apply and uncomment any custom labels. It's recommended to use a \"cluster\" label to use the provided Grafana dashboards . If Prometheus has been deployed without the Operator, uncomment the annotation. Apply the manifest: kubectl apply -f hpe-csi-info-metrics.yaml Optionally, if using the Prometheus Operator, add any additional labels in \"hpe-csi-info-metrics-service-monitor.yaml\": # Corresponding labels on the CSI Info Metrics service are added to # the scraped metrics #targetLabels: # - cluster Apply the manifest: kubectl apply -f hpe-csi-info-metrics-service-monitor.yaml Pro Tip! Avoid hand editing manifests by using the Helm chart.","title":"Configuring Advanced Install"},{"location":"csi_driver/metrics.html#grafana_dashboards","text":"Example Grafana dashboards, provided as is, are hosted on grafana.com .","title":"Grafana Dashboards"},{"location":"csi_driver/monitor.html","text":"Introduction \u00b6 The HPE CSI Driver for Kubernetes includes a Kubernetes Pod Monitor. Specifically it looks for Pods with the label monitored-by: hpe-csi and has NodeLost status set on them. This usually occurs if a node becomes unresponsive or partioned due to a network outage. The Pod Monitor will delete the affected Pod and associated HPE CSI Driver VolumeAttachment to allow Kubernetes to reschedule the workload on a healthy node. Introduction CSI Driver Parameters Pod Inclusion Limitations The Pod Monitor is mandatory and automatically applied for the RWX server Deployment managed by the HPE CSI Driver. It may be used for any Pods on the Kubernetes cluster to perform a more graceful automatic recovery rather than performing a manual intervention to resurrect stuck Pods . CSI Driver Parameters \u00b6 The Pod Monitor is part of the \"hpe-csi-controller\" Deployment served by the \"hpe-csi-driver\" container. It's by default enabled and the Pod Monitor interval is set to 30 seconds. Edit the CSI driver deployment to change the interval or disable the Pod Monitor. kubectl edit -n hpe-storage deploy/hpe-csi-controller The parameters that control the \"hpe-csi-driver\" are the following: - --pod-monitor - --pod-monitor-interval=30 Pod Inclusion \u00b6 Enable the Pod Monitor for a single replica Deployment by labeling the Pod (assumes an existing PVC name \"my-pvc\" exists). 
apiVersion: apps/v1 kind: Deployment metadata: name: my-app labels: app: my-app spec: replicas: 1 strategy: type: Recreate selector: matchLabels: app: my-app template: metadata: labels: monitored-by: hpe-csi app: my-app spec: containers: - image: busybox name: busybox command: - \"sleep\" - \"4800\" volumeMounts: - mountPath: /data name: my-vol volumes: - name: my-vol persistentVolumeClaim: claimName: my-pvc Danger It's imperative that failure scenarios that are being mitigated for the application are properly tested before put into production. It's up to the CSP to fence the PersistentVolume attached to an isolated node when a new \"NodePublish\" request comes in. Node isolation is the most dangerous scenario as the workload continues to run on the node when disconnected from the outside world. Simply shutdown the kubelet to test this scenario and ensure the block device become inaccessible to the isolated node. Limitations \u00b6 Kubernetes provide automatic recovery for your applications, not high availability. Expect applications to take minutes (up to 8 minutes with the default tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable ) to fully recover during a node failure or network partition using the Pod Monitor for Pods with PersistentVolumeClaims . HPE CSI Driver 2.3.0 to 2.4.1 are inffective on StatefulSets due to an upstream API update that did not take the force flag into account. Using the Pod Monitor on a workload controller besides a Deployment configured with .spec.strategy.type \"Recreate\" or a StatefulSet is unsupported. The consequence of using other settings and controllers may have undesired side effects such as rendering \"multi-attach\" errors for PersistentVolumeClaims and may delay recovery.","title":"Pod Monitor"},{"location":"csi_driver/monitor.html#introduction","text":"The HPE CSI Driver for Kubernetes includes a Kubernetes Pod Monitor. Specifically it looks for Pods with the label monitored-by: hpe-csi and has NodeLost status set on them. This usually occurs if a node becomes unresponsive or partioned due to a network outage. The Pod Monitor will delete the affected Pod and associated HPE CSI Driver VolumeAttachment to allow Kubernetes to reschedule the workload on a healthy node. Introduction CSI Driver Parameters Pod Inclusion Limitations The Pod Monitor is mandatory and automatically applied for the RWX server Deployment managed by the HPE CSI Driver. It may be used for any Pods on the Kubernetes cluster to perform a more graceful automatic recovery rather than performing a manual intervention to resurrect stuck Pods .","title":"Introduction"},{"location":"csi_driver/monitor.html#csi_driver_parameters","text":"The Pod Monitor is part of the \"hpe-csi-controller\" Deployment served by the \"hpe-csi-driver\" container. It's by default enabled and the Pod Monitor interval is set to 30 seconds. Edit the CSI driver deployment to change the interval or disable the Pod Monitor. kubectl edit -n hpe-storage deploy/hpe-csi-controller The parameters that control the \"hpe-csi-driver\" are the following: - --pod-monitor - --pod-monitor-interval=30","title":"CSI Driver Parameters"},{"location":"csi_driver/monitor.html#pod_inclusion","text":"Enable the Pod Monitor for a single replica Deployment by labeling the Pod (assumes an existing PVC name \"my-pvc\" exists). 
apiVersion: apps/v1 kind: Deployment metadata: name: my-app labels: app: my-app spec: replicas: 1 strategy: type: Recreate selector: matchLabels: app: my-app template: metadata: labels: monitored-by: hpe-csi app: my-app spec: containers: - image: busybox name: busybox command: - \"sleep\" - \"4800\" volumeMounts: - mountPath: /data name: my-vol volumes: - name: my-vol persistentVolumeClaim: claimName: my-pvc Danger It's imperative that failure scenarios that are being mitigated for the application are properly tested before put into production. It's up to the CSP to fence the PersistentVolume attached to an isolated node when a new \"NodePublish\" request comes in. Node isolation is the most dangerous scenario as the workload continues to run on the node when disconnected from the outside world. Simply shutdown the kubelet to test this scenario and ensure the block device become inaccessible to the isolated node.","title":"Pod Inclusion"},{"location":"csi_driver/monitor.html#limitations","text":"Kubernetes provide automatic recovery for your applications, not high availability. Expect applications to take minutes (up to 8 minutes with the default tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable ) to fully recover during a node failure or network partition using the Pod Monitor for Pods with PersistentVolumeClaims . HPE CSI Driver 2.3.0 to 2.4.1 are inffective on StatefulSets due to an upstream API update that did not take the force flag into account. Using the Pod Monitor on a workload controller besides a Deployment configured with .spec.strategy.type \"Recreate\" or a StatefulSet is unsupported. The consequence of using other settings and controllers may have undesired side effects such as rendering \"multi-attach\" errors for PersistentVolumeClaims and may delay recovery.","title":"Limitations"},{"location":"csi_driver/operations.html","text":"Overview \u00b6 The documentation in this section illustrates officially HPE supported procedures to perform maintenance tasks on the CSI driver outside the scope of deploying and uninstalling the driver. Overview Migrate Encrypted Volumes Assumptions Prepare the Workload and Persistent Volume Claims Create a new Persistent Volume Claim and Update Retain Policies Important Validation Steps Copy Persistent Volume Claim and Reset PVCs with volumeMode: Filesystem PVCs with volumeMode: Block Restart the Workload Optional Workflow with Filesystem Persistent Volume Claims Upgrade NFS Servers Upgrade to v2.5.0 Upgrade to v2.4.2 Upgrade to v2.4.1 Assumptions Patch Running NFS Servers Validation Manual Node Configuration Stages of Initialization disableNodeConformance disableNodeConfiguration Mandatory Configuration iSCSI Configuration iscsid.conf Multipath Configuration multipath.conf Important Considerations Expose NFS Services Outside of the Kubernetes Cluster From ClusterIP to LoadBalancer MetalLB Example Mount the NFS Server from an NFS Client Migrate Encrypted Volumes \u00b6 Persistent volumes created with v2.1.1 or below using volume encryption , the CSI driver use LUKS2 (WikiPedia: Linux Unified Key Setup ) and can't expand the PersistentVolumeClaim . With v2.2.0 and above, LUKS1 is used and the CSI driver is capable of expanding the PVC . This procedure migrate (copy) data from LUKS2 to LUKS1 PVCs to allow expansion of the volume. Note It's not a limitation of LUKS2 to not allow expansion but rather how the CSI driver interact with the host. Assumptions \u00b6 These are the assumptions made throughout this procedure. 
Data to be migrated has a good backup to restore to, not just a snapshot. HPE CSI Driver for Kubernetes v2.3.0 or later installed. Worker nodes with access to the Quay registry and SCOD. Access to the commands kubectl , curl , jq and yq . Cluster privileges to manipulate PersistentVolumes . None of the commands executed should return errors or have non-zero exit codes. Only ReadWriteOnce PVCs are covered. No custom PVC annotations. Tip There are many different ways to copy PVCs . These steps outlines and uses one particular method developed and tested by HPE and similar workflows may be applied with other tools and procedures. Prepare the Workload and Persistent Volume Claims \u00b6 First, identify the PersistentVolume to migrate from and set shell variables. export OLD_SRC_PVC= export OLD_SRC_PV=$(kubectl get pvc -o json | \\ jq -r \".items[] | \\ select(.metadata.name | \\ test(\\\"${OLD_SRC_PVC}\\\"))\".spec.volumeName) Important Ensure these shell variables are set at all times. In order to copy data out of a PVC , the running workload needs to be disassociated with the PVC . It's not possible to scale the replicas to zero, the exception being ReadWriteMany PVCs which could lead to data inconsistency problems. These procedures assumes application consistency by having the workload shut down. It's out of scope for this procedure to demonstrate how to shut down a particular workload. Ensure there are no volumeattachments associated with the PersistentVolume . kubectl get volumeattachment -o json | \\ jq -r \".items[] | \\ select(.spec.source.persistentVolumeName | \\ test(\\\"${OLD_SRC_PV}\\\"))\".spec.source Tip For large volumeMode: Filesystem PVCs where copying data may take days, it's recommended to use the Optional Workflow with Filesystem Persistent Volume Claims that utilizes the PVC dataSource capability. Create a new Persistent Volume Claim and Update Retain Policies \u00b6 Create a new PVC named \"new-pvc\" with enough space to host the data from the old source PVC . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: new-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi volumeMode: Filesystem Important If the source PVC is a raw block volume, ensure volumeMode: Block is set on the new PVC . Edit and set the shell variables for the newly created PVC . export NEW_DST_PVC_SIZE=32Gi export NEW_DST_PVC_VOLMODE=Filesystem export NEW_DST_PVC=new-pvc export NEW_DST_PV=$(kubectl get pvc -o json | \\ jq -r \".items[] | \\ select(.metadata.name | \\ test(\\\"${NEW_DST_PVC}\\\"))\".spec.volumeName) Hint The PVC name \"new-pvc\" is a placeholder name. When the procedure is done, the PVC will have its original name restored. Important Validation Steps \u00b6 At this point, there should be six shell variables declared. Example: $ env | grep _PV NEW_DST_PVC_SIZE=32Gi NEW_DST_PVC=new-pvc OLD_SRC_PVC=old-pvc <-- This should be the original name of the PVC NEW_DST_PVC_VOLMODE=Filesystem NEW_DST_PV=pvc-ad7a05a9-c410-4c63-b997-51fb9fc473bf OLD_SRC_PV=pvc-ca7c2f64-641d-4265-90f8-4aed888bd2c5 Regardless of the retainPolicy set in the StorageClass , ensure the persistentVolumeReclaimPolicy is set to \"Retain\" for both PVs . kubectl patch pv/${OLD_SRC_PV} pv/${NEW_DST_PV} \\ -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}' Data Loss Warning It's EXTREMELY important no errors are returned from the above command. It WILL lead to data loss. Validate the \"persistentVolumeReclaimPolicy\". 
kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \\ jq -r \".items[] | \\ select(.metadata.name)\".spec.persistentVolumeReclaimPolicy Important The above command should output nothing but two lines with the word \"Retain\" on it. Copy Persistent Volume Claim and Reset \u00b6 In this phase, the data will be copied from the original PVC to the new PVC with a Job submitted to the cluster. Different tools are being used to perform the copy operation, ensure to pick the correct volumeMode . PVCs with volumeMode: Filesystem \u00b6 curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-file.yaml | \\ yq \"( select(.spec.template.spec.volumes[] | \\ select(.name == \\\"src-pv\\\") | \\ .persistentVolumeClaim.claimName = \\\"${OLD_SRC_PVC}\\\") \" | kubectl apply -f- Wait for the Job to complete. kubectl get job.batch/pvc-copy-file -w Once the Job has completed, validate exit status and log files. kubectl get job.batch/pvc-copy-file -o jsonpath='{.status.succeeded}' kubectl logs job.batch/pvc-copy-file Delete the Job from the cluster. kubectl delete job.batch/pvc-copy-file Proceed to restart the workload . PVCs with volumeMode: Block \u00b6 curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-block.yaml | \\ yq \"( select(.spec.template.spec.volumes[] | \\ select(.name == \\\"src-pv\\\") | \\ .persistentVolumeClaim.claimName = \\\"${OLD_SRC_PVC}\\\") \" | kubectl apply -f- Wait for the Job to complete. kubectl get job.batch/pvc-copy-block -w Hint Data is copied block for block, verbatim, regardless of how much application data is stored in the block devices. Once the Job has completed, validate exit status and log files. kubectl get job.batch/pvc-copy-block -o jsonpath='{.status.succeeded}' kubectl logs job.batch/pvc-copy-block Delete the Job from the cluster. kubectl delete job.batch/pvc-copy-block Proceed to restart the workload . Restart the Workload \u00b6 This step requires both the old source PVC and the new destination PVC to be deleted. Once again, ensure the correct persistentVolumeReclaimPolicy is set on the PVs . kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \\ jq -r \".items[] | \\ select(.metadata.name)\".spec.persistentVolumeReclaimPolicy Important The above command should output nothing but two lines with the word \"Retain\" on it, if not revisit Important Validation Steps to apply the policy and ensure environment variables are set correctly. Delete the PVCs . kubectl delete pvc/${OLD_SRC_PVC} pvc/${NEW_DST_PVC} Next, allow the new PV to be reclaimed. kubectl patch pv ${NEW_DST_PV} -p '{\"spec\":{\"claimRef\": null }}' Next, create a PVC with the old source name and ensure it matches the size of the new destination PVC . curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy.yaml | \\ yq \".spec.volumeName = \\\"${NEW_DST_PV}\\\" | \\ .metadata.name = \\\"${OLD_SRC_PVC}\\\" | \\ .spec.volumeMode = \\\"${NEW_DST_PVC_VOLMODE}\\\" | \\ .spec.resources.requests.storage = \\\"${NEW_DST_PVC_SIZE}\\\" \\ \" | kubectl apply -f- Verify the new PVC is \"Bound\" to the correct PV . kubectl get pvc/${OLD_SRC_PVC} -o json | \\ jq -r \". | \\ select(.spec.volumeName == \\\"${NEW_DST_PV}\\\").metadata.name\" If the command is successful, it should output your original PVC name. At this point the original workload should be deployed, verified and resumed. Optionally, the old source PV may be removed. 
kubectl delete pv/${OLD_SRC_PV} Optional Workflow with Filesystem Persistent Volume Claims \u00b6 If there's a lot of content (millions of files, terabytes of data) that needs to be transferred in a volumeMode: Filesystem PVC it's recommended to transfer content incrementally. This is achieved by substituting the \"old-pvc\" with a dataSource clone of the running workload and performing the copy from the clone onto the \"new-pvc\". After the first transfer completes, the copy job may be recreated as many times as needed with a fresh clone of \"old-pvc\" until the downtime window has shrunk to an acceptable duration. For the final transfer, the actual source PVC will be used instead of the clone. This is an example PVC . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: clone-of-pvc spec: dataSource: name: this-is-the-current-prod-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 32Gi Tip The capacity of the dataSource clone must match the original PVC . Enabling and setting up the CSI snapshotter and related CRDs is not necessary but it's recommended to be familiar with using CSI snapshots . Upgrade NFS Servers \u00b6 In the event the CSI driver contains updates to the NFS Server Provisioner, any running NFS server needs to be updated manually. Upgrade to v2.5.0 \u00b6 Any previously deployed NFS servers may be upgraded to v2.5.0. Upgrade to v2.4.2 \u00b6 No changes to the NFS Server Provisioner image between v2.4.1 and v2.4.2. Upgrade to v2.4.1 \u00b6 Any previously deployed NFS servers may be upgraded to v2.4.1. Important With v2.4.0 and onwards the NFS servers are deployed with default resource limits and in v2.5.0 resource requests were added. Those won't be applied on running NFS servers, only new ones. Assumptions \u00b6 HPE CSI Driver or Operator v2.4.1 installed. All running NFS servers are running in the \"hpe-nfs\" Namespace . Worker nodes with access to the Quay registry and SCOD. Access to the commands kubectl , yq and curl . Cluster privileges to manipulate resources in the \"hpe-nfs\" Namespace . None of the commands executed should return errors or have non-zero exit codes. Seealso If NFS Deployments are scattered across Namespaces , use the Validation steps to find where they reside. Patch Running NFS Servers \u00b6 When patching the NFS Deployments , the Pods will restart and cause a pause in I/O for the NFS clients with active mounts. The clients will recover gracefully once the NFS Pod is running again. Patch all NFS Deployments with the following. curl -s https://scod.hpedev.io/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml | \\ kubectl patch -n hpe-nfs \\ $(kubectl get deploy -n hpe-nfs -o name) \\ --patch-file=/dev/stdin Tip If it's desired to patch one NFS Deployment at a time, replace the shell substitution with a Deployment name. Validation \u00b6 This command will list all \"hpe-nfs\" Deployments across the entire cluster. Each Deployment should be using v3.0.5 of the \"nfs-provisioner\" image after the upgrade is complete. kubectl get deploy -A -o yaml | \\ yq -r '.items[] | [] + { \"Namespace\": select(.spec.template.spec.containers[].name == \"hpe-nfs\").metadata.namespace, \"Deployment\": select(.spec.template.spec.containers[].name == \"hpe-nfs\").metadata.name, \"Image\": select(.spec.template.spec.containers[].name == \"hpe-nfs\").spec.template.spec.containers[].image }' Note The above line is very long. 
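Tip As a shorter sketch, assuming all NFS servers run in the "hpe-nfs" Namespace (the default), kubectl alone can list the image used by each NFS Deployment: kubectl get deploy -n hpe-nfs -o custom-columns='NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image' Unlike the cluster-wide yq variant above, this does not discover NFS Deployments placed in other Namespaces. 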
Manual Node Configuration \u00b6 With the release of HPE CSI Driver v2.4.0, it's possible to completely disable the node conformance and node configuration performed by the CSI node driver at startup. This transfers the responsibility from the HPE CSI Driver to the Kubernetes cluster administrator to ensure worker nodes boot with a supported configuration. Important This feature is mainly for users who require 100% control of the worker nodes. Stages of Initialization \u00b6 There are two stages of initialization the administrator can control through parameters in the Helm chart. disableNodeConformance \u00b6 The node conformance runs with the entrypoint of the node driver container. The conformance inserts and runs a systemd service on the node that installs all required packages on the node to allow nodes to attach block storage devices and mount NFS exports. It starts all the required services and configures an important udev rule on the worker node. This flag was intended to allow administrators to run the CSI driver on nodes with an unsupported or unconfigured package manager. If node conformance needs to be disabled for any reason, these packages and services need to be installed and running prior to installing the HPE CSI Driver: iSCSI (not necessary when using FC) Multipath XFS programs/utilities NFSv4 client Package names and services vary greatly between different Linux distributions and it's the system administrator's duty to ensure these are available to the HPE CSI Driver. disableNodeConfiguration \u00b6 When disabling node configuration, the CSI node driver will not touch the node at all. Besides indirectly disabling node conformance, all attempts to write configuration files or manipulate services during runtime are disabled. Mandatory Configuration \u00b6 These steps are REQUIRED for disabling either node configuration or conformance. On each current and future worker node in the cluster: # Don't let udev automatically scan targets(all luns) on Unit Attention. # This will prevent udev scanning devices which we are attempting to remove. if [ -f /lib/udev/rules.d/90-scsi-ua.rules ]; then sed -i 's/^[^#]*scan-scsi-target/#&/' /lib/udev/rules.d/90-scsi-ua.rules udevadm control --reload-rules fi iSCSI Configuration \u00b6 Skip this step if only Fibre Channel is being used. This step is only required when node configuration is disabled. iscsid.conf \u00b6 This example is taken from a Rocky Linux 9.2 node with the HPE parameters applied. Certain parameters may differ for other distributions of either iSCSI or the host OS. Note The location of this file varies between Linux and iSCSI distributions. Ensure iscsid is stopped. 
systemctl stop iscsid Download : /etc/iscsi/iscsid.conf iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket node.startup = manual node.leading_login = No node.session.timeo.replacement_timeout = 10 node.conn[0].timeo.login_timeout = 15 node.conn[0].timeo.logout_timeout = 15 node.conn[0].timeo.noop_out_interval = 5 node.conn[0].timeo.noop_out_timeout = 10 node.session.err_timeo.abort_timeout = 15 node.session.err_timeo.lu_reset_timeout = 30 node.session.err_timeo.tgt_reset_timeout = 30 node.session.initial_login_retry_max = 8 node.session.cmds_max = 512 node.session.queue_depth = 256 node.session.xmit_thread_priority = -20 node.session.iscsi.InitialR2T = No node.session.iscsi.ImmediateData = Yes node.session.iscsi.FirstBurstLength = 262144 node.session.iscsi.MaxBurstLength = 16776192 node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144 node.conn[0].iscsi.MaxXmitDataSegmentLength = 0 discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768 node.conn[0].iscsi.HeaderDigest = None node.session.nr_sessions = 1 node.session.reopen_max = 0 node.session.iscsi.FastAbort = Yes node.session.scan = auto Pro tip! When nodes are provisioned from some sort of templating system with iSCSI pre-installed, it's notoriously common that nodes are provisioned with identical IQNs. This will lead to device attachment problems that aren't obvious to the user. Make sure each node has a unique IQN. Ensure iscsid is running and enabled: systemctl enable --now iscsid Seealso Some Linux distributions requires the iscsi_tcp kernel module to be loaded. Where kernel modules are loaded varies between Linux distributions. Multipath Configuration \u00b6 This step is only required when node configuration is disabled. multipath.conf \u00b6 The defaults section of the configuration file is merely a preference, make sure to leave the device and blacklist stanzas intact when potentially adding more entries from foreign devices. Note The location of this file varies between Linux and iSCSI distributions. Ensure multipathd is stopped. 
systemctl stop multipathd Download : /etc/multipath.conf defaults { user_friendly_names yes find_multipaths no uxsock_timeout 10000 } blacklist { devnode \"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*\" devnode \"^hd[a-z]\" device { product \".*\" vendor \".*\" } } blacklist_exceptions { property \"(ID_WWN|SCSI_IDENT_.*|ID_SERIAL)\" device { vendor \"Nimble\" product \"Server\" } device { product \"VV\" vendor \"3PARdata\" } device { vendor \"TrueNAS\" product \"iSCSI Disk\" } device { vendor \"FreeNAS\" product \"iSCSI Disk\" } } devices { device { product \"Server\" rr_min_io_rq 1 dev_loss_tmo infinity path_checker tur rr_weight uniform no_path_retry 30 path_selector \"service-time 0\" failback immediate fast_io_fail_tmo 5 vendor \"Nimble\" hardware_handler \"1 alua\" path_grouping_policy group_by_prio prio alua } device { path_grouping_policy group_by_prio path_checker tur rr_weight \"uniform\" prio alua failback immediate hardware_handler \"1 alua\" no_path_retry 18 fast_io_fail_tmo 10 path_selector \"round-robin 0\" vendor \"3PARdata\" dev_loss_tmo infinity detect_prio yes features \"0\" rr_min_io_rq 1 product \"VV\" } device { path_selector \"queue-length 0\" rr_weight priorities uid_attribute ID_SERIAL vendor \"TrueNAS\" product \"iSCSI Disk\" path_grouping_policy group_by_prio } device { path_selector \"queue-length 0\" hardware_handler \"1 alua\" rr_weight priorities uid_attribute ID_SERIAL vendor \"FreeNAS\" product \"iSCSI Disk\" path_grouping_policy group_by_prio } } Ensure multipathd is running and enabled: systemctl enable --now multipathd Important Considerations \u00b6 While both disabling conformance and configuration parameters lends itself to a more predictable behaviour when deploying nodes from templates with less runtime configuration, it's still not a complete solution for having immutable nodes. The CSI node driver creates a unique identity for the node and stores it in /etc/hpe-storage/node.gob . This file must persist across reboots and redeployments of the node OS image. Immutable Linux distributions such as CoreOS persist the /etc directory, some don't. Expose NFS Services Outside of the Kubernetes Cluster \u00b6 In certain situations it's practical to expose the NFS exports outside the Kubernetes cluster to allow external applications to access data as part of an ETL (Extract, Transform, Load) pipeline or similar. Since this is an untested feature with questionable security standards, HPE does not recommend using this facility in production at this time. Reach out to your HPE account representative if this is a critical feature for your workloads. Danger The exports on the NFS servers does not have any network Access Control Lists (ACL) without root squash. Anyone with an NFS client that can reach the load balancer IP address have full access to the filesystem. From ClusterIP to LoadBalancer \u00b6 The NFS server Service must be transformed into a \"LoadBalancer\". In this example we'll assume a \"RWX\" PersistentVolumeClaim named \"my-pvc-1\" and NFS resources deployed in the default Namespace , \"hpe-nfs\". Retrieve NFS UUID export UUID=$(kubectl get pvc my-pvc-1 -o jsonpath='{.spec.volumeName}{\"\\n\"}' | awk -Fpvc- '{print $2}') Patch the NFS Service : kubectl patch -n hpe-nfs svc/hpe-nfs-${UUID} -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' The Service will be assigned an external IP address by the load balancer deployed in the cluster. If there is no load balancer deployed, a MetalLB example is provided below. 
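Tip To verify the change took effect, assuming the UUID shell variable from the previous step is still set, watch the Service until the load balancer assigns an address: kubectl get svc -n hpe-nfs hpe-nfs-${UUID} -w The EXTERNAL-IP column shows "pending" until an address has been assigned. 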
MetalLB Example \u00b6 Deploying MetalLB is outside the scope of this document. In this example, MetalLB was deployed on OpenShift 4.16 (Kubernetes v1.29) using the Operator provided by Red Hat in the \"metallb-system\" Namespace . Determine the IP address range that will be assigned to the load balancers. In this example, 192.168.1.40 to 192.168.1.60 is being used. Note that the worker nodes in this cluster already have reachable IP addresses in the 192.168.1.0/24 network, which is a requirement. Create the MetalLB instances, IP address pool and Layer 2 advertisement. --- apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: hpe-nfs-servers spec: protocol: layer2 addresses: - 192.168.1.40-192.168.1.60 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - hpe-nfs-servers Shortly, the external IP address of the NFS Service patched in the previous steps should have an IP address assigned. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) hpe-nfs-UUID LoadBalancer 172.30.217.203 192.168.1.40 Mount the NFS Server from an NFS Client \u00b6 Mounting the NFS export externally is now possible. As root: mount -t nfs4 192.168.1.40:/export /mnt Note If the NFS server is rescheduled in the Kubernetes cluster, the load balancer IP address follows, and the client will recover and resume IO after a few minutes.","title":"Auxiliary Operations"},{"location":"csi_driver/operations.html#overview","text":"The documentation in this section illustrates officially HPE supported procedures to perform maintenance tasks on the CSI driver outside the scope of deploying and uninstalling the driver. Overview Migrate Encrypted Volumes Assumptions Prepare the Workload and Persistent Volume Claims Create a new Persistent Volume Claim and Update Retain Policies Important Validation Steps Copy Persistent Volume Claim and Reset PVCs with volumeMode: Filesystem PVCs with volumeMode: Block Restart the Workload Optional Workflow with Filesystem Persistent Volume Claims Upgrade NFS Servers Upgrade to v2.5.0 Upgrade to v2.4.2 Upgrade to v2.4.1 Assumptions Patch Running NFS Servers Validation Manual Node Configuration Stages of Initialization disableNodeConformance disableNodeConfiguration Mandatory Configuration iSCSI Configuration iscsid.conf Multipath Configuration multipath.conf Important Considerations Expose NFS Services Outside of the Kubernetes Cluster From ClusterIP to LoadBalancer MetalLB Example Mount the NFS Server from an NFS Client","title":"Overview"},{"location":"csi_driver/operations.html#migrate_encrypted_volumes","text":"Persistent volumes created with v2.1.1 or below using volume encryption , the CSI driver use LUKS2 (WikiPedia: Linux Unified Key Setup ) and can't expand the PersistentVolumeClaim . With v2.2.0 and above, LUKS1 is used and the CSI driver is capable of expanding the PVC . This procedure migrate (copy) data from LUKS2 to LUKS1 PVCs to allow expansion of the volume. Note It's not a limitation of LUKS2 to not allow expansion but rather how the CSI driver interact with the host.","title":"Migrate Encrypted Volumes"},{"location":"csi_driver/operations.html#assumptions","text":"These are the assumptions made throughout this procedure. Data to be migrated has a good backup to restore to, not just a snapshot. HPE CSI Driver for Kubernetes v2.3.0 or later installed. 
Worker nodes with access to the Quay registry and SCOD. Access to the commands kubectl , curl , jq and yq . Cluster privileges to manipulate PersistentVolumes . None of the commands executed should return errors or have non-zero exit codes. Only ReadWriteOnce PVCs are covered. No custom PVC annotations. Tip There are many different ways to copy PVCs . These steps outlines and uses one particular method developed and tested by HPE and similar workflows may be applied with other tools and procedures.","title":"Assumptions"},{"location":"csi_driver/operations.html#prepare_the_workload_and_persistent_volume_claims","text":"First, identify the PersistentVolume to migrate from and set shell variables. export OLD_SRC_PVC= export OLD_SRC_PV=$(kubectl get pvc -o json | \\ jq -r \".items[] | \\ select(.metadata.name | \\ test(\\\"${OLD_SRC_PVC}\\\"))\".spec.volumeName) Important Ensure these shell variables are set at all times. In order to copy data out of a PVC , the running workload needs to be disassociated with the PVC . It's not possible to scale the replicas to zero, the exception being ReadWriteMany PVCs which could lead to data inconsistency problems. These procedures assumes application consistency by having the workload shut down. It's out of scope for this procedure to demonstrate how to shut down a particular workload. Ensure there are no volumeattachments associated with the PersistentVolume . kubectl get volumeattachment -o json | \\ jq -r \".items[] | \\ select(.spec.source.persistentVolumeName | \\ test(\\\"${OLD_SRC_PV}\\\"))\".spec.source Tip For large volumeMode: Filesystem PVCs where copying data may take days, it's recommended to use the Optional Workflow with Filesystem Persistent Volume Claims that utilizes the PVC dataSource capability.","title":"Prepare the Workload and Persistent Volume Claims"},{"location":"csi_driver/operations.html#create_a_new_persistent_volume_claim_and_update_retain_policies","text":"Create a new PVC named \"new-pvc\" with enough space to host the data from the old source PVC . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: new-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi volumeMode: Filesystem Important If the source PVC is a raw block volume, ensure volumeMode: Block is set on the new PVC . Edit and set the shell variables for the newly created PVC . export NEW_DST_PVC_SIZE=32Gi export NEW_DST_PVC_VOLMODE=Filesystem export NEW_DST_PVC=new-pvc export NEW_DST_PV=$(kubectl get pvc -o json | \\ jq -r \".items[] | \\ select(.metadata.name | \\ test(\\\"${NEW_DST_PVC}\\\"))\".spec.volumeName) Hint The PVC name \"new-pvc\" is a placeholder name. When the procedure is done, the PVC will have its original name restored.","title":"Create a new Persistent Volume Claim and Update Retain Policies"},{"location":"csi_driver/operations.html#important_validation_steps","text":"At this point, there should be six shell variables declared. Example: $ env | grep _PV NEW_DST_PVC_SIZE=32Gi NEW_DST_PVC=new-pvc OLD_SRC_PVC=old-pvc <-- This should be the original name of the PVC NEW_DST_PVC_VOLMODE=Filesystem NEW_DST_PV=pvc-ad7a05a9-c410-4c63-b997-51fb9fc473bf OLD_SRC_PV=pvc-ca7c2f64-641d-4265-90f8-4aed888bd2c5 Regardless of the retainPolicy set in the StorageClass , ensure the persistentVolumeReclaimPolicy is set to \"Retain\" for both PVs . 
kubectl patch pv/${OLD_SRC_PV} pv/${NEW_DST_PV} \\ -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}' Data Loss Warning It's EXTREMELY important no errors are returned from the above command. It WILL lead to data loss. Validate the \"persistentVolumeReclaimPolicy\". kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \\ jq -r \".items[] | \\ select(.metadata.name)\".spec.persistentVolumeReclaimPolicy Important The above command should output nothing but two lines with the word \"Retain\" on it.","title":"Important Validation Steps"},{"location":"csi_driver/operations.html#copy_persistent_volume_claim_and_reset","text":"In this phase, the data will be copied from the original PVC to the new PVC with a Job submitted to the cluster. Different tools are being used to perform the copy operation, ensure to pick the correct volumeMode .","title":"Copy Persistent Volume Claim and Reset"},{"location":"csi_driver/operations.html#pvcs_with_volumemode_filesystem","text":"curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-file.yaml | \\ yq \"( select(.spec.template.spec.volumes[] | \\ select(.name == \\\"src-pv\\\") | \\ .persistentVolumeClaim.claimName = \\\"${OLD_SRC_PVC}\\\") \" | kubectl apply -f- Wait for the Job to complete. kubectl get job.batch/pvc-copy-file -w Once the Job has completed, validate exit status and log files. kubectl get job.batch/pvc-copy-file -o jsonpath='{.status.succeeded}' kubectl logs job.batch/pvc-copy-file Delete the Job from the cluster. kubectl delete job.batch/pvc-copy-file Proceed to restart the workload .","title":"PVCs with volumeMode: Filesystem"},{"location":"csi_driver/operations.html#pvcs_with_volumemode_block","text":"curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy-block.yaml | \\ yq \"( select(.spec.template.spec.volumes[] | \\ select(.name == \\\"src-pv\\\") | \\ .persistentVolumeClaim.claimName = \\\"${OLD_SRC_PVC}\\\") \" | kubectl apply -f- Wait for the Job to complete. kubectl get job.batch/pvc-copy-block -w Hint Data is copied block for block, verbatim, regardless of how much application data is stored in the block devices. Once the Job has completed, validate exit status and log files. kubectl get job.batch/pvc-copy-block -o jsonpath='{.status.succeeded}' kubectl logs job.batch/pvc-copy-block Delete the Job from the cluster. kubectl delete job.batch/pvc-copy-block Proceed to restart the workload .","title":"PVCs with volumeMode: Block"},{"location":"csi_driver/operations.html#restart_the_workload","text":"This step requires both the old source PVC and the new destination PVC to be deleted. Once again, ensure the correct persistentVolumeReclaimPolicy is set on the PVs . kubectl get pv/${OLD_SRC_PV} pv/${NEW_DST_PV} -o json | \\ jq -r \".items[] | \\ select(.metadata.name)\".spec.persistentVolumeReclaimPolicy Important The above command should output nothing but two lines with the word \"Retain\" on it, if not revisit Important Validation Steps to apply the policy and ensure environment variables are set correctly. Delete the PVCs . kubectl delete pvc/${OLD_SRC_PVC} pvc/${NEW_DST_PVC} Next, allow the new PV to be reclaimed. kubectl patch pv ${NEW_DST_PV} -p '{\"spec\":{\"claimRef\": null }}' Next, create a PVC with the old source name and ensure it matches the size of the new destination PVC . 
curl -s https://scod.hpedev.io/csi_driver/examples/operations/pvc-copy.yaml | \\ yq \".spec.volumeName = \\\"${NEW_DST_PV}\\\" | \\ .metadata.name = \\\"${OLD_SRC_PVC}\\\" | \\ .spec.volumeMode = \\\"${NEW_DST_PVC_VOLMODE}\\\" | \\ .spec.resources.requests.storage = \\\"${NEW_DST_PVC_SIZE}\\\" \\ \" | kubectl apply -f- Verify the new PVC is \"Bound\" to the correct PV . kubectl get pvc/${OLD_SRC_PVC} -o json | \\ jq -r \". | \\ select(.spec.volumeName == \\\"${NEW_DST_PV}\\\").metadata.name\" If the command is successful, it should output your original PVC name. At this point the original workload should be deployed, verified and resumed. Optionally, the old source PV may be removed. kubectl delete pv/${OLD_SRC_PV}","title":"Restart the Workload"},{"location":"csi_driver/operations.html#optional_workflow_with_filesystem_persistent_volume_claims","text":"If there's a lot of content (millions of files, terabytes of data) that needs to be transferred in a volumeMode: Filesystem PVC it's recommended to transfer content incrementally. This is achieved by substituting the \"old-pvc\" with a dataSource clone of the running workload and perform the copy from the clone onto the \"new-pvc\". After the first transfer completes, the copy job may be recreated as many times as needed with a fresh clone of \"old-pvc\" until the downtime window has shrunk to an acceptable duration. For the final transfer, the actual source PVC will be used instead of the clone. This is an example PVC . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: clone-of-pvc spec: dataSource: name: this-is-the-current-prod-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 32Gi Tip The capacity of the dataSource clone must match the original PVC . Enabling and setting up the CSI snapshotter and related CRDs is not necessary but it's recommended to be familiar with using CSI snapshots .","title":"Optional Workflow with Filesystem Persistent Volume Claims"},{"location":"csi_driver/operations.html#upgrade_nfs_servers","text":"In the event the CSI driver contains updates to the NFS Server Provisioner, any running NFS server needs to be updated manually.","title":"Upgrade NFS Servers"},{"location":"csi_driver/operations.html#upgrade_to_v250","text":"Any prior deployed NFS servers may be upgraded to v2.5.0.","title":"Upgrade to v2.5.0"},{"location":"csi_driver/operations.html#upgrade_to_v242","text":"No changes to NFS Server Provisioner image between v2.4.1 and v2.4.2.","title":"Upgrade to v2.4.2"},{"location":"csi_driver/operations.html#upgrade_to_v241","text":"Any prior deployed NFS servers may be upgraded to v2.4.1. Important With v2.4.0 and onwards the NFS servers are deployed with default resource limits and in v2.5.0 resource requests were added. Those won't be applied on running NFS servers, only new ones.","title":"Upgrade to v2.4.1"},{"location":"csi_driver/operations.html#assumptions_1","text":"HPE CSI Driver or Operator v2.4.1 installed. All running NFS servers are running in the \"hpe-nfs\" Namespace . Worker nodes with access to the Quay registry and SCOD. Access to the commands kubectl , yq and curl . Cluster privileges to manipulate resources in the \"hpe-nfs\" Namespace . None of the commands executed should return errors or have non-zero exit codes. 
Seealso If NFS Deployments are scattered across Namespaces , use the Validation steps to find where they reside.","title":"Assumptions"},{"location":"csi_driver/operations.html#patch_running_nfs_servers","text":"When patching the NFS Deployments , the Pods will restart and cause a pause in I/O for the NFS clients with active mounts. The clients will recover gracefully once the NFS Pod is running again. Patch all NFS Deployments with the following. curl -s https://scod.hpedev.io/csi_driver/examples/operations/patch-nfs-server-2.5.0.yaml | \\ kubectl patch -n hpe-nfs \\ $(kubectl get deploy -n hpe-nfs -o name) \\ --patch-file=/dev/stdin Tip If it's desired to patch one NFS Deployment at a time, replace the shell substitution with a Deployment name.","title":"Patch Running NFS Servers"},{"location":"csi_driver/operations.html#validation","text":"This command will list all \"hpe-nfs\" Deployments across the entire cluster. Each Deployment should be using v3.0.5 of the \"nfs-provisioner\" image after the upgrade is complete. kubectl get deploy -A -o yaml | \\ yq -r '.items[] | [] + { \"Namespace\": select(.spec.template.spec.containers[].name == \"hpe-nfs\").metadata.namespace, \"Deployment\": select(.spec.template.spec.containers[].name == \"hpe-nfs\").metadata.name, \"Image\": select(.spec.template.spec.containers[].name == \"hpe-nfs\").spec.template.spec.containers[].image }' Note The above line is very long.","title":"Validation"},{"location":"csi_driver/operations.html#manual_node_configuration","text":"With the release of HPE CSI Driver v2.4.0, it's possible to completely disable the node conformance and node configuration performed by the CSI node driver at startup. This transfers the responsibility from the HPE CSI Driver to the Kubernetes cluster administrator to ensure worker nodes boot with a supported configuration. Important This feature is mainly for users who require 100% control of the worker nodes.","title":"Manual Node Configuration"},{"location":"csi_driver/operations.html#stages_of_initialization","text":"There are two stages of initialization the administrator can control through parameters in the Helm chart.","title":"Stages of Initialization"},{"location":"csi_driver/operations.html#disablenodeconformance","text":"The node conformance runs with the entrypoint of the node driver container. The conformance inserts and runs a systemd service on the node that installs all required packages on the node to allow nodes to attach block storage devices and mount NFS exports. It starts all the required services and configures an important udev rule on the worker node. This flag was intended to allow administrators to run the CSI driver on nodes with an unsupported or unconfigured package manager. If node conformance needs to be disabled for any reason, these packages and services need to be installed and running prior to installing the HPE CSI Driver: iSCSI (not necessary when using FC) Multipath XFS programs/utilities NFSv4 client Package names and services vary greatly between different Linux distributions and it's the system administrator's duty to ensure these are available to the HPE CSI Driver.","title":"disableNodeConformance"},{"location":"csi_driver/operations.html#disablenodeconfiguration","text":"When disabling node configuration, the CSI node driver will not touch the node at all. 
Besides indirectly disabling node conformance, all attempts to write configuration files or manipulate services during runtime are disabled.","title":"disableNodeConfiguration"},{"location":"csi_driver/operations.html#mandatory_configuration","text":"These steps are REQUIRED for disabling either node configuration or conformance. On each current and future worker node in the cluster: # Don't let udev automatically scan targets(all luns) on Unit Attention. # This will prevent udev scanning devices which we are attempting to remove. if [ -f /lib/udev/rules.d/90-scsi-ua.rules ]; then sed -i 's/^[^#]*scan-scsi-target/#&/' /lib/udev/rules.d/90-scsi-ua.rules udevadm control --reload-rules fi","title":"Mandatory Configuration"},{"location":"csi_driver/operations.html#iscsi_configuration","text":"Skip this step if only Fibre Channel is being used. This step is only required when node configuration is disabled.","title":"iSCSI Configuration"},{"location":"csi_driver/operations.html#iscsidconf","text":"This example is taken from a Rocky Linux 9.2 node with the HPE parameters applied. Certain parameters may differ for other distributions of either iSCSI or the host OS. Note The location of this file varies between Linux and iSCSI distributions. Ensure iscsid is stopped. systemctl stop iscsid Download : /etc/iscsi/iscsid.conf iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket node.startup = manual node.leading_login = No node.session.timeo.replacement_timeout = 10 node.conn[0].timeo.login_timeout = 15 node.conn[0].timeo.logout_timeout = 15 node.conn[0].timeo.noop_out_interval = 5 node.conn[0].timeo.noop_out_timeout = 10 node.session.err_timeo.abort_timeout = 15 node.session.err_timeo.lu_reset_timeout = 30 node.session.err_timeo.tgt_reset_timeout = 30 node.session.initial_login_retry_max = 8 node.session.cmds_max = 512 node.session.queue_depth = 256 node.session.xmit_thread_priority = -20 node.session.iscsi.InitialR2T = No node.session.iscsi.ImmediateData = Yes node.session.iscsi.FirstBurstLength = 262144 node.session.iscsi.MaxBurstLength = 16776192 node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144 node.conn[0].iscsi.MaxXmitDataSegmentLength = 0 discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768 node.conn[0].iscsi.HeaderDigest = None node.session.nr_sessions = 1 node.session.reopen_max = 0 node.session.iscsi.FastAbort = Yes node.session.scan = auto Pro tip! When nodes are provisioned from some sort of templating system with iSCSI pre-installed, it's notoriously common that nodes are provisioned with identical IQNs. This will lead to device attachment problems that aren't obvious to the user. Make sure each node has a unique IQN. Ensure iscsid is running and enabled: systemctl enable --now iscsid Seealso Some Linux distributions requires the iscsi_tcp kernel module to be loaded. Where kernel modules are loaded varies between Linux distributions.","title":"iscsid.conf"},{"location":"csi_driver/operations.html#multipath_configuration","text":"This step is only required when node configuration is disabled.","title":"Multipath Configuration"},{"location":"csi_driver/operations.html#multipathconf","text":"The defaults section of the configuration file is merely a preference, make sure to leave the device and blacklist stanzas intact when potentially adding more entries from foreign devices. Note The location of this file varies between Linux and iSCSI distributions. Ensure multipathd is stopped. 
systemctl stop multipathd Download : /etc/multipath.conf defaults { user_friendly_names yes find_multipaths no uxsock_timeout 10000 } blacklist { devnode \"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*\" devnode \"^hd[a-z]\" device { product \".*\" vendor \".*\" } } blacklist_exceptions { property \"(ID_WWN|SCSI_IDENT_.*|ID_SERIAL)\" device { vendor \"Nimble\" product \"Server\" } device { product \"VV\" vendor \"3PARdata\" } device { vendor \"TrueNAS\" product \"iSCSI Disk\" } device { vendor \"FreeNAS\" product \"iSCSI Disk\" } } devices { device { product \"Server\" rr_min_io_rq 1 dev_loss_tmo infinity path_checker tur rr_weight uniform no_path_retry 30 path_selector \"service-time 0\" failback immediate fast_io_fail_tmo 5 vendor \"Nimble\" hardware_handler \"1 alua\" path_grouping_policy group_by_prio prio alua } device { path_grouping_policy group_by_prio path_checker tur rr_weight \"uniform\" prio alua failback immediate hardware_handler \"1 alua\" no_path_retry 18 fast_io_fail_tmo 10 path_selector \"round-robin 0\" vendor \"3PARdata\" dev_loss_tmo infinity detect_prio yes features \"0\" rr_min_io_rq 1 product \"VV\" } device { path_selector \"queue-length 0\" rr_weight priorities uid_attribute ID_SERIAL vendor \"TrueNAS\" product \"iSCSI Disk\" path_grouping_policy group_by_prio } device { path_selector \"queue-length 0\" hardware_handler \"1 alua\" rr_weight priorities uid_attribute ID_SERIAL vendor \"FreeNAS\" product \"iSCSI Disk\" path_grouping_policy group_by_prio } } Ensure multipathd is running and enabled: systemctl enable --now multipathd","title":"multipath.conf"},{"location":"csi_driver/operations.html#important_considerations","text":"While both disabling conformance and configuration parameters lends itself to a more predictable behaviour when deploying nodes from templates with less runtime configuration, it's still not a complete solution for having immutable nodes. The CSI node driver creates a unique identity for the node and stores it in /etc/hpe-storage/node.gob . This file must persist across reboots and redeployments of the node OS image. Immutable Linux distributions such as CoreOS persist the /etc directory, some don't.","title":"Important Considerations"},{"location":"csi_driver/operations.html#expose_nfs_services_outside_of_the_kubernetes_cluster","text":"In certain situations it's practical to expose the NFS exports outside the Kubernetes cluster to allow external applications to access data as part of an ETL (Extract, Transform, Load) pipeline or similar. Since this is an untested feature with questionable security standards, HPE does not recommend using this facility in production at this time. Reach out to your HPE account representative if this is a critical feature for your workloads. Danger The exports on the NFS servers does not have any network Access Control Lists (ACL) without root squash. Anyone with an NFS client that can reach the load balancer IP address have full access to the filesystem.","title":"Expose NFS Services Outside of the Kubernetes Cluster"},{"location":"csi_driver/operations.html#from_clusterip_to_loadbalancer","text":"The NFS server Service must be transformed into a \"LoadBalancer\". In this example we'll assume a \"RWX\" PersistentVolumeClaim named \"my-pvc-1\" and NFS resources deployed in the default Namespace , \"hpe-nfs\". 
Retrieve NFS UUID export UUID=$(kubectl get pvc my-pvc-1 -o jsonpath='{.spec.volumeName}{\"\\n\"}' | awk -Fpvc- '{print $2}') Patch the NFS Service : kubectl patch -n hpe-nfs svc/hpe-nfs-${UUID} -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' The Service will be assigned an external IP address by the load balancer deployed in the cluster. If there is no load balancer deployed, a MetalLB example is provided below.","title":"From ClusterIP to LoadBalancer"},{"location":"csi_driver/operations.html#metallb_example","text":"Deploying MetalLB is outside the scope of this document. In this example, MetalLB was deployed on OpenShift 4.16 (Kubernetes v1.29) using the Operator provided by Red Hat in the \"metallb-system\" Namespace . Determine the IP address range that will be assigned to the load balancers. In this example, 192.168.1.40 to 192.168.1.60 is being used. Note that the worker nodes in this cluster already have reachable IP addresses in the 192.168.1.0/24 network, which is a requirement. Create the MetalLB instances, IP address pool and Layer 2 advertisement. --- apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: hpe-nfs-servers spec: protocol: layer2 addresses: - 192.168.1.40-192.168.1.60 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - hpe-nfs-servers Shortly, the external IP address of the NFS Service patched in the previous steps should have an IP address assigned. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) hpe-nfs-UUID LoadBalancer 172.30.217.203 192.168.1.40 ","title":"MetalLB Example"},{"location":"csi_driver/operations.html#mount_the_nfs_server_from_an_nfs_client","text":"Mounting the NFS export externally is now possible. As root: mount -t nfs4 192.168.1.40:/export /mnt Note If the NFS server is rescheduled in the Kubernetes cluster, the load balancer IP address follows, and the client will recover and resume IO after a few minutes.","title":"Mount the NFS Server from an NFS Client"},{"location":"csi_driver/standalone_nfs.html","text":"Standalone NFS Server \u00b6 In certain situations is desirable to run the NFS Server Provisioner image without the dual PersistentVolumeClaim (PVC) semantic in a more static fashion on top of a PVC provisioned by a non-HPE CSI Driver StorageClass . Notice Since HPE CSI Driver for Kubernetes v2.4.1, this functionality is built into the CSI driver. See Using a Foreign StorageClass how to use it. Standalone NFS Server Limitations Prerequisites Create a Workspace Create an NFS Server environment.properties kustomization.yaml Change the default fsGroup Mounting the NFS Server Inline Declaration Static Provisioning Expand PVC Deleting the NFS Server Limitations \u00b6 The standalone NFS server is not part of the HPE CSI Driver and should be considered a standalone Kubernetes application altogether. The HPE CSI Driver NFS Server Provisioner NFS servers may co-exist on the same cluster and Namespace without risk of conflict but not recommended. The Pod Monitor which normally monitors Pods status for the \"NodeLost\" condition is not included with the standalone NFS server and recovery is at the mercy of the underlying storage platform and driver. Support is limited on the standalone NFS server and only available to select users. 
Prerequisites \u00b6 It's assumed during the creation steps that a Kubernetes cluster is available with enough permissions to deploy privileged Pods with SYS_ADMIN and DAC_READ_SEARCH capabilities. All steps are run in a terminal with kubectl and git in in the path. A default StorageClass declared on the cluster Worker nodes that will serve the NFS exports must be labeled with csi.hpe.com/hpe-nfs: \"true\" kubectl and Kubernetes v1.21 or newer Create a Workspace \u00b6 NFS server configurations are managed with the kustomize templating system. Clone this repository to get started and change working directory. git clone https://github.com/hpe-storage/scod cd scod/docs/csi_driver/examples/standalone_nfs In the current directory, various manifests and configuration directives exist to deploy and manage NFS servers. Run tree . in the current directory: . \u251c\u2500\u2500 base \u2502 \u251c\u2500\u2500 configmap.yaml \u2502 \u251c\u2500\u2500 deployment.yaml \u2502 \u251c\u2500\u2500 environment.properties \u2502 \u251c\u2500\u2500 kustomization.yaml \u2502 \u251c\u2500\u2500 pvc.yaml \u2502 \u251c\u2500\u2500 service.yaml \u2502 \u2514\u2500\u2500 values.yaml \u2514\u2500\u2500 overlays \u2514\u2500\u2500 example \u251c\u2500\u2500 deployment.yaml \u251c\u2500\u2500 environment.properties \u2514\u2500\u2500 kustomization.yaml 4 directories, 10 files Important The current directory is now the \"home\" for the remainder of this guide. Create an NFS Server \u00b6 Copy the \"example\" overlay into a new directory. In the examples \"my-server\" is used. cp -a overlays/example overlays/my-server Edit both \"environment.properties\" and \"kustomization.yaml\" in the newly created overlay. Also pay attention to if the remote Pods mounting the NFS export are running as a non-root user, if that's the case, the group ID is needed of those Pods (customizable per NFS server). environment.properties \u00b6 # This is the domain associated with worker node (not inter-cluster DNS) CLUSTER_NODE_DOMAIN_NAME=my-domain.example.com # The size of the backend RWO claim PERSISTENCE_SIZE=16Gi # Default resource limits for the NFS server NFS_SERVER_CPU_LIMIT=1 NFS_SERVER_MEMORY_LIMIT=2Gi The \"CLUSTER_NODE_DOMAIN_NAME\" variable refers to the DNS domain name that the worker node is resolvable in, not the Kubernetes cluster DNS. The \"PERSISTENCE_SIZE\" is the backend PVC size expressed in the same format accepted by a PVC . Configuring resource limits are optional but recommended for high performance workloads. kustomization.yaml \u00b6 Change the resource prefix in \"kustomization.yaml\" either with an editor or sed : sed -i\"\" 's/example-/my-server-/g' overlays/my-server/kustomization.yaml Seealso If the NFS server needs to be deployed in a different Namespace than the current, edit and uncomment the \"namespace\" parameter in overlays/my-server/kustomization.yaml . Change the default fsGroup \u00b6 The default \"fsGroup\" is mapped to \"nobody\" (gid=65534) which allows remote Pods run as the root user to write in the NFS export. This may not be desirable as best practices dictate that Pods should run with a user id larger than 99. To allow user Pods to write in the export, edit overlays/my-server/deployment.yaml and change the \"fsGroup\" to the corresponding gid running in the remote Pod . 
apiVersion: apps/v1 kind: Deployment metadata: name: hpe-nfs spec: template: spec: securityContext: fsGroup: 65534 fsGroupChangePolicy: OnRootMismatch Deploy the NFS server by issuing kubectl apply -k overlays/my-server : configmap/my-server-hpe-nfs-conf created configmap/my-server-local-conf-97898bftbh created service/my-server-hpe-nfs created persistentvolumeclaim/my-server-hpe-nfs created deployment.apps/my-server-hpe-nfs created Inspect the resources with kubectl get -k overlays/my-server : NAME DATA AGE configmap/my-server-hpe-nfs-conf 1 59s configmap/my-server-local-conf-97898bftbh 2 59s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/my-server-hpe-nfs ClusterIP 10.100.200.11 49000/TCP,2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,111/TCP,111/UDP,662/TCP,662/UDP,875/TCP,875/UDP 59s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-server-hpe-nfs Bound pvc-ae943116-d0af-4696-8b1b-1dcf4316bdc2 18Gi RWO vsphere-sc 58s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/my-server-hpe-nfs 1/1 1 1 59s Make a note of the IP address assigned to \"service/my-server-hpe-nfs\", that is the IP address needed to mount the NFS export. Tip If the Kubernetes cluster DNS service is resolvable from the worker node host OS, it possible to use the cluster DNS address to mount the Service , in this example that would be \"my-server-hpe-nfs.default.svc.cluster.local\". Mounting the NFS Server \u00b6 There are two ways to mount the NFS server. Inline declaration of where to find the NFS server and NFS Export Statically creating a PersistentVolume with the NFS server details and mount options and manually claiming the PV with a PVC using the .spec.volumeName parameter Inline Declaration \u00b6 This is the most elegant solution as it does not require any intermediary PVC or PV and directly refers to the NFS server within a workload stanza. This is an example from a StatefulSet workload controller having multiple replicas. ... spec: replicas: 3 template: ... spec: containers: volumeMounts: - name: vol mountPath: /vol ... volumes: - name: vol nfs: server: 10.100.200.11 path: /export Important Replace .spec.template.spec.volumes[].nfs.server with IP address from the actual Service IP address and not the examples. Static Provisioning \u00b6 Refer to the official Kubernetes documentation for the built-in NFS client on how to perform static provisioning of NFS PVs and PVCs . Expand PVC \u00b6 If the StorageClass and underlying CSI driver supports volume expansion, simply edit overlays/my-server/environment.properties with the new (larger) size and issue kubectl apply -k overlays/my-server to expand the volume. Deleting the NFS Server \u00b6 Ensure no workloads have active mounts against the NFS server Service . If there are, those Pods will be stuck indefinitely. Run kubectl delete -k overlays/my-server : configmap \"my-server-hpe-nfs-conf\" deleted configmap \"my-server-local-conf-97898bftbh\" deleted service \"my-server-hpe-nfs\" deleted persistentvolumeclaim \"my-server-hpe-nfs\" deleted deployment.apps \"my-server-hpe-nfs\" deleted Caution Unless the StorageClass \"reclaimPolicy\" is set to \"Retain\". 
The underlying PV will be deleted from the cluster and data needs to be restored from backups if needed.","title":"Standalone NFS Server"},{"location":"csi_driver/standalone_nfs.html#standalone_nfs_server","text":"In certain situations is desirable to run the NFS Server Provisioner image without the dual PersistentVolumeClaim (PVC) semantic in a more static fashion on top of a PVC provisioned by a non-HPE CSI Driver StorageClass . Notice Since HPE CSI Driver for Kubernetes v2.4.1, this functionality is built into the CSI driver. See Using a Foreign StorageClass how to use it. Standalone NFS Server Limitations Prerequisites Create a Workspace Create an NFS Server environment.properties kustomization.yaml Change the default fsGroup Mounting the NFS Server Inline Declaration Static Provisioning Expand PVC Deleting the NFS Server","title":"Standalone NFS Server"},{"location":"csi_driver/standalone_nfs.html#limitations","text":"The standalone NFS server is not part of the HPE CSI Driver and should be considered a standalone Kubernetes application altogether. The HPE CSI Driver NFS Server Provisioner NFS servers may co-exist on the same cluster and Namespace without risk of conflict but not recommended. The Pod Monitor which normally monitors Pods status for the \"NodeLost\" condition is not included with the standalone NFS server and recovery is at the mercy of the underlying storage platform and driver. Support is limited on the standalone NFS server and only available to select users.","title":"Limitations"},{"location":"csi_driver/standalone_nfs.html#prerequisites","text":"It's assumed during the creation steps that a Kubernetes cluster is available with enough permissions to deploy privileged Pods with SYS_ADMIN and DAC_READ_SEARCH capabilities. All steps are run in a terminal with kubectl and git in in the path. A default StorageClass declared on the cluster Worker nodes that will serve the NFS exports must be labeled with csi.hpe.com/hpe-nfs: \"true\" kubectl and Kubernetes v1.21 or newer","title":"Prerequisites"},{"location":"csi_driver/standalone_nfs.html#create_a_workspace","text":"NFS server configurations are managed with the kustomize templating system. Clone this repository to get started and change working directory. git clone https://github.com/hpe-storage/scod cd scod/docs/csi_driver/examples/standalone_nfs In the current directory, various manifests and configuration directives exist to deploy and manage NFS servers. Run tree . in the current directory: . \u251c\u2500\u2500 base \u2502 \u251c\u2500\u2500 configmap.yaml \u2502 \u251c\u2500\u2500 deployment.yaml \u2502 \u251c\u2500\u2500 environment.properties \u2502 \u251c\u2500\u2500 kustomization.yaml \u2502 \u251c\u2500\u2500 pvc.yaml \u2502 \u251c\u2500\u2500 service.yaml \u2502 \u2514\u2500\u2500 values.yaml \u2514\u2500\u2500 overlays \u2514\u2500\u2500 example \u251c\u2500\u2500 deployment.yaml \u251c\u2500\u2500 environment.properties \u2514\u2500\u2500 kustomization.yaml 4 directories, 10 files Important The current directory is now the \"home\" for the remainder of this guide.","title":"Create a Workspace"},{"location":"csi_driver/standalone_nfs.html#create_an_nfs_server","text":"Copy the \"example\" overlay into a new directory. In the examples \"my-server\" is used. cp -a overlays/example overlays/my-server Edit both \"environment.properties\" and \"kustomization.yaml\" in the newly created overlay. 
Also pay attention to whether the remote Pods mounting the NFS export are running as a non-root user; if that's the case, the group ID of those Pods is needed (customizable per NFS server).\",\"title\":\"Create an NFS Server\"},{\"location\":\"csi_driver/standalone_nfs.html#environmentproperties\",\"text\":\"# This is the domain associated with worker node (not inter-cluster DNS) CLUSTER_NODE_DOMAIN_NAME=my-domain.example.com # The size of the backend RWO claim PERSISTENCE_SIZE=16Gi # Default resource limits for the NFS server NFS_SERVER_CPU_LIMIT=1 NFS_SERVER_MEMORY_LIMIT=2Gi The \"CLUSTER_NODE_DOMAIN_NAME\" variable refers to the DNS domain name that the worker node is resolvable in, not the Kubernetes cluster DNS. The \"PERSISTENCE_SIZE\" is the backend PVC size expressed in the same format accepted by a PVC . Configuring resource limits is optional but recommended for high performance workloads.\",\"title\":\"environment.properties\"},{\"location\":\"csi_driver/standalone_nfs.html#kustomizationyaml\",\"text\":\"Change the resource prefix in \"kustomization.yaml\" either with an editor or sed : sed -i\"\" 's/example-/my-server-/g' overlays/my-server/kustomization.yaml Seealso If the NFS server needs to be deployed in a different Namespace than the current, edit and uncomment the \"namespace\" parameter in overlays/my-server/kustomization.yaml .\",\"title\":\"kustomization.yaml\"},{\"location\":\"csi_driver/standalone_nfs.html#change_the_default_fsgroup\",\"text\":\"The default \"fsGroup\" is mapped to \"nobody\" (gid=65534) which allows remote Pods running as the root user to write in the NFS export. This may not be desirable as best practices dictate that Pods should run with a user id larger than 99. To allow user Pods to write in the export, edit overlays/my-server/deployment.yaml and change the \"fsGroup\" to the corresponding gid used in the remote Pod . apiVersion: apps/v1 kind: Deployment metadata: name: hpe-nfs spec: template: spec: securityContext: fsGroup: 65534 fsGroupChangePolicy: OnRootMismatch Deploy the NFS server by issuing kubectl apply -k overlays/my-server : configmap/my-server-hpe-nfs-conf created configmap/my-server-local-conf-97898bftbh created service/my-server-hpe-nfs created persistentvolumeclaim/my-server-hpe-nfs created deployment.apps/my-server-hpe-nfs created Inspect the resources with kubectl get -k overlays/my-server : NAME DATA AGE configmap/my-server-hpe-nfs-conf 1 59s configmap/my-server-local-conf-97898bftbh 2 59s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/my-server-hpe-nfs ClusterIP 10.100.200.11 49000/TCP,2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,111/TCP,111/UDP,662/TCP,662/UDP,875/TCP,875/UDP 59s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-server-hpe-nfs Bound pvc-ae943116-d0af-4696-8b1b-1dcf4316bdc2 18Gi RWO vsphere-sc 58s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/my-server-hpe-nfs 1/1 1 1 59s Make a note of the IP address assigned to \"service/my-server-hpe-nfs\", as that is the IP address needed to mount the NFS export. Tip If the Kubernetes cluster DNS service is resolvable from the worker node host OS, it's possible to use the cluster DNS address to mount the Service , in this example that would be \"my-server-hpe-nfs.default.svc.cluster.local\".\",\"title\":\"Change the default fsGroup\"},{\"location\":\"csi_driver/standalone_nfs.html#mounting_the_nfs_server\",\"text\":\"There are two ways to mount the NFS server. 
Inline declaration of where to find the NFS server and NFS Export Statically creating a PersistentVolume with the NFS server details and mount options and manually claiming the PV with a PVC using the .spec.volumeName parameter","title":"Mounting the NFS Server"},{"location":"csi_driver/standalone_nfs.html#inline_declaration","text":"This is the most elegant solution as it does not require any intermediary PVC or PV and directly refers to the NFS server within a workload stanza. This is an example from a StatefulSet workload controller having multiple replicas. ... spec: replicas: 3 template: ... spec: containers: volumeMounts: - name: vol mountPath: /vol ... volumes: - name: vol nfs: server: 10.100.200.11 path: /export Important Replace .spec.template.spec.volumes[].nfs.server with IP address from the actual Service IP address and not the examples.","title":"Inline Declaration"},{"location":"csi_driver/standalone_nfs.html#static_provisioning","text":"Refer to the official Kubernetes documentation for the built-in NFS client on how to perform static provisioning of NFS PVs and PVCs .","title":"Static Provisioning"},{"location":"csi_driver/standalone_nfs.html#expand_pvc","text":"If the StorageClass and underlying CSI driver supports volume expansion, simply edit overlays/my-server/environment.properties with the new (larger) size and issue kubectl apply -k overlays/my-server to expand the volume.","title":"Expand PVC"},{"location":"csi_driver/standalone_nfs.html#deleting_the_nfs_server","text":"Ensure no workloads have active mounts against the NFS server Service . If there are, those Pods will be stuck indefinitely. Run kubectl delete -k overlays/my-server : configmap \"my-server-hpe-nfs-conf\" deleted configmap \"my-server-local-conf-97898bftbh\" deleted service \"my-server-hpe-nfs\" deleted persistentvolumeclaim \"my-server-hpe-nfs\" deleted deployment.apps \"my-server-hpe-nfs\" deleted Caution Unless the StorageClass \"reclaimPolicy\" is set to \"Retain\". The underlying PV will be deleted from the cluster and data needs to be restored from backups if needed.","title":"Deleting the NFS Server"},{"location":"csi_driver/using.html","text":"Overview \u00b6 At this point the CSI driver and CSP should be installed and configured. Important Most examples below assumes there's a Secret named \"hpe-backend\" in the \"hpe-storage\" Namespace . Learn how to add Secrets in the Deployment section . Overview PVC Access Modes Enabling CSI Snapshots Base StorageClass Parameters Enabling iSCSI CHAP Cluster-wide iSCSI CHAP Credentials Per StorageClass iSCSI CHAP Credentials Provisioning Concepts Create a PersistentVolumeClaim from a StorageClass Ephemeral Inline Volumes Raw Block Volumes Using CSI Snapshots Volume Groups Snapshot Groups Expanding PVCs Using PVC Overrides Using Volume Mutations Using the NFS Server Provisioner Using a Foreign StorageClass Example StorageClass using a foreign StorageClass Limitations and Considerations for the NFS Server Provisioner Using Volume Encryption Topology and volumeBindingMode Label Compute Nodes Create StorageClass with Topology Information Static Provisioning Further Reading Tip If you're familiar with the basic concepts of persistent storage on Kubernetes and are looking for an overview of example YAML declarations for different object types supported by the HPE CSI driver, visit the source code repo on GitHub. PVC Access Modes \u00b6 The HPE CSI Driver for Kubernetes is primarily a ReadWriteOnce (RWO) CSI implementation for block based storage. 
The CSI driver also supports ReadWriteMany (RWX) and ReadOnlyMany (ROX) using a NFS Server Provisioner. It's enabled by transparently deploying a NFS server for each Persistent Volume Claim (PVC) against a StorageClass where it's enabled, that in turn is backed by a traditional RWO claim. Most of the examples featured on SCOD are illustrated as RWO using block based storage, but many of the examples apply in the majority of use cases. Access Mode Abbreviation Use Case ReadWriteOnce RWO For high performance Pods where access to the PVC is exclusive to one host at a time. May use either block based storage or the NFS Server Provisioner where connectivity to the data fabric is limited to a few worker nodes in the Kubernetes cluster. ReadWriteOncePod RWOP Exclusive access by a single Pod . Not currently supported by the HPE CSI Driver. ReadWriteMany RWX For shared filesystems where multiple Pods in the same Namespace need simultaneous access to a PVC across multiple nodes. ReadOnlyMany ROX Read-only representation of RWX. ReadWriteOnce and access by multiple Pods Pods that require access to the same \"ReadWriteOnce\" (RWO) PVC needs to reside on the same node and Namespace by using selectors or affinity scheduling rules applied when deployed. If not configured correctly, the Pod will fail to start and will throw a \"Multi-Attach\" error in the event log if the PVC is already attached to a Pod that has been scheduled on a different node within the cluster. The NFS Server Provisioner is not enabled by the default StorageClass and needs a custom StorageClass . The following sections are tailored to help understand the NFS Server Provisioner capabilities. Using the NFS Server Provisioner NFS Server Provisioner StorageClass parameters Diagnosing the NFS Server Provisioner issues Limitations and Considerations for the NFS Server Provisioner Enabling CSI Snapshots \u00b6 Support for VolumeSnapshotClasses and VolumeSnapshots is available from Kubernetes 1.17+. The snapshot CRDs and the common snapshot controller needs to be installed manually. As per Kubernetes TAG Storage, these should not be installed as part of a CSI driver and should be deployed by the Kubernetes cluster vendor or user. Ensure the snapshot CRDs and common snapshot controller hasn't been installed already. kubectl get crd volumesnapshots.snapshot.storage.k8s.io \\ volumesnapshotcontents.snapshot.storage.k8s.io \\ volumesnapshotclasses.snapshot.storage.k8s.io Vendors may package, name and deploy the common snapshot controller using their own naming conventions. Run the command below and look for workload names that contain \"snapshot\". kubectl get sts,deploy -A If no prior CRDs or controllers exist, install the snapshot CRDs and common snapshot controller (once per Kubernetes cluster, independent of any CSI drivers). 
HPE CSI Driver v2.5.0 # Kubernetes 1.27-1.30 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v8.0.1 -b hpe-csi-driver-v2.5.0 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.4.2 # Kubernetes 1.26-1.29 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.2 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.4.1 # Kubernetes 1.26-1.29 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.1 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.4.0 # Kubernetes 1.25-1.28 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v6.2.2 -b hpe-csi-driver-v2.4.0 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.3.0 # Kubernetes 1.23-1.26 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v5.0.1 -b hpe-csi-driver-v2.3.0 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- Tip The provisioning section contains examples on how to create VolumeSnapshotClass and VolumeSnapshot objects. Base StorageClass Parameters \u00b6 Each CSP has its own set of unique parameters to control the provisioning behavior. These examples serve as a base StorageClass example for each version of Kubernetes. See the respective CSP for more elaborate examples. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: hpe-standard provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" reclaimPolicy: Delete allowVolumeExpansion: true Important Replace \"hpe-backend\" with a Secret relevant to the backend being referenced. Common HPE CSI Driver StorageClass parameters across CSPs. Parameter String Description accessProtocol Text The access protocol to use when accessing the persistent volume (\"fc\" or \"iscsi\"). Default: \"iscsi\" chapSecretName Text Name of Secret to use for iSCSI CHAP. chapSecretNamespace Text Namespace of Secret to use for iSCSI CHAP. description 1 Text Text to be added to the volume PV metadata on the backend CSP. Default: \"\" csi.storage.k8s.io/fstype Text Filesystem to format new volumes with. 
XFS is preferred, ext3, ext4 and btrfs is supported. Defaults to \"ext4\" if omitted. fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. fsCreateOptions Text A string to be passed to the mkfs command. These flags are opaque to CSI and are therefore not validated. To protect the node, only the following characters are allowed: [a-zA-Z0-9=, \\-] . fsRepair Boolean When set to \"true\", if a mount fails and filesystem corruption is detected, this parameter will control if an actual repair will be attempted. Default: \"false\". Note: fsRepair is unable to detect or remedy corrupted filesystems that are already mounted. Data loss may occur during the attempt to repair the filesystem. nfsResources Boolean When set to \"true\", requests against the StorageClass will create resources for the NFS Server Provisioner ( Deployment , RWO PVC and Service ). Required parameter for ReadWriteMany and ReadOnlyMany accessModes. Default: \"false\" nfsForeignStorageClass Text Provision NFS servers on PVCs from a different StorageClass . See Using a Foreign StorageClass nfsNamespace Text Resources are by default created in the \"hpe-nfs\" Namespace . If CSI VolumeSnapshotClass and dataSource functionality is required on the requesting claim, requesting and backing PVC need to exist in the requesting Namespace . A value of \"csi.storage.k8s.io/pvc/namespace\" will provision resources in the requesting PVC Namespace . nfsNodeSelector Text Customize the nodeSelector label value for the NFS Pod . The default behavior is to omit the nodeSelector . nfsMountOptions Text Customize NFS mount options for the Pods to the server Deployment . Uses mount command defaults from the node. nfsProvisionerImage Text Customize provisioner image for the server Deployment . Default: Official build from \"hpestorage/nfs-provisioner\" repo nfsResourceRequestsCpuM Text Specify CPU requests for the server Deployment in milli CPU. Default: \"500m\". Example: \"4000m\" nfsResourceRequestsMemoryMi Text Specify memory requests (in megabytes) for the server Deployment . Default: \"512Mi\". Example: \"4096Mi\". nfsResourceLimitsCpuM Text Specify CPU limits for the server Deployment in milli CPU. Default: \"1000m\". Example: \"4000m\" nfsResourceLimitsMemoryMi Text Specify memory limits (in megabytes) for the server Deployment . Default: \"2048Mi\". Example: \"500Mi\". Recommended minimum: \"2048Mi\". hostEncryption Boolean Direct the CSI driver to invoke Linux Unified Key Setup (LUKS) via the dm-crypt kernel module. Default: \"false\". See Volume encryption to learn more. hostEncryptionSecretName Text Name of the Secret to use for the volume encryption. Mandatory if \"hostEncryption\" is enabled. Default: \"\" hostEncryptionSecretNamespace Text Namespace where to find \"hostEncryptionSecretName\". Default: \"\" 1 = Parameter is mutable using the CSI Volume Mutator . Note All common HPE CSI Driver parameters are optional. Enabling iSCSI CHAP \u00b6 Familiarize yourself with the iSCSI CHAP Considerations before proceeding. This section describes how to enable iSCSI CHAP with HPE CSI Driver 2.5.0 and later. Create an iSCSI CHAP Secret . The referenced CHAP account does not need to exist on the storage backend, it will be created by the CSP if it doesn't exist. 
apiVersion: v1 kind: Secret metadata: name: my-chap-secret namespace: hpe-storage stringData: # Up to 64 characters including \\-:., must start with an alpha-numeric character. chapUser: \"my-chap-user\" # Between 12 to 16 alpha-numeric characters. chapPassword: \"my-chap-password\" Once the Secret has been created, there are two methods available to use it depending on the situation, cluster-wide or per StorageClass . Cluster-wide iSCSI CHAP Credentials \u00b6 The cluster-wide iSCSI CHAP credentials will be used by all iSCSI-based PersistentVolumes regardless of backend and StorageClass . The CHAP Secret is simply referenced during install of the HPE CSI Driver for Kubernetes Helm Chart. The Secret and Namespace needs to exist prior to install. Example: helm install my-hpe-csi-driver -n hpe-storage \\ hpe-storage/hpe-csi-driver \\ --set iscsi.chapSecretName=my-chap-secret Important Once a PersistentVolume has been provisioned with cluster-wide iSCSI CHAP credentials it's not possible to switch over to per StorageClass iSCSI CHAP credentials. If CSI driver 2.4.2 or earlier has been used, cluster-wide iSCSI CHAP credentials is the only way to provide the credentials for volumes provisioned with 2.4.2 or earlier. Per StorageClass iSCSI CHAP Credentials \u00b6 The CHAP Secret may be referenced in a StorageClass . apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: hpe-standard provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" chapSecretNamespace: hpe-storage chapSecretName: my-chap-secret reclaimPolicy: Delete allowVolumeExpansion: true Warning The iSCSI CHAP credentials are in reality per iSCSI Target. Do NOT create multiple StorageClasses referencing different CHAP Secrets with different credentials for the same backend. It will result in a data outage with conflicting sessions. Ensure the same Secret is referenced in all StorageClasses using a particular backend. Provisioning Concepts \u00b6 These instructions are provided as an example on how to use the HPE CSI Driver with a CSP supported by HPE. Create a PersistentVolumeClaim from a StorageClass Ephemeral inline volumes Raw Block Volumes Using CSI Snapshots Volume Groups Snapshot Groups Expanding PVCs Using PVC Overrides Using Volume Mutations Using Volume Encryption Using the NFS Server Provisioner Using volume encryption Topology and volumeBindingMode Static Provisioning New to Kubernetes? There's a basic tutorial of how dynamic provisioning of persistent storage on Kubernetes works in the Video Gallery . Create a PersistentVolumeClaim from a StorageClass \u00b6 The below YAML declarations are meant to be created with kubectl create . 
Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) To get started, create a StorageClass API object referencing the CSI driver Secret relevant to the backend. These examples are for Kubernetes 1.15+ apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-scod provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" accessProtocol: iscsi reclaimPolicy: Delete allowVolumeExpansion: true Create a PersistentVolumeClaim . This object declaration ensures a PersistentVolume is created and provisioned on your behalf, make sure to reference the correct .spec.storageClassName : apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-first-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi storageClassName: hpe-scod Note In most environments, there is a default StorageClass declared on the cluster. In such a scenario, the .spec.storageClassName can be omitted. The default StorageClass is controlled by an annotation: .metadata.annotations.storageclass.kubernetes.io/is-default-class set to either \"true\" or \"false\" . After the PersistentVolumeClaim has been declared, check that a new PersistentVolume is created based on your claim: kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE pvc-13336da3-7... 32Gi RWO Delete Bound default/my-first-pvc hpe-scod 3s The above output means that the HPE CSI Driver successfully provisioned a new volume. The volume is not attached to any node yet. It will only be attached to a node if a scheduled workload requests the PersistentVolumeClaim . Now, let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted according to the specification. kind: Pod apiVersion: v1 metadata: name: my-pod spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data - name: pod-datelog-2 image: debian command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data volumes: - name: export1 persistentVolumeClaim: claimName: my-first-pvc Check if the Pod is running successfully. kubectl get pod my-pod NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 2m29s Tip A simple Pod does not provide any automatic recovery if the node the Pod is scheduled on crashes or become unresponsive. Please see the official Kubernetes documentation for different workload types that provide automatic recovery. 
A shortlist of recommended workload types that are suitable for persistent storage is available in this blog post and best practices are outlined in this blog post . Ephemeral Inline Volumes \u00b6 It's possible to declare a volume \"inline\" a Pod specification. The volume is ephemeral and only persists as long as the Pod is running. If the Pod gets rescheduled, deleted or upgraded, the volume is deleted and a new volume gets provisioned if it gets restarted. Ephemeral inline volumes are not associated with a StorageClass , hence a Secret needs to be provided inline with the volume. Warning Allowing user Pods to access the CSP Secret gives them the same privileges on the backend system as the HPE CSI Driver. There are two ways to declare the Secret with ephemeral inline volumes, either the Secret is in the same Namespace as the workload being declared or it resides in a foreign Namespace . Local Secret : apiVersion: v1 kind: Pod metadata: name: my-pod-inline-mount-1 spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: my-volume-1 mountPath: /data volumes: - name: my-volume-1 csi: driver: csi.hpe.com nodePublishSecretRef: name: hpe-backend fsType: ext3 volumeAttributes: csi.storage.k8s.io/ephemeral: \"true\" accessProtocol: \"iscsi\" size: \"5Gi\" Foreign Secret : apiVersion: v1 kind: Pod metadata: name: my-pod-inline-mount-2 spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: my-volume-1 mountPath: /data volumes: - name: my-volume-1 csi: driver: csi.hpe.com fsType: ext3 volumeAttributes: csi.storage.k8s.io/ephemeral: \"true\" inline-volume-secret-name: hpe-backend inline-volume-secret-namespace: hpe-storage accessProtocol: \"iscsi\" size: \"7Gi\" The parameters used in the examples are the bare minimum required parameters. Any parameters supported by the HPE CSI Driver and backend CSP may be used for ephemeral inline volumes. See the base StorageClass parameters or the respective CSP being used. Seealso For more elaborate use cases around ephemeral inline volumes, check out the tutorial on HPE Developer: Using Ephemeral Inline Volumes on Kubernetes Raw Block Volumes \u00b6 The default volumeMode for a PersistentVolumeClaim is Filesystem . If a raw block volume is desired, volumeMode needs to be set to Block . No filesystem will be created. Example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-block spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi storageClassName: hpe-scod volumeMode: Block Note The accessModes may be set to ReadWriteOnce , ReadWriteMany or ReadOnlyMany . It's expected that the application handles read/write IO, volume locking and access in the event of concurrent block access from multiple nodes. Consult the Alletra 6000 CSP documentation if using ReadWriteMany raw block volumes with FC on Nimble, Alletra 5000 or 6000. 
Mapping the device in a Pod specification is slightly different than using regular filesystems as a volumeDevices section is added instead of a volumeMounts stanza: apiVersion: v1 kind: Pod metadata: name: my-pod-block spec: containers: - name: my-null-pod image: fedora:31 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: - name: data devicePath: /dev/xvda volumes: - name: data persistentVolumeClaim: claimName: my-pvc-block Seealso There's an in-depth tutorial available on HPE Developer that covers raw block volumes: Using Raw Block Volumes on Kubernetes Using CSI Snapshots \u00b6 CSI introduces snapshots as native objects in Kubernetes that allows end-users to provision VolumeSnapshot objects from an existing PersistentVolumeClaim . New PVCs may then be created using the snapshot as a source. Tip Ensure CSI snapshots are enabled . There's a tutorial in the Video Gallery on how to use CSI snapshots and clones. Start by creating a VolumeSnapshotClass referencing the Secret and defining additional snapshot parameters. apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: hpe-snapshot annotations: snapshot.storage.kubernetes.io/is-default-class: \"true\" driver: csi.hpe.com deletionPolicy: Delete parameters: description: \"Snapshot created by the HPE CSI Driver\" csi.storage.k8s.io/snapshotter-secret-name: hpe-backend csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage csi.storage.k8s.io/snapshotter-list-secret-name: hpe-backend csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage Note Container Storage Providers may have optional parameters to the VolumeSnapshotClass . Create a VolumeSnapshot . This will create a new snapshot of the volume. apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: my-snapshot spec: source: persistentVolumeClaimName: my-pvc Tip If a specific VolumeSnapshotClass is desired, use .spec.volumeSnapshotClassName to call it out. Check that a new VolumeSnapshot is created based on your claim: kubectl describe volumesnapshot my-snapshot Name: my-snapshot Namespace: default ... Status: Creation Time: 2019-05-22T15:51:28Z Ready: true Restore Size: 32Gi It's now possible to create a new PersistentVolumeClaim from the VolumeSnapshot . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-from-snapshot spec: dataSource: name: my-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteOnce resources: requests: storage: 32Gi Important The size in .spec.resources.requests.storage must match the .spec.dataSource size. The .data.dataSource attribute may also clone PersistentVolumeClaim directly, without creating a VolumeSnapshot . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-from-pvc spec: dataSource: name: my-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 32Gi Again, the size in .spec.resources.requests.storage must match the source PersistentVolumeClaim . This can get sticky from an automation perspective as volume expansion is being used on the source volume. It's recommended to inspect the source PersistentVolumeClaim or VolumeSnapshot size prior to creating a clone. Learn more For a more comprehensive tutorial on how to use snapshots and clones with CSI on Kubernetes 1.17, see HPE CSI Driver for Kubernetes: Snapshots, Clones and Volume Expansion on HPE Developer. 
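As a minimal sketch of the size inspection mentioned above (assuming the my-snapshot and my-pvc objects from the earlier examples), the value to copy into the clone's .spec.resources.requests.storage can be read with kubectl: kubectl get volumesnapshot my-snapshot -o jsonpath='{.status.restoreSize}' kubectl get pvc my-pvc -o jsonpath='{.status.capacity.storage}' Both commands print a quantity such as 32Gi that the new PersistentVolumeClaim should request.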
Volume Groups \u00b6 PersistentVolumeClaims created in a particular Namespace from the same storage backend may be grouped together in a VolumeGroup . A VolumeGroup is what may be known as a \"consistency group\" in other storage infrastructure systems. This allows certain attributes to be managed on an abstract group, and the attributes then apply to all member volumes in the group instead of being managed on each volume individually. One such aspect is creating snapshots with referential integrity between volumes, or setting a performance attribute that is accounted for on the logical group rather than the individual volume. Tip A tutorial on how to use VolumeGroups and SnapshotGroups is available in the Video Gallery . Before grouping PersistentVolumeClaims, a VolumeGroupClass needs to be created. It needs to reference a Secret that corresponds to the same backend the PersistentVolumeClaims were created on. A VolumeGroupClass is a cluster resource that needs administrative privileges to create. apiVersion: storage.hpe.com/v1 kind: VolumeGroupClass metadata: name: my-volume-group-class provisioner: csi.hpe.com deletionPolicy: Delete parameters: description: \"HPE CSI Driver for Kubernetes Volume Group\" csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage Note The VolumeGroupClass .parameters may contain CSP specific parameters. Check the documentation of the Container Storage Provider being used. Once the VolumeGroupClass is in place, users may create VolumeGroups . VolumeGroups are, just like PersistentVolumeClaims , part of a Namespace and both resources need to be in the same Namespace for the grouping to be successful. apiVersion: storage.hpe.com/v1 kind: VolumeGroup metadata: name: my-volume-group spec: volumeGroupClassName: my-volume-group-class Depending on the CSP being used, the VolumeGroup may reference an object that corresponds to the Kubernetes API object. It's not until users annotate their PersistentVolumeClaims that the VolumeGroup gets populated. Adding a PersistentVolumeClaim to a VolumeGroup : kubectl annotate pvc/my-pvc csi.hpe.com/volume-group=my-volume-group Removing a PersistentVolumeClaim from a VolumeGroup : kubectl annotate pvc/my-pvc csi.hpe.com/volume-group- Tip While adding the PersistentVolumeClaim to the VolumeGroup is instant, removal requires one reconciliation loop and might not immediately be reflected on the VolumeGroup object. Snapshot Groups \u00b6 Being able to create snapshots of the VolumeGroup requires the CSI external-snapshotter to be installed and also requires a VolumeSnapshotClass configured using the same storage backend as the VolumeGroup . Once those pieces are in place, a SnapshotGroupClass needs to be created. SnapshotGroupClasses are cluster objects created by an administrator. apiVersion: storage.hpe.com/v1 kind: SnapshotGroupClass metadata: name: my-snapshot-group-class snapshotter: csi.hpe.com deletionPolicy: Delete parameters: csi.hpe.com/snapshot-group-snapshotter-secret-name: hpe-backend csi.hpe.com/snapshot-group-snapshotter-secret-namespace: hpe-storage Creating a SnapshotGroup is later performed using the VolumeGroup as a source while referencing a SnapshotGroupClass and a VolumeSnapshotClass . 
apiVersion: storage.hpe.com/v1 kind: SnapshotGroup metadata: name: my-snapshot-group-1 spec: source: kind: VolumeGroup apiGroup: storage.hpe.com name: my-volume-group snapshotGroupClassName: my-snapshot-group-class volumeSnapshotClassName: hpe-snapshot Once the SnapshotGroup has been successfully created, the individual VolumeSnapshots are now available in the Namespace . List VolumeSnapshots : kubectl get volumesnapshots If no VolumeSnapshots are being enumerated, check the diagnostics on how to check the component logs and such. New feature! Volume Groups and Snapshot Groups got introduced in HPE CSI Driver for Kubernetes 1.4.0. Expanding PVCs \u00b6 To perform expansion operations on Kubernetes 1.14+, you must enhance your StorageClass with the .allowVolumeExpansion: true key. Please see base StorageClass parameters for additional information. Then, a volume provisioned by a StorageClass with expansion attributes may have its PersistentVolumeClaim expanded by altering the .spec.resources.requests.storage key of the PersistentVolumeClaim . This may be done by the kubectl patch command. kubectl patch pvc/my-pvc --patch '{\"spec\": {\"resources\": {\"requests\": {\"storage\": \"64Gi\"}}}}' persistentvolumeclaim/my-pvc patched The new PersistentVolumeClaim size may be observed with kubectl get pvc/my-pvc after a few moments. Using PVC Overrides \u00b6 The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim . Define the parameters allowed to be overridden in the StorageClass by setting the allowOverrides parameter: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-scod-override provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage description: \"Volume provisioned by the HPE CSI Driver\" accessProtocol: iscsi allowOverrides: description,accessProtocol The end-user may now control those parameters (the StorageClass provides the default values). apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-override annotations: csi.hpe.com/description: \"This is my custom description\" csi.hpe.com/accessProtocol: fc spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: hpe-scod-override Using Volume Mutations \u00b6 The HPE CSI Driver (version 1.3.0 and later) allows the CSP backend volume to be mutated by annotating the PersistentVolumeClaim . Define the parameters allowed to be mutated in the StorageClass by setting the allowMutations parameter. Tip There's a tutorial available on YouTube accessible through the Video Gallery on how to use volume mutations to adapt stateful workloads with the HPE CSI Driver. Important In order to mutate a StorageClass parameter it needs to have a default value set in the StorageClass . In the example below we'll allow mutatating \"description\". If the parameter \"description\" wasn't set when the PersistentVolume was provisioned, no subsequent mutations are allowed. 
The CSP may set defaults for certain parameters during provisioning, if those are mutable, the mutation will be performed. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-scod-mutation provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage description: \"Volume provisioned by the HPE CSI Driver\" allowMutations: description Note The allowMutations parameter is a comma separated list of values defined by each of the CSPs parameters, except the description parameter, which is common across all CSPs. See the documentation for each CSP on what parameters are mutable. The end-user may now control those parameters by editing or patching the PersistentVolumeClaim . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-mutation annotations: csi.hpe.com/description: \"My description needs to change\" spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: hpe-scod-mutation Good to know As the .spec.csi.volumeAttributes on the PersistentVolume are immutable, the mutations performed on the backend volume are also annotated on the PersistentVolume object. Using the NFS Server Provisioner \u00b6 Enabling the NFS Server Provisioner to allow \"ReadWriteMany\" and \"ReadOnlyMany\" access mode for a PVC is straightforward. Create a new StorageClass and set .parameters.nfsResources to \"true\" . Any subsequent claim to the StorageClass will create a NFS server Deployment on the cluster with the associated objects running on top of a \"ReadWriteOnce\" PVC . Any \"RWO\" claim made against the StorageClass will also create a NFS server Deployment . This allows diverse connectivity options among the Kubernetes worker nodes as the HPE CSI Driver will look for nodes labelled csi.hpe.com/hpe-nfs=true (or using a custom value specified in .parameters.nfsNodeSelector ) before submitting the workload for scheduling. This allows dedicated NFS worker nodes without user workloads using taints and tolerations. The NFS server Pod is armed with a csi.hpe.com/hpe-nfs toleration. It's required to taint dedicated NFS worker nodes if they truly need to be dedicated. By default, the NFS Server Provisioner deploy resources in the \"hpe-nfs\" Namespace . This makes it easy to manage and diagnose. However, to use CSI data management capabilities ( VolumeSnapshots and .spec.dataSource ) on the PVCs, the NFS resources need to be deployed in the same Namespace as the \"RWX\"/\"ROX\" requesting PVC . This is controlled by the nfsNamespace StorageClass parameter. See base StorageClass parameters for more information. Tip A comprehensive tutorial is available on HPE Developer on how to get started with the NFS Server Provisioner and the HPE CSI Driver for Kubernetes. There's also a brief tutorial available in the Video Gallery . Example StorageClass with \"nfsResources\" enabled. No CSP specific parameters for clarity. 
apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-standard-file provisioner: csi.hpe.com parameters: csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"NFS backend volume created by the HPE CSI Driver for Kubernetes\" csi.storage.k8s.io/fstype: ext4 nfsResources: \"true\" reclaimPolicy: Delete allowVolumeExpansion: true Note Using XFS may result in stale NFS handles during node failures and outages. Always use ext4 for NFS PVCs . While \"allowVolumeExpansion\" isn't supported on the NFS PVC , the backend \"RWO\" PVC does. Example use of accessModes : ReadWriteOnce apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-rwo-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi storageClassName: hpe-nfs ReadWriteMany apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-rwx-pvc spec: accessModes: - ReadWriteMany resources: requests: storage: 32Gi storageClassName: hpe-nfs ReadOnlyMany apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-rox-pvc spec: accessModes: - ReadOnlyMany resources: requests: storage: 32Gi storageClassName: hpe-nfs In the case of declaring a \"ROX\" PVC , the requesting Pod specification needs to request the PVC as read-only. Example: apiVersion: v1 kind: Pod metadata: name: pod-rox spec: containers: - image: busybox name: busybox command: - \"sleep\" - \"300\" volumeMounts: - mountPath: /data name: my-vol readOnly: true volumes: - name: my-vol persistentVolumeClaim: claimName: my-rox-pvc readOnly: true Requesting an empty read-only volume might not seem practical. The primary use case is to source existing datasets into immutable applications, using either a backend CSP cloning capability or CSI data management feature such as snapshots or existing PVCs . Using a Foreign StorageClass \u00b6 Since HPE CSI Driver for Kubernetes version 2.4.1 it's possible to provision NFS servers on top of non-HPE CSI Driver StorageClasses . The most prominent use case for this functionality is to coexist with the vSphere CSI Driver (VMware vSphere Container Storage Plug-in) in FC environments and provide \"RWX\" PVCs . Example StorageClass using a foreign StorageClass \u00b6 The HPE CSI Driver only manages the NFS server Deployment , Service and PVC . There must be an existing StorageClass capable of provisioning \"RWO\" filesystem PVCs . apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-nfs-servers provisioner: csi.hpe.com parameters: nfsResources: \"true\" nfsForeignStorageClass: \"my-foreign-storageclass-name\" reclaimPolicy: Delete allowVolumeExpansion: false Next, provision \"RWO\" or \"RWX\" claims from the \"hpe-nfs-servers\" StorageClass . An NFS server will be provisioned on a \"RWO\" PVC from the StorageClass \"my-foreign-storageclass-name\". Note Only StorageClasses that uses HPE storage proxied by partner CSI drivers are supported by HPE. 
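As a minimal sketch, a ReadWriteMany claim against the hpe-nfs-servers StorageClass above could look like the following (the claim name and size are illustrative): apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-foreign-rwx-pvc spec: accessModes: - ReadWriteMany resources: requests: storage: 32Gi storageClassName: hpe-nfs-servers The HPE CSI Driver then deploys an NFS server backed by a ReadWriteOnce claim from the foreign StorageClass and the requesting workloads mount the export through the NFS server Service .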
Limitations and Considerations for the NFS Server Provisioner \u00b6 These are some common issues and gotchas that are useful to know about when planning to use the NFS Server Provisioner. The current tested and supported limit for the NFS Server Provisioner is 32 NFS servers per Kubernetes worker node. The two StorageClass parameters \"nfsResourceLimitsCpuM\" and \"nfsResourceLimitsMemoryMi\" control how much CPU and memory it may consume. Tests show that the NFS server consumes about 150MiB at instantiation and 2GiB is the recommended minimum for most workloads. The NFS server Pod is by default limited to 2GiB of memory and 1000 milli CPU. The NFS PVC can NOT be expanded. If more capacity is needed, expand the \"ReadWriteOnce\" PVC backing the NFS Server Provisioner. This will result in inaccurate space reporting. Because the NFS Server Provisioner deploys a number of different resources on the hosting cluster per PVC , provisioning times may differ greatly between clusters. On an idle cluster with the NFS Server Provisioning image cached, less than 30 seconds is the most common sighting, but it may exceed 30 seconds, which may trigger warnings on the requesting PVC . This is normal behavior. The HPE CSI Driver includes a Pod Monitor to delete Pods that have become unavailable due to the Pod status changing to NodeLost or the node the Pod runs on becoming unreachable. By default the Pod Monitor only watches the NFS Server Provisioner Deployments . It may be used for any Deployment . See Pod Monitor for how to use it, especially the limitations . Certain CNIs may have issues gracefully restoring access from the NFS clients to the NFS export. Flannel has exhibited this problem and the most consistent performance has been observed with Calico. The Volume Mutation feature does not work on the NFS PVC . If changes are needed, perform the change on the backing \"ReadWriteOnce\" PVC . As outlined in Using the NFS Server Provisioner , CSI snapshots and cloning of NFS PVCs require the CSI snapshot and NFS server to reside in the same Namespace . This also applies when using third-party backup software such as Kasten K10. Use the \"nfsNamespace\" StorageClass parameter to control where to provision resources. VolumeGroups and SnapshotGroups are only supported on the backing \"ReadWriteOnce\" PVC . The \"volume-group\" annotation may be set at the initial creation of the NFS PVC but will have an adverse effect on logging as the Volume Group Provisioner tries to add the NFS PVC to the backend consistency group indefinitely. The NFS servers deployed by the HPE CSI Driver are not managed during CSI driver upgrades. Manual upgrade is required . Using the same network interface for NFS and block IO has shown suboptimal performance. Use FC for the block storage for the best performance. A single NFS server instance is capable of 100GigE wirespeed with large sequential workloads and up to 200,000 IOPS with small IO using bare-metal nodes and multiple clients. Using ext4 as the backing filesystem has shown better performance with simultaneous writers to the same file. Additional configuration and considerations may be required when using the NFS Server Provisioner with Red Hat OpenShift. See NFS Server Provisioner Considerations for OpenShift. XFS has proven troublesome to use as a backend \"RWO\" volume filesystem, leaving stale NFS handles for clients. Use ext4 as the \"csi.storage.k8s.io/fstype\" StorageClass parameter for best results. The NFS servers provide a \"ClusterIP\" Service . 
It is possible to expose the NFS servers outside the cluster for external NFS clients. Understand the scope and limitations in Auxiliary Operations . See diagnosing NFS Server Provisioner issues for further details. Using Volume Encryption \u00b6 From version 2.0.0 onwards, the CSI driver supports host-based volume encryption for any of the CSPs supported by the CSI driver. Host-based volume encryption is controlled by StorageClass parameters configured by the Kubernetes administrator and may be configured to be overridden by Kubernetes users. In the below example, a single Secret is used to encrypt and decrypt all volumes provisioned by the StorageClass . First, create a Secret ; in this example we'll use the \"hpe-storage\" Namespace . apiVersion: v1 kind: Secret metadata: name: my-passphrase namespace: hpe-storage stringData: hostEncryptionPassphrase: \"HPE CSI Driver for Kubernetes 2.0.0 Rocks!\" Tip The \"hostEncryptionPassphrase\" can be up to 512 characters. Next, incorporate the Secret into a StorageClass . apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-encrypted provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage description: \"Volume provisioned by the HPE CSI Driver\" hostEncryption: \"true\" hostEncryptionSecretName: my-passphrase hostEncryptionSecretNamespace: hpe-storage reclaimPolicy: Delete allowVolumeExpansion: true Next, create a PersistentVolumeClaim that uses the \"hpe-encrypted\" StorageClass : apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-encrypted-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: hpe-encrypted Attach a basic Pod to verify functionality. kind: Pod apiVersion: v1 metadata: name: my-pod spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data - name: pod-datelog-2 image: debian command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data volumes: - name: export1 persistentVolumeClaim: claimName: my-encrypted-pvc Once the Pod comes up, verify that the volume is encrypted. $ kubectl exec -it my-pod -c pod-datelog-1 -- df -h /data Filesystem Size Used Avail Use% Mounted on /dev/mapper/enc-mpatha 100G 33M 100G 1% /data Host-based volume encryption is in effect if the \"enc\" prefix is seen on the multipath device name. Seealso For an in-depth tutorial and more advanced use cases for host-based volume encryption, check out this blog post on HPE Developer: Host-based Volume Encryption with HPE CSI Driver for Kubernetes Topology and volumeBindingMode \u00b6 With CSI driver v2.5.0 and newer, basic CSI topology information can be associated with a single backend from a StorageClass . For backwards compatibility, only volumeBindingMode: WaitForFirstConsumer requires topology labels assigned to compute nodes. 
Using the default volumeBindingMode of Immediate will preserve the behavior prior to v2.5.0. Tip The \"csi-provisioner\" is deployed with --feature-gates Topology=true and --immediate-topology=false . It's impact on volume provisioning and accessibility can be found here . Assume a simple use case where only a handful of nodes in a Kubernetes cluster have Fibre Channel adapters installed. Workloads with persistent storage requirements from a particular StorageClass should be deployed onto those nodes only. Label Compute Nodes \u00b6 Nodes with the label csi.hpe.com/zone are considered during topology accessibility assessments. Assume three nodes in the cluster have FC adapters. kubectl label node/my-node{1..3} csi.hpe.com/zone=fc --overwrite If the CSI driver is already installed on the cluster, the CSI node driver needs to be restarted for the node labels to propagate. kubectl rollout restart -n hpe-storage ds/hpe-csi-node Create StorageClass with Topology Information \u00b6 apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: hpe-standard-fc provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" accessProtocol: fc reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer allowedTopologies: - matchLabelExpressions: - key: csi.hpe.com/zone values: - fc Any workload provisioning PVCs from the above StorageClass will now be scheduled on nodes labeled csi.hpe.com/zone=fc . Note The allowedTopologies key may be omitted if there's only a single topology applied to a subset of nodes. The nodes always need to be labeled when using volumeBindingMode: WaitForFirstConsumer . If all nodes have access to a backend, set volumeBindingMode: Immediate and omit allowedTopologies . Static Provisioning \u00b6 How to map an existing backend volume to a PersistentVolume differs between the CSP implementations. HPE Alletra 5000/6000 and Nimble Storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Further Reading \u00b6 The official Kubernetes documentation contains comprehensive documentation on how to markup PersistentVolumeClaim and StorageClass API objects to tweak certain behaviors. Each CSP has a set of unique StorageClass parameters that may be tweaked to accommodate a wide variety of use cases. Please see the documentation of the respective CSP for more details .","title":"Using"},{"location":"csi_driver/using.html#overview","text":"At this point the CSI driver and CSP should be installed and configured. Important Most examples below assumes there's a Secret named \"hpe-backend\" in the \"hpe-storage\" Namespace . Learn how to add Secrets in the Deployment section . 
Overview PVC Access Modes Enabling CSI Snapshots Base StorageClass Parameters Enabling iSCSI CHAP Cluster-wide iSCSI CHAP Credentials Per StorageClass iSCSI CHAP Credentials Provisioning Concepts Create a PersistentVolumeClaim from a StorageClass Ephemeral Inline Volumes Raw Block Volumes Using CSI Snapshots Volume Groups Snapshot Groups Expanding PVCs Using PVC Overrides Using Volume Mutations Using the NFS Server Provisioner Using a Foreign StorageClass Example StorageClass using a foreign StorageClass Limitations and Considerations for the NFS Server Provisioner Using Volume Encryption Topology and volumeBindingMode Label Compute Nodes Create StorageClass with Topology Information Static Provisioning Further Reading Tip If you're familiar with the basic concepts of persistent storage on Kubernetes and are looking for an overview of example YAML declarations for different object types supported by the HPE CSI driver, visit the source code repo on GitHub.","title":"Overview"},{"location":"csi_driver/using.html#pvc_access_modes","text":"The HPE CSI Driver for Kubernetes is primarily a ReadWriteOnce (RWO) CSI implementation for block based storage. The CSI driver also supports ReadWriteMany (RWX) and ReadOnlyMany (ROX) using a NFS Server Provisioner. It's enabled by transparently deploying a NFS server for each Persistent Volume Claim (PVC) against a StorageClass where it's enabled, that in turn is backed by a traditional RWO claim. Most of the examples featured on SCOD are illustrated as RWO using block based storage, but many of the examples apply in the majority of use cases. Access Mode Abbreviation Use Case ReadWriteOnce RWO For high performance Pods where access to the PVC is exclusive to one host at a time. May use either block based storage or the NFS Server Provisioner where connectivity to the data fabric is limited to a few worker nodes in the Kubernetes cluster. ReadWriteOncePod RWOP Exclusive access by a single Pod . Not currently supported by the HPE CSI Driver. ReadWriteMany RWX For shared filesystems where multiple Pods in the same Namespace need simultaneous access to a PVC across multiple nodes. ReadOnlyMany ROX Read-only representation of RWX. ReadWriteOnce and access by multiple Pods Pods that require access to the same \"ReadWriteOnce\" (RWO) PVC needs to reside on the same node and Namespace by using selectors or affinity scheduling rules applied when deployed. If not configured correctly, the Pod will fail to start and will throw a \"Multi-Attach\" error in the event log if the PVC is already attached to a Pod that has been scheduled on a different node within the cluster. The NFS Server Provisioner is not enabled by the default StorageClass and needs a custom StorageClass . The following sections are tailored to help understand the NFS Server Provisioner capabilities. Using the NFS Server Provisioner NFS Server Provisioner StorageClass parameters Diagnosing the NFS Server Provisioner issues Limitations and Considerations for the NFS Server Provisioner","title":"PVC Access Modes"},{"location":"csi_driver/using.html#enabling_csi_snapshots","text":"Support for VolumeSnapshotClasses and VolumeSnapshots is available from Kubernetes 1.17+. The snapshot CRDs and the common snapshot controller needs to be installed manually. As per Kubernetes TAG Storage, these should not be installed as part of a CSI driver and should be deployed by the Kubernetes cluster vendor or user. Ensure the snapshot CRDs and common snapshot controller hasn't been installed already. 
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \\ volumesnapshotcontents.snapshot.storage.k8s.io \\ volumesnapshotclasses.snapshot.storage.k8s.io Vendors may package, name and deploy the common snapshot controller using their own naming conventions. Run the command below and look for workload names that contain \"snapshot\". kubectl get sts,deploy -A If no prior CRDs or controllers exist, install the snapshot CRDs and common snapshot controller (once per Kubernetes cluster, independent of any CSI drivers). HPE CSI Driver v2.5.0 # Kubernetes 1.27-1.30 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v8.0.1 -b hpe-csi-driver-v2.5.0 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.4.2 # Kubernetes 1.26-1.29 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.2 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.4.1 # Kubernetes 1.26-1.29 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v6.3.3 -b hpe-csi-driver-v2.4.1 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.4.0 # Kubernetes 1.25-1.28 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v6.2.2 -b hpe-csi-driver-v2.4.0 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- v2.3.0 # Kubernetes 1.23-1.26 git clone https://github.com/kubernetes-csi/external-snapshotter cd external-snapshotter git checkout tags/v5.0.1 -b hpe-csi-driver-v2.3.0 kubectl kustomize client/config/crd | kubectl create -f- kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f- Tip The provisioning section contains examples on how to create VolumeSnapshotClass and VolumeSnapshot objects.","title":"Enabling CSI Snapshots"},{"location":"csi_driver/using.html#base_storageclass_parameters","text":"Each CSP has its own set of unique parameters to control the provisioning behavior. These examples serve as a base StorageClass example for each version of Kubernetes. See the respective CSP for more elaborate examples. 
apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: hpe-standard provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" reclaimPolicy: Delete allowVolumeExpansion: true Important Replace \"hpe-backend\" with a Secret relevant to the backend being referenced. Common HPE CSI Driver StorageClass parameters across CSPs. Parameter String Description accessProtocol Text The access protocol to use when accessing the persistent volume (\"fc\" or \"iscsi\"). Default: \"iscsi\" chapSecretName Text Name of Secret to use for iSCSI CHAP. chapSecretNamespace Text Namespace of Secret to use for iSCSI CHAP. description 1 Text Text to be added to the volume PV metadata on the backend CSP. Default: \"\" csi.storage.k8s.io/fstype Text Filesystem to format new volumes with. XFS is preferred, ext3, ext4 and btrfs is supported. Defaults to \"ext4\" if omitted. fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. fsCreateOptions Text A string to be passed to the mkfs command. These flags are opaque to CSI and are therefore not validated. To protect the node, only the following characters are allowed: [a-zA-Z0-9=, \\-] . fsRepair Boolean When set to \"true\", if a mount fails and filesystem corruption is detected, this parameter will control if an actual repair will be attempted. Default: \"false\". Note: fsRepair is unable to detect or remedy corrupted filesystems that are already mounted. Data loss may occur during the attempt to repair the filesystem. nfsResources Boolean When set to \"true\", requests against the StorageClass will create resources for the NFS Server Provisioner ( Deployment , RWO PVC and Service ). Required parameter for ReadWriteMany and ReadOnlyMany accessModes. Default: \"false\" nfsForeignStorageClass Text Provision NFS servers on PVCs from a different StorageClass . See Using a Foreign StorageClass nfsNamespace Text Resources are by default created in the \"hpe-nfs\" Namespace . If CSI VolumeSnapshotClass and dataSource functionality is required on the requesting claim, requesting and backing PVC need to exist in the requesting Namespace . A value of \"csi.storage.k8s.io/pvc/namespace\" will provision resources in the requesting PVC Namespace . nfsNodeSelector Text Customize the nodeSelector label value for the NFS Pod . The default behavior is to omit the nodeSelector . nfsMountOptions Text Customize NFS mount options for the Pods to the server Deployment . Uses mount command defaults from the node. nfsProvisionerImage Text Customize provisioner image for the server Deployment . 
Default: Official build from \"hpestorage/nfs-provisioner\" repo nfsResourceRequestsCpuM Text Specify CPU requests for the server Deployment in milli CPU. Default: \"500m\". Example: \"4000m\" nfsResourceRequestsMemoryMi Text Specify memory requests (in megabytes) for the server Deployment . Default: \"512Mi\". Example: \"4096Mi\". nfsResourceLimitsCpuM Text Specify CPU limits for the server Deployment in milli CPU. Default: \"1000m\". Example: \"4000m\" nfsResourceLimitsMemoryMi Text Specify memory limits (in megabytes) for the server Deployment . Default: \"2048Mi\". Example: \"500Mi\". Recommended minimum: \"2048Mi\". hostEncryption Boolean Direct the CSI driver to invoke Linux Unified Key Setup (LUKS) via the dm-crypt kernel module. Default: \"false\". See Volume encryption to learn more. hostEncryptionSecretName Text Name of the Secret to use for the volume encryption. Mandatory if \"hostEncryption\" is enabled. Default: \"\" hostEncryptionSecretNamespace Text Namespace where to find \"hostEncryptionSecretName\". Default: \"\" 1 = Parameter is mutable using the CSI Volume Mutator . Note All common HPE CSI Driver parameters are optional.","title":"Base StorageClass Parameters"},{"location":"csi_driver/using.html#enabling_iscsi_chap","text":"Familiarize yourself with the iSCSI CHAP Considerations before proceeding. This section describes how to enable iSCSI CHAP with HPE CSI Driver 2.5.0 and later. Create an iSCSI CHAP Secret . The referenced CHAP account does not need to exist on the storage backend, it will be created by the CSP if it doesn't exist. apiVersion: v1 kind: Secret metadata: name: my-chap-secret namespace: hpe-storage stringData: # Up to 64 characters including \\-:., must start with an alpha-numeric character. chapUser: \"my-chap-user\" # Between 12 to 16 alpha-numeric characters. chapPassword: \"my-chap-password\" Once the Secret has been created, there are two methods available to use it depending on the situation, cluster-wide or per StorageClass .","title":"Enabling iSCSI CHAP"},{"location":"csi_driver/using.html#cluster-wide_iscsi_chap_credentials","text":"The cluster-wide iSCSI CHAP credentials will be used by all iSCSI-based PersistentVolumes regardless of backend and StorageClass . The CHAP Secret is simply referenced during install of the HPE CSI Driver for Kubernetes Helm Chart. The Secret and Namespace needs to exist prior to install. Example: helm install my-hpe-csi-driver -n hpe-storage \\ hpe-storage/hpe-csi-driver \\ --set iscsi.chapSecretName=my-chap-secret Important Once a PersistentVolume has been provisioned with cluster-wide iSCSI CHAP credentials it's not possible to switch over to per StorageClass iSCSI CHAP credentials. If CSI driver 2.4.2 or earlier has been used, cluster-wide iSCSI CHAP credentials is the only way to provide the credentials for volumes provisioned with 2.4.2 or earlier.","title":"Cluster-wide iSCSI CHAP Credentials"},{"location":"csi_driver/using.html#per_storageclass_iscsi_chap_credentials","text":"The CHAP Secret may be referenced in a StorageClass . 
apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: hpe-standard provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" chapSecretNamespace: hpe-storage chapSecretName: my-chap-secret reclaimPolicy: Delete allowVolumeExpansion: true Warning The iSCSI CHAP credentials are in reality per iSCSI Target. Do NOT create multiple StorageClasses referencing different CHAP Secrets with different credentials for the same backend. It will result in a data outage with conflicting sessions. Ensure the same Secret is referenced in all StorageClasses using a particular backend.","title":"Per StorageClass iSCSI CHAP Credentials"},{"location":"csi_driver/using.html#provisioning_concepts","text":"These instructions are provided as an example on how to use the HPE CSI Driver with a CSP supported by HPE. Create a PersistentVolumeClaim from a StorageClass Ephemeral inline volumes Raw Block Volumes Using CSI Snapshots Volume Groups Snapshot Groups Expanding PVCs Using PVC Overrides Using Volume Mutations Using Volume Encryption Using the NFS Server Provisioner Using volume encryption Topology and volumeBindingMode Static Provisioning New to Kubernetes? There's a basic tutorial of how dynamic provisioning of persistent storage on Kubernetes works in the Video Gallery .","title":"Provisioning Concepts"},{"location":"csi_driver/using.html#create_a_persistentvolumeclaim_from_a_storageclass","text":"The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) To get started, create a StorageClass API object referencing the CSI driver Secret relevant to the backend. These examples are for Kubernetes 1.15+ apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-scod provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" accessProtocol: iscsi reclaimPolicy: Delete allowVolumeExpansion: true Create a PersistentVolumeClaim . 
This object declaration ensures a PersistentVolume is created and provisioned on your behalf, make sure to reference the correct .spec.storageClassName : apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-first-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi storageClassName: hpe-scod Note In most environments, there is a default StorageClass declared on the cluster. In such a scenario, the .spec.storageClassName can be omitted. The default StorageClass is controlled by an annotation: .metadata.annotations.storageclass.kubernetes.io/is-default-class set to either \"true\" or \"false\" . After the PersistentVolumeClaim has been declared, check that a new PersistentVolume is created based on your claim: kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE pvc-13336da3-7... 32Gi RWO Delete Bound default/my-first-pvc hpe-scod 3s The above output means that the HPE CSI Driver successfully provisioned a new volume. The volume is not attached to any node yet. It will only be attached to a node if a scheduled workload requests the PersistentVolumeClaim . Now, let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted according to the specification. kind: Pod apiVersion: v1 metadata: name: my-pod spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data - name: pod-datelog-2 image: debian command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data volumes: - name: export1 persistentVolumeClaim: claimName: my-first-pvc Check if the Pod is running successfully. kubectl get pod my-pod NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 2m29s Tip A simple Pod does not provide any automatic recovery if the node the Pod is scheduled on crashes or become unresponsive. Please see the official Kubernetes documentation for different workload types that provide automatic recovery. A shortlist of recommended workload types that are suitable for persistent storage is available in this blog post and best practices are outlined in this blog post .","title":"Create a PersistentVolumeClaim from a StorageClass"},{"location":"csi_driver/using.html#ephemeral_inline_volumes","text":"It's possible to declare a volume \"inline\" a Pod specification. The volume is ephemeral and only persists as long as the Pod is running. If the Pod gets rescheduled, deleted or upgraded, the volume is deleted and a new volume gets provisioned if it gets restarted. Ephemeral inline volumes are not associated with a StorageClass , hence a Secret needs to be provided inline with the volume. Warning Allowing user Pods to access the CSP Secret gives them the same privileges on the backend system as the HPE CSI Driver. There are two ways to declare the Secret with ephemeral inline volumes, either the Secret is in the same Namespace as the workload being declared or it resides in a foreign Namespace . 
Local Secret : apiVersion: v1 kind: Pod metadata: name: my-pod-inline-mount-1 spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: my-volume-1 mountPath: /data volumes: - name: my-volume-1 csi: driver: csi.hpe.com nodePublishSecretRef: name: hpe-backend fsType: ext3 volumeAttributes: csi.storage.k8s.io/ephemeral: \"true\" accessProtocol: \"iscsi\" size: \"5Gi\" Foreign Secret : apiVersion: v1 kind: Pod metadata: name: my-pod-inline-mount-2 spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: my-volume-1 mountPath: /data volumes: - name: my-volume-1 csi: driver: csi.hpe.com fsType: ext3 volumeAttributes: csi.storage.k8s.io/ephemeral: \"true\" inline-volume-secret-name: hpe-backend inline-volume-secret-namespace: hpe-storage accessProtocol: \"iscsi\" size: \"7Gi\" The parameters used in the examples are the bare minimum required parameters. Any parameters supported by the HPE CSI Driver and backend CSP may be used for ephemeral inline volumes. See the base StorageClass parameters or the respective CSP being used. Seealso For more elaborate use cases around ephemeral inline volumes, check out the tutorial on HPE Developer: Using Ephemeral Inline Volumes on Kubernetes","title":"Ephemeral Inline Volumes"},{"location":"csi_driver/using.html#raw_block_volumes","text":"The default volumeMode for a PersistentVolumeClaim is Filesystem . If a raw block volume is desired, volumeMode needs to be set to Block . No filesystem will be created. Example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-block spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi storageClassName: hpe-scod volumeMode: Block Note The accessModes may be set to ReadWriteOnce , ReadWriteMany or ReadOnlyMany . It's expected that the application handles read/write IO, volume locking and access in the event of concurrent block access from multiple nodes. Consult the Alletra 6000 CSP documentation if using ReadWriteMany raw block volumes with FC on Nimble, Alletra 5000 or 6000. Mapping the device in a Pod specification is slightly different than using regular filesystems as a volumeDevices section is added instead of a volumeMounts stanza: apiVersion: v1 kind: Pod metadata: name: my-pod-block spec: containers: - name: my-null-pod image: fedora:31 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: - name: data devicePath: /dev/xvda volumes: - name: data persistentVolumeClaim: claimName: my-pvc-block Seealso There's an in-depth tutorial available on HPE Developer that covers raw block volumes: Using Raw Block Volumes on Kubernetes","title":"Raw Block Volumes"},{"location":"csi_driver/using.html#using_csi_snapshots","text":"CSI introduces snapshots as native objects in Kubernetes that allows end-users to provision VolumeSnapshot objects from an existing PersistentVolumeClaim . New PVCs may then be created using the snapshot as a source. Tip Ensure CSI snapshots are enabled . There's a tutorial in the Video Gallery on how to use CSI snapshots and clones. Start by creating a VolumeSnapshotClass referencing the Secret and defining additional snapshot parameters. 
apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: hpe-snapshot annotations: snapshot.storage.kubernetes.io/is-default-class: \"true\" driver: csi.hpe.com deletionPolicy: Delete parameters: description: \"Snapshot created by the HPE CSI Driver\" csi.storage.k8s.io/snapshotter-secret-name: hpe-backend csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage csi.storage.k8s.io/snapshotter-list-secret-name: hpe-backend csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage Note Container Storage Providers may have optional parameters to the VolumeSnapshotClass . Create a VolumeSnapshot . This will create a new snapshot of the volume. apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: my-snapshot spec: source: persistentVolumeClaimName: my-pvc Tip If a specific VolumeSnapshotClass is desired, use .spec.volumeSnapshotClassName to call it out. Check that a new VolumeSnapshot is created based on your claim: kubectl describe volumesnapshot my-snapshot Name: my-snapshot Namespace: default ... Status: Creation Time: 2019-05-22T15:51:28Z Ready: true Restore Size: 32Gi It's now possible to create a new PersistentVolumeClaim from the VolumeSnapshot . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-from-snapshot spec: dataSource: name: my-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteOnce resources: requests: storage: 32Gi Important The size in .spec.resources.requests.storage must match the .spec.dataSource size. The .spec.dataSource attribute may also clone a PersistentVolumeClaim directly, without creating a VolumeSnapshot . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-from-pvc spec: dataSource: name: my-pvc kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 32Gi Again, the size in .spec.resources.requests.storage must match the source PersistentVolumeClaim . This can get tricky from an automation perspective if volume expansion is being used on the source volume. It's recommended to inspect the source PersistentVolumeClaim or VolumeSnapshot size prior to creating a clone. Learn more For a more comprehensive tutorial on how to use snapshots and clones with CSI on Kubernetes 1.17, see HPE CSI Driver for Kubernetes: Snapshots, Clones and Volume Expansion on HPE Developer.","title":"Using CSI Snapshots"},{"location":"csi_driver/using.html#volume_groups","text":"PersistentVolumeClaims created in a particular Namespace from the same storage backend may be grouped together in a VolumeGroup . A VolumeGroup is what may be known as a \"consistency group\" in other storage infrastructure systems. This allows certain attributes to be managed on an abstract group, and the attributes then apply to all member volumes in the group instead of being managed on each volume individually. One such aspect is creating snapshots with referential integrity between volumes, or setting a performance attribute that is accounted for on the logical group rather than on the individual volume. Tip A tutorial on how to use VolumeGroups and SnapshotGroups is available in the Video Gallery . Before grouping PersistentVolumeClaims, a VolumeGroupClass needs to be created. It needs to reference a Secret that corresponds to the same backend the PersistentVolumeClaims were created on. A VolumeGroupClass is a cluster resource that needs administrative privileges to create. 
apiVersion: storage.hpe.com/v1 kind: VolumeGroupClass metadata: name: my-volume-group-class provisioner: csi.hpe.com deletionPolicy: Delete parameters: description: \"HPE CSI Driver for Kubernetes Volume Group\" csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage Note The VolumeGroupClass .parameters may contain CSP-specific parameters. Check the documentation of the Container Storage Provider being used. Once the VolumeGroupClass is in place, users may create VolumeGroups . VolumeGroups are, just like PersistentVolumeClaims , part of a Namespace , and both resources need to be in the same Namespace for the grouping to be successful. apiVersion: storage.hpe.com/v1 kind: VolumeGroup metadata: name: my-volume-group spec: volumeGroupClassName: my-volume-group-class Depending on the CSP being used, the VolumeGroup may reference an object that corresponds to the Kubernetes API object. It's not until users annotate their PersistentVolumeClaims that the VolumeGroup gets populated. Adding a PersistentVolumeClaim to a VolumeGroup : kubectl annotate pvc/my-pvc csi.hpe.com/volume-group=my-volume-group Removing a PersistentVolumeClaim from a VolumeGroup : kubectl annotate pvc/my-pvc csi.hpe.com/volume-group- Tip While adding the PersistentVolumeClaim to the VolumeGroup is instant, removal requires one reconciliation loop and might not immediately be reflected on the VolumeGroup object.","title":"Volume Groups"},{"location":"csi_driver/using.html#snapshot_groups","text":"Being able to create snapshots of the VolumeGroup requires the CSI external-snapshotter to be installed and also requires a VolumeSnapshotClass configured using the same storage backend as the VolumeGroup . Once those pieces are in place, a SnapshotGroupClass needs to be created. SnapshotGroupClasses are cluster objects created by an administrator. apiVersion: storage.hpe.com/v1 kind: SnapshotGroupClass metadata: name: my-snapshot-group-class snapshotter: csi.hpe.com deletionPolicy: Delete parameters: csi.hpe.com/snapshot-group-snapshotter-secret-name: hpe-backend csi.hpe.com/snapshot-group-snapshotter-secret-namespace: hpe-storage Creating a SnapshotGroup is later performed using the VolumeGroup as a source while referencing a SnapshotGroupClass and a VolumeSnapshotClass . apiVersion: storage.hpe.com/v1 kind: SnapshotGroup metadata: name: my-snapshot-group-1 spec: source: kind: VolumeGroup apiGroup: storage.hpe.com name: my-volume-group snapshotGroupClassName: my-snapshot-group-class volumeSnapshotClassName: hpe-snapshot Once the SnapshotGroup has been successfully created, the individual VolumeSnapshots are now available in the Namespace . List VolumeSnapshots : kubectl get volumesnapshots If no VolumeSnapshots are being enumerated, check the diagnostics on how to inspect the component logs. New feature! Volume Groups and Snapshot Groups were introduced in HPE CSI Driver for Kubernetes 1.4.0.","title":"Snapshot Groups"},{"location":"csi_driver/using.html#expanding_pvcs","text":"To perform expansion operations on Kubernetes 1.14+, you must enhance your StorageClass with the .allowVolumeExpansion: true key. Please see base StorageClass parameters for additional information. Then, a volume provisioned by a StorageClass with expansion attributes may have its PersistentVolumeClaim expanded by altering the .spec.resources.requests.storage key of the PersistentVolumeClaim . This may be done by the kubectl patch command. 
kubectl patch pvc/my-pvc --patch '{\"spec\": {\"resources\": {\"requests\": {\"storage\": \"64Gi\"}}}}' persistentvolumeclaim/my-pvc patched The new PersistentVolumeClaim size may be observed with kubectl get pvc/my-pvc after a few moments.","title":"Expanding PVCs"},{"location":"csi_driver/using.html#using_pvc_overrides","text":"The HPE CSI Driver allows the PersistentVolumeClaim to override the StorageClass parameters by annotating the PersistentVolumeClaim . Define the parameters allowed to be overridden in the StorageClass by setting the allowOverrides parameter: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-scod-override provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage description: \"Volume provisioned by the HPE CSI Driver\" accessProtocol: iscsi allowOverrides: description,accessProtocol The end-user may now control those parameters (the StorageClass provides the default values). apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-override annotations: csi.hpe.com/description: \"This is my custom description\" csi.hpe.com/accessProtocol: fc spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: hpe-scod-override","title":"Using PVC Overrides"},{"location":"csi_driver/using.html#using_volume_mutations","text":"The HPE CSI Driver (version 1.3.0 and later) allows the CSP backend volume to be mutated by annotating the PersistentVolumeClaim . Define the parameters allowed to be mutated in the StorageClass by setting the allowMutations parameter. Tip There's a tutorial available on YouTube accessible through the Video Gallery on how to use volume mutations to adapt stateful workloads with the HPE CSI Driver. Important In order to mutate a StorageClass parameter it needs to have a default value set in the StorageClass . In the example below we'll allow mutatating \"description\". If the parameter \"description\" wasn't set when the PersistentVolume was provisioned, no subsequent mutations are allowed. The CSP may set defaults for certain parameters during provisioning, if those are mutable, the mutation will be performed. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-scod-mutation provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage description: \"Volume provisioned by the HPE CSI Driver\" allowMutations: description Note The allowMutations parameter is a comma separated list of values defined by each of the CSPs parameters, except the description parameter, which is common across all CSPs. 
See the documentation for each CSP on what parameters are mutable. The end-user may now control those parameters by editing or patching the PersistentVolumeClaim . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-mutation annotations: csi.hpe.com/description: \"My description needs to change\" spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: hpe-scod-mutation Good to know As the .spec.csi.volumeAttributes on the PersistentVolume are immutable, the mutations performed on the backend volume are also annotated on the PersistentVolume object.","title":"Using Volume Mutations"},{"location":"csi_driver/using.html#using_the_nfs_server_provisioner","text":"Enabling the NFS Server Provisioner to allow \"ReadWriteMany\" and \"ReadOnlyMany\" access mode for a PVC is straightforward. Create a new StorageClass and set .parameters.nfsResources to \"true\" . Any subsequent claim to the StorageClass will create a NFS server Deployment on the cluster with the associated objects running on top of a \"ReadWriteOnce\" PVC . Any \"RWO\" claim made against the StorageClass will also create a NFS server Deployment . This allows diverse connectivity options among the Kubernetes worker nodes as the HPE CSI Driver will look for nodes labelled csi.hpe.com/hpe-nfs=true (or using a custom value specified in .parameters.nfsNodeSelector ) before submitting the workload for scheduling. This allows dedicated NFS worker nodes without user workloads using taints and tolerations. The NFS server Pod is armed with a csi.hpe.com/hpe-nfs toleration. It's required to taint dedicated NFS worker nodes if they truly need to be dedicated. By default, the NFS Server Provisioner deploy resources in the \"hpe-nfs\" Namespace . This makes it easy to manage and diagnose. However, to use CSI data management capabilities ( VolumeSnapshots and .spec.dataSource ) on the PVCs, the NFS resources need to be deployed in the same Namespace as the \"RWX\"/\"ROX\" requesting PVC . This is controlled by the nfsNamespace StorageClass parameter. See base StorageClass parameters for more information. Tip A comprehensive tutorial is available on HPE Developer on how to get started with the NFS Server Provisioner and the HPE CSI Driver for Kubernetes. There's also a brief tutorial available in the Video Gallery . Example StorageClass with \"nfsResources\" enabled. No CSP specific parameters for clarity. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-standard-file provisioner: csi.hpe.com parameters: csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"NFS backend volume created by the HPE CSI Driver for Kubernetes\" csi.storage.k8s.io/fstype: ext4 nfsResources: \"true\" reclaimPolicy: Delete allowVolumeExpansion: true Note Using XFS may result in stale NFS handles during node failures and outages. Always use ext4 for NFS PVCs . 
While \"allowVolumeExpansion\" isn't supported on the NFS PVC , the backend \"RWO\" PVC does. Example use of accessModes : ReadWriteOnce apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-rwo-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 32Gi storageClassName: hpe-nfs ReadWriteMany apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-rwx-pvc spec: accessModes: - ReadWriteMany resources: requests: storage: 32Gi storageClassName: hpe-nfs ReadOnlyMany apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-rox-pvc spec: accessModes: - ReadOnlyMany resources: requests: storage: 32Gi storageClassName: hpe-nfs In the case of declaring a \"ROX\" PVC , the requesting Pod specification needs to request the PVC as read-only. Example: apiVersion: v1 kind: Pod metadata: name: pod-rox spec: containers: - image: busybox name: busybox command: - \"sleep\" - \"300\" volumeMounts: - mountPath: /data name: my-vol readOnly: true volumes: - name: my-vol persistentVolumeClaim: claimName: my-rox-pvc readOnly: true Requesting an empty read-only volume might not seem practical. The primary use case is to source existing datasets into immutable applications, using either a backend CSP cloning capability or CSI data management feature such as snapshots or existing PVCs .","title":"Using the NFS Server Provisioner"},{"location":"csi_driver/using.html#using_a_foreign_storageclass","text":"Since HPE CSI Driver for Kubernetes version 2.4.1 it's possible to provision NFS servers on top of non-HPE CSI Driver StorageClasses . The most prominent use case for this functionality is to coexist with the vSphere CSI Driver (VMware vSphere Container Storage Plug-in) in FC environments and provide \"RWX\" PVCs .","title":"Using a Foreign StorageClass"},{"location":"csi_driver/using.html#example_storageclass_using_a_foreign_storageclass","text":"The HPE CSI Driver only manages the NFS server Deployment , Service and PVC . There must be an existing StorageClass capable of provisioning \"RWO\" filesystem PVCs . apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-nfs-servers provisioner: csi.hpe.com parameters: nfsResources: \"true\" nfsForeignStorageClass: \"my-foreign-storageclass-name\" reclaimPolicy: Delete allowVolumeExpansion: false Next, provision \"RWO\" or \"RWX\" claims from the \"hpe-nfs-servers\" StorageClass . An NFS server will be provisioned on a \"RWO\" PVC from the StorageClass \"my-foreign-storageclass-name\". Note Only StorageClasses that uses HPE storage proxied by partner CSI drivers are supported by HPE.","title":"Example StorageClass using a foreign StorageClass"},{"location":"csi_driver/using.html#limitations_and_considerations_for_the_nfs_server_provisioner","text":"These are some common issues and gotchas that are useful to know about when planning to use the NFS Server Provisioner. The current tested and supported limit for the NFS Server Provisioner is 32 NFS servers per Kubernetes worker node. The two StorageClass parameters \"nfsResourceLimitsCpuM\" and \"nfsResourceLimitsMemoryMi\" control how much CPU and memory it may consume. Tests show that the NFS server consumes about 150MiB at instantiation and 2GiB is the recommended minimum for most workloads. The NFS server Pod is by default limited to 2GiB of memory and 1000 milli CPU. The NFS PVC can NOT be expanded. If more capacity is needed, expand the \"ReadWriteOnce\" PVC backing the NFS Server Provisioner. This will result in inaccurate space reporting. 
Because the NFS Server Provisioner deploys a number of different resources on the hosting cluster per PVC , provisioning times may differ greatly between clusters. On an idle cluster with the NFS Server Provisioner image cached, less than 30 seconds is the most common sighting, but it may exceed 30 seconds, which may trigger warnings on the requesting PVC . This is normal behavior. The HPE CSI Driver includes a Pod Monitor to delete Pods that have become unavailable due to the Pod status changing to NodeLost or the node the Pod runs on becoming unreachable. By default the Pod Monitor only watches the NFS Server Provisioner Deployments . It may be used for any Deployment . See Pod Monitor on how to use it, especially the limitations . Certain CNIs may have issues gracefully restoring access from the NFS clients to the NFS export. Flannel has exhibited this problem and the most consistent performance has been observed with Calico. The Volume Mutation feature does not work on the NFS PVC . If changes are needed, perform the change on the backing \"ReadWriteOnce\" PVC . As outlined in Using the NFS Server Provisioner , CSI snapshots and cloning of NFS PVCs require the CSI snapshot and NFS server to reside in the same Namespace . This also applies when using third-party backup software such as Kasten K10. Use the \"nfsNamespace\" StorageClass parameter to control where to provision resources. VolumeGroups and SnapshotGroups are only supported on the backing \"ReadWriteOnce\" PVC . The \"volume-group\" annotation may be set at the initial creation of the NFS PVC but will have an adverse effect on logging, as the Volume Group Provisioner tries to add the NFS PVC to the backend consistency group indefinitely. The NFS servers deployed by the HPE CSI Driver are not managed during CSI driver upgrades. Manual upgrade is required . Using the same network interface for NFS and block IO has shown suboptimal performance. Use FC for the block storage for the best performance. A single NFS server instance is capable of 100GigE wirespeed with large sequential workloads and up to 200,000 IOPS with small IO using bare-metal nodes and multiple clients. Using ext4 as the backing filesystem has shown better performance with simultaneous writers to the same file. Additional configuration and considerations may be required when using the NFS Server Provisioner with Red Hat OpenShift. See NFS Server Provisioner Considerations for OpenShift. XFS has proven troublesome to use as a backend \"RWO\" volume filesystem, leaving stale NFS handles for clients. Use ext4 as the \"csi.storage.k8s.io/fstype\" StorageClass parameter for best results. The NFS servers provide a \"ClusterIP\" Service . It is possible to expose the NFS servers outside the cluster for external NFS clients. Understand the scope and limitations in Auxiliary Operations . See diagnosing NFS Server Provisioner issues for further details.","title":"Limitations and Considerations for the NFS Server Provisioner"},{"location":"csi_driver/using.html#using_volume_encryption","text":"From version 2.0.0 onwards, the CSI driver supports host-based volume encryption for any of the CSPs supported by the CSI driver. Host-based volume encryption is controlled by StorageClass parameters configured by the Kubernetes administrator and may be configured to be overridden by Kubernetes users. In the below example, a single Secret is used to encrypt and decrypt all volumes provisioned by the StorageClass . 
First, create a Secret , in this example we'll use the \"hpe-storage\" Namespace . apiVersion: v1 kind: Secret metadata: name: my-passphrase namespace: hpe-storage stringData: hostEncryptionPassphrase: \"HPE CSI Driver for Kubernetes 2.0.0 Rocks!\" Tip The \"hostEncryptionPassphrase\" can be up to 512 characters. Next, incorporate the Secret into a StorageClass . apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-encrypted provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage description: \"Volume provisioned by the HPE CSI Driver\" hostEncryption: \"true\" hostEncryptionSecretName: my-passphrase hostEncryptionSecretNamespace: hpe-storage reclaimPolicy: Delete allowVolumeExpansion: true Next, create a PersistentVolumeClaim that uses the \"hpe-encrypted\" StorageClass : apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-encrypted-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: hpe-encrypted Attach a basic Pod to verify functionality. kind: Pod apiVersion: v1 metadata: name: my-pod spec: containers: - name: pod-datelog-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data - name: pod-datelog-2 image: debian command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data volumes: - name: export1 persistentVolumeClaim: claimName: my-encrypted-pvc Once the Pod comes up, verify that the volume is encrypted. $ kubectl exec -it my-pod -c pod-datelog-1 -- df -h /data Filesystem Size Used Avail Use% Mounted on /dev/mapper/enc-mpatha 100G 33M 100G 1% /data Host-based volume encryption is in effect if the \"enc\" prefix is seen on the multipath device name. Seealso For an in-depth tutorial and more advanced use cases for host-based volume encryption, check out this blog post on HPE Developer: Host-based Volume Encryption with HPE CSI Driver for Kubernetes","title":"Using Volume Encryption"},{"location":"csi_driver/using.html#topology_and_volumebindingmode","text":"With CSI driver v2.5.0 and newer, basic CSI topology information can be associated with a single backend from a StorageClass . For backwards compatibility, only volumeBindingMode: WaitForFirstConsumer require topology labels assigned to compute nodes. Using the default volumeBindingMode of Immediate will preserve the behavior prior to v2.5.0. Tip The \"csi-provisioner\" is deployed with --feature-gates Topology=true and --immediate-topology=false . It's impact on volume provisioning and accessibility can be found here . Assume a simple use case where only a handful of nodes in a Kubernetes cluster have Fibre Channel adapters installed. 
Workloads with persistent storage requirements from a particular StorageClass should be deployed onto those nodes only.","title":"Topology and volumeBindingMode"},{"location":"csi_driver/using.html#label_compute_nodes","text":"Nodes with the label csi.hpe.com/zone are considered during topology accessibility assessments. Assume three nodes in the cluster have FC adapters. kubectl label node/my-node{1..3} csi.hpe.com/zone=fc --overwrite If the CSI driver is already installed on the cluster, the CSI node driver needs to be restarted for the node labels to propagate. kubectl rollout restart -n hpe-storage ds/hpe-csi-node","title":"Label Compute Nodes"},{"location":"csi_driver/using.html#create_storageclass_with_topology_information","text":"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: hpe-standard-fc provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/controller-expand-secret-name: hpe-backend csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: hpe-backend csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: hpe-backend csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: hpe-backend csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/provisioner-secret-name: hpe-backend csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage description: \"Volume created by the HPE CSI Driver for Kubernetes\" accessProtocol: fc reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer allowedTopologies: - matchLabelExpressions: - key: csi.hpe.com/zone values: - fc Any workload provisioning PVCs from the above StorageClass will now be scheduled on nodes labeled csi.hpe.com/zone=fc . Note The allowedTopologies key may be omitted if there's only a single topology applied to a subset of nodes. The nodes always need to be labeled when using volumeBindingMode: WaitForFirstConsumer . If all nodes have access to a backend, set volumeBindingMode: Immediate and omit allowedTopologies .","title":"Create StorageClass with Topology Information"},{"location":"csi_driver/using.html#static_provisioning","text":"How to map an existing backend volume to a PersistentVolume differs between the CSP implementations. HPE Alletra 5000/6000 and Nimble Storage HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR","title":"Static Provisioning"},{"location":"csi_driver/using.html#further_reading","text":"The official Kubernetes documentation contains comprehensive documentation on how to markup PersistentVolumeClaim and StorageClass API objects to tweak certain behaviors. Each CSP has a set of unique StorageClass parameters that may be tweaked to accommodate a wide variety of use cases. Please see the documentation of the respective CSP for more details .","title":"Further Reading"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Introduction \u00b6 This is the documentation for HPE Cloud Volumes Plugin for Docker . It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes. 
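As a quick, hedged illustration (the service image and option values below are assumptions and not part of the official documentation), a Docker Compose file may request a volume from the plugin by naming the driver and passing create options under driver_opts : version: \"3.7\" services: db: image: postgres:12 volumes: - dbdata:/var/lib/postgresql/data volumes: dbdata: driver: cvblock driver_opts: sizeInGiB: \"20\" description: \"Volume provisioned through Docker Compose\" Any option accepted by docker volume create -o may be supplied this way. 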
Introduction Requirements Limitations Installation Plugin privileges Host configuration and installation Making changes Configuration files and options Node fencing Usage Create a Docker Volume Clone a Docker Volume Provisioning Docker Volumes Import a Volume to Docker Import a volume snapshot to Docker Restore an offline Docker Volume with specified snapshot List volumes Remove a Docker Volume Uninstall Troubleshooting Log file location Requirements \u00b6 Docker Engine 17.09 or greater If using Docker Enterprise Edition 2.x, the plugin is only supported in swarmmode Recent Red Hat, Debian or Ubuntu-based Linux distribution US regions only Plugin Release Notes 3.1.0 v3.1.0 Note Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation. Limitations \u00b6 HPE Cloud Volumes provides a Docker certified plugin delivered through the Docker Store. Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins. The managed plugin does NOT provide: Support for Docker's release of Kubernetes in Docker Enterprise Edition 2.x Support for Windows Containers The managed plugin does provide a simple way to manage HPE Cloud Volumes integration on your Docker instances using Docker's interface to install and manage the plugin. Installation \u00b6 Plugin privileges \u00b6 In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly. Plugin \"cvblock\" is requesting the following privileges: - network: [host] - mount: [/dev] - mount: [/run/lock] - mount: [/sys] - mount: [/etc] - mount: [/var/lib] - mount: [/var/run/docker.sock] - mount: [/sbin/iscsiadm] - mount: [/lib/modules] - mount: [/usr/lib64] - allow-all-devices: [true] - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD] Host configuration and installation \u00b6 Setting up the plugin varies between Linux distributions. These procedures requires root privileges on the cloud instance. 
Red Hat 7.5+, CentOS 7.5+: yum install -y iscsi-initiator-utils device-mapper-multipath docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0 docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME= PROVIDER_PASSWORD= docker plugin enable cvblock systemctl daemon-reload systemctl enable iscsid multipathd systemctl start iscsid multipathd Ubuntu 16.04 LTS and Ubuntu 18.04 LTS: apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0 docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME= PROVIDER_PASSWORD= glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable cvblock systemctl daemon-reload systemctl restart open-iscsi multipath-tools Debian 9.x (stable): apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0 docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME= PROVIDER_PASSWORD= iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable cvblock systemctl daemon-reload systemctl restart open-iscsi multipath-tools Making changes \u00b6 The docker plugin set command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable command. For example: docker plugin disable cvblock The following parameters may be set on the plugin. Parameter Description Default PROVIDER_IP HPE Cloud Volumes portal \"\" PROVIDER_USERNAME HPE Cloud Volumes username \"\" PROVIDER_PASSWORD HPE Cloud Volumes password \"\" PROVIDER_REMOVE Unassociate Plugin from HPE Cloud Volumes false LOG_LEVEL Log level of the plugin ( info , debug , or trace ) debug SCOPE Scope of the plugin ( global or local ) global In the event of reassociating the plugin with a different HPE Cloud Volumes portal, certain procedures need to be followed: Disable the plugin docker plugin disable cvblock Set new parameters docker plugin set cvblock PROVIDER_REMOVE=true Enable the plugin docker plugin enable cvblock Disable the plugin docker plugin disable cvblock The plugin is now ready for re-configuration docker plugin set cvblock PROVIDER_IP=< New portal address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false Note The PROVIDER_REMOVE=false parameter must be set if the plugin ever has been unassociated from an HPE Cloud Volumes portal. Configuration files and options \u00b6 The configuration directory for the plugin is /etc/hpe-storage on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json file contains three sections, global , defaults and overrides . The global options are plugin runtime parameters and don't have any end-user configurable keys at this time. The defaults map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option. The overrides map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored. Note defaults and overrides are dynamically read during runtime while global changes require a plugin restart. 
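As an illustration (a hypothetical sketch, not from the official documentation), an administrator who wants every volume to be encrypted regardless of the options a user passes could place the key in the overrides map instead of defaults : \"overrides\": { \"encryption\": true } The complete file layout is shown in the example below. 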
Example config file in /etc/hpe-storage/volume-driver.json : { \"global\": { \"snapPrefix\": \"BaseFor\", \"initiators\": [\"eth0\"], \"automatedConnection\": true, \"existingCloudSubnet\": \"10.1.0.0/24\", \"region\": \"us-east-1\", \"privateCloud\": \"vpc-data\", \"cloudComputeProvider\": \"Amazon AWS\" }, \"defaults\": { \"limitIOPS\": 1000, \"fsOwner\": \"0:0\", \"fsMode\": \"600\", \"description\": \"Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin\", \"perfPolicy\": \"Other\", \"protectionTemplate\": \"twicedaily:4\", \"encryption\": true, \"volumeType\": \"PF\", \"destroyOnRm\": true }, \"overrides\": { } } For an exhaustive list of options use the help option from the docker CLI: $ docker volume create -d cvblock -o help Node fencing \u00b6 If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported. Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node. During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume. The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts. The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O. We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely. The following kernel parameters control the system behavior when a hung task is detected: # Reset after these many seconds after a panic kernel.panic = 5 # I do consider hung tasks reason enough to panic kernel.hung_task_panic = 1 # To not panic in vain, I'll wait these many seconds before I declare a hung task kernel.hung_task_timeout_secs = 150 Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf file and reboot the system. Important Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. 
The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic. Usage \u00b6 These are some basic examples on how to use the HPE Cloud Volumes Plugin for Docker. Create a Docker Volume \u00b6 Using docker volume create . Note The plugin applies a set of default options when you create new volumes unless you override them using the volume create -o key=value option flags. Create a Docker volume with a custom description: docker volume create -d cvblock -o description=\"My volume description\" --name myvol1 (Optional) Inspect the new volume: docker volume inspect myvol1 (Optional) Attach the volume to an interactive container. docker run -it --rm -v myvol1:/data bash The volume is mounted inside the container on /data . Clone a Docker Volume \u00b6 Use the docker volume create command with the cloneOf option to clone a Docker volume to a new Docker volume. Clone the Docker volume named myvol1 to a new Docker volume named myvol1-clone . docker volume create -d cvblock -o cloneOf=myvol1 --name=myvol1-clone (Optional) Select a snapshot on which to base the clone. docker volume create -d cvblock -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone Provisioning Docker Volumes \u00b6 There are several ways to provision a Docker volume depending on what tools are used: Docker Engine (CLI) Docker Compose file with either Docker UCP or Docker Engine The Docker Volume plugin leverages the existing Docker CLI and APIs, therefor all native Docker tools may be used to provision a volume. Note The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB. Config file volume-driver.json , which is stored at /etc/hpe-storage/volume-driver.json : { \"global\": {}, \"defaults\": { \"sizeInGiB\":\"10\", \"limitIOPS\":\"-1\", \"limitMBPS\":\"-1\", \"perfPolicy\": \"DockerDefault\", }, \"overrides\":{} } Import a Volume to Docker \u00b6 Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to the HPE Cloud Volumes documentation . Use the create command with the importVol option to import an HPE Cloud Volume to Docker and name it. Import the HPE Cloud Volume named mycloudvol as a Docker volume named myvol3-imported . docker volume create \u2013d cvblock -o importVol=mycloudvol --name=myvol3-imported Import a volume snapshot to Docker \u00b6 Use the create command with the importVolAsClone option to import a HPE Cloud Volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Cloud Volume using the snapshot option. Import the HPE Cloud Volumes snapshot aSnapshot on the volume importMe as a Docker volume named importedSnap . docker volume create -d cvblock -o importVolAsClone=mycloudvol -o snapshot=mysnap1 --name=myvol4-clone Note If no snapshot is specified, the latest snapshot on the volume is imported. Restore an offline Docker Volume with specified snapshot \u00b6 It's important that the volume to be restored is in an offline state on the array. If the volume snapshot is not specified, the last volume snapshot is used. docker volume create -d cvblock -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored List volumes \u00b6 List Docker volumes. 
docker volume ls DRIVER VOLUME NAME cvblock:latest myvol1 cvblock:latest myvol1-clone Remove a Docker Volume \u00b6 When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished. Note To delete volumes from the HPE Cloud Volumes portal using the remove command, the volume should have been created with a -o destroyOnRm flag. Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin. Remove the volume named myvol1 . docker volume rm myvol1 Uninstall \u00b6 The plugin can be removed using the docker plugin rm command. This command will not remove the configuration directory ( /etc/hpe-storage/ ). docker plugin rm cvblock Troubleshooting \u00b6 The config directory is at /etc/hpe-storage/ . When a plugin is installed and enabled, the HPE Cloud Volumes certificates are created in the config directory. ls -l /etc/hpe-storage/ total 16 -r-------- 1 root root 1159 Aug 2 00:20 container_provider_host.cert -r-------- 1 root root 1671 Aug 2 00:20 container_provider_host.key -r-------- 1 root root 1521 Aug 2 00:20 container_provider_server.cert Additionally, there is a config file volume-driver.json present at the same location. This file can be edited to set default parameters for creating Docker volumes. Log file location \u00b6 The docker plugin logs are located at /var/log/hpe-docker-plugin.log","title":"Index"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#introduction","text":"This is the documentation for HPE Cloud Volumes Plugin for Docker . It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes. Introduction Requirements Limitations Installation Plugin privileges Host configuration and installation Making changes Configuration files and options Node fencing Usage Create a Docker Volume Clone a Docker Volume Provisioning Docker Volumes Import a Volume to Docker Import a volume snapshot to Docker Restore an offline Docker Volume with specified snapshot List volumes Remove a Docker Volume Uninstall Troubleshooting Log file location","title":"Introduction"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#requirements","text":"Docker Engine 17.09 or greater If using Docker Enterprise Edition 2.x, the plugin is only supported in swarm mode Recent Red Hat, Debian or Ubuntu-based Linux distribution US regions only Plugin Release Notes 3.1.0 v3.1.0 Note Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation.","title":"Requirements"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#limitations","text":"HPE Cloud Volumes provides a Docker certified plugin delivered through the Docker Store. Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins. 
The managed plugin does NOT provide: Support for Docker's release of Kubernetes in Docker Enterprise Edition 2.x Support for Windows Containers The managed plugin does provide a simple way to manage HPE Cloud Volumes integration on your Docker instances using Docker's interface to install and manage the plugin.","title":"Limitations"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#installation","text":"","title":"Installation"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#plugin_privileges","text":"In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly. Plugin \"cvblock\" is requesting the following privileges: - network: [host] - mount: [/dev] - mount: [/run/lock] - mount: [/sys] - mount: [/etc] - mount: [/var/lib] - mount: [/var/run/docker.sock] - mount: [/sbin/iscsiadm] - mount: [/lib/modules] - mount: [/usr/lib64] - allow-all-devices: [true] - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD]","title":"Plugin privileges"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#host_configuration_and_installation","text":"Setting up the plugin varies between Linux distributions. These procedures require root privileges on the cloud instance. Red Hat 7.5+, CentOS 7.5+: yum install -y iscsi-initiator-utils device-mapper-multipath docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0 docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME= PROVIDER_PASSWORD= docker plugin enable cvblock systemctl daemon-reload systemctl enable iscsid multipathd systemctl start iscsid multipathd Ubuntu 16.04 LTS and Ubuntu 18.04 LTS: apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0 docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME= PROVIDER_PASSWORD= glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable cvblock systemctl daemon-reload systemctl restart open-iscsi multipath-tools Debian 9.x (stable): apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias cvblock store/hpestorage/cvblock:3.1.0 docker plugin set cvblock PROVIDER_IP=cloudvolumes.hpe.com PROVIDER_USERNAME= PROVIDER_PASSWORD= iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable cvblock systemctl daemon-reload systemctl restart open-iscsi multipath-tools","title":"Host configuration and installation"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#making_changes","text":"The docker plugin set command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable command. 
For example: docker plugin disable cvblock List of parameters which can be set on the plugin: Parameter Description Default PROVIDER_IP HPE Cloud Volumes portal \"\" PROVIDER_USERNAME HPE Cloud Volumes username \"\" PROVIDER_PASSWORD HPE Cloud Volumes password \"\" PROVIDER_REMOVE Unassociate Plugin from HPE Cloud Volumes false LOG_LEVEL Log level of the plugin ( info , debug , or trace ) debug SCOPE Scope of the plugin ( global or local ) global In the event of reassociating the plugin with a different HPE Cloud Volumes portal, certain procedures need to be followed: Disable the plugin docker plugin disable cvblock Set new parameters docker plugin set cvblock PROVIDER_REMOVE=true Enable the plugin docker plugin enable cvblock Disable the plugin docker plugin disable cvblock The plugin is now ready for re-configuration docker plugin set cvblock PROVIDER_IP=< New portal address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false Note The PROVIDER_REMOVE=false parameter must be set if the plugin ever has been unassociated from an HPE Cloud Volumes portal.","title":"Making changes"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#configuration_files_and_options","text":"The configuration directory for the plugin is /etc/hpe-storage on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json file contains three sections, global , defaults and overrides . The global options are plugin runtime parameters and don't have any end-user configurable keys at this time. The defaults map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option. The overrides map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored. Note defaults and overrides are dynamically read during runtime while global changes require a plugin restart. Example config file in /etc/hpe-storage/volume-driver.json : { \"global\": { \"snapPrefix\": \"BaseFor\", \"initiators\": [\"eth0\"], \"automatedConnection\": true, \"existingCloudSubnet\": \"10.1.0.0/24\", \"region\": \"us-east-1\", \"privateCloud\": \"vpc-data\", \"cloudComputeProvider\": \"Amazon AWS\" }, \"defaults\": { \"limitIOPS\": 1000, \"fsOwner\": \"0:0\", \"fsMode\": \"600\", \"description\": \"Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin\", \"perfPolicy\": \"Other\", \"protectionTemplate\": \"twicedaily:4\", \"encryption\": true, \"volumeType\": \"PF\", \"destroyOnRm\": true }, \"overrides\": { } } For an exhaustive list of options use the help option from the docker CLI: $ docker volume create -d cvblock -o help","title":"Configuration files and options"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#node_fencing","text":"If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported. Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. 
When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node. During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume. The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts. The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O. We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely. The following kernel parameters control the system behavior when a hung task is detected: # Reset after these many seconds after a panic kernel.panic = 5 # I do consider hung tasks reason enough to panic kernel.hung_task_panic = 1 # To not panic in vain, I'll wait these many seconds before I declare a hung task kernel.hung_task_timeout_secs = 150 Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf file and reboot the system. Important Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic.","title":"Node fencing"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#usage","text":"These are some basic examples on how to use the HPE Cloud Volumes Plugin for Docker.","title":"Usage"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#create_a_docker_volume","text":"Using docker volume create . Note The plugin applies a set of default options when you create new volumes unless you override them using the volume create -o key=value option flags. Create a Docker volume with a custom description: docker volume create -d cvblock -o description=\"My volume description\" --name myvol1 (Optional) Inspect the new volume: docker volume inspect myvol1 (Optional) Attach the volume to an interactive container. docker run -it --rm -v myvol1:/data bash The volume is mounted inside the container on /data .","title":"Create a Docker Volume"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#clone_a_docker_volume","text":"Use the docker volume create command with the cloneOf option to clone a Docker volume to a new Docker volume. Clone the Docker volume named myvol1 to a new Docker volume named myvol1-clone . 
docker volume create -d cvblock -o cloneOf=myvol1 --name=myvol1-clone (Optional) Select a snapshot on which to base the clone. docker volume create -d cvblock -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone","title":"Clone a Docker Volume"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#provisioning_docker_volumes","text":"There are several ways to provision a Docker volume depending on what tools are used: Docker Engine (CLI) Docker Compose file with either Docker UCP or Docker Engine The Docker Volume plugin leverages the existing Docker CLI and APIs, therefore all native Docker tools may be used to provision a volume. Note The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB. Config file volume-driver.json , which is stored at /etc/hpe-storage/volume-driver.json : { \"global\": {}, \"defaults\": { \"sizeInGiB\":\"10\", \"limitIOPS\":\"-1\", \"limitMBPS\":\"-1\", \"perfPolicy\": \"DockerDefault\" }, \"overrides\":{} }","title":"Provisioning Docker Volumes"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#import_a_volume_to_docker","text":"Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to the HPE Cloud Volumes documentation . Use the create command with the importVol option to import an HPE Cloud Volume to Docker and name it. Import the HPE Cloud Volume named mycloudvol as a Docker volume named myvol3-imported . docker volume create -d cvblock -o importVol=mycloudvol --name=myvol3-imported","title":"Import a Volume to Docker"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#import_a_volume_snapshot_to_docker","text":"Use the create command with the importVolAsClone option to import an HPE Cloud Volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Cloud Volume using the snapshot option. Import the HPE Cloud Volumes snapshot mysnap1 on the volume mycloudvol as a Docker volume named myvol4-clone . docker volume create -d cvblock -o importVolAsClone=mycloudvol -o snapshot=mysnap1 --name=myvol4-clone Note If no snapshot is specified, the latest snapshot on the volume is imported.","title":"Import a volume snapshot to Docker"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#restore_an_offline_docker_volume_with_specified_snapshot","text":"It's important that the volume to be restored is in an offline state on the array. If the volume snapshot is not specified, the latest volume snapshot is used. docker volume create -d cvblock -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored","title":"Restore an offline Docker Volume with specified snapshot"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#list_volumes","text":"List Docker volumes. docker volume ls DRIVER VOLUME NAME cvblock:latest myvol1 cvblock:latest myvol1-clone","title":"List volumes"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#remove_a_docker_volume","text":"When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished. Note To delete volumes from the HPE Cloud Volumes portal using the remove command, the volume should have been created with a -o destroyOnRm flag. 
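For example, a volume intended to be fully deleted on docker volume rm could be created with the flag up front (a sketch, using a hypothetical volume name):
docker volume create -d cvblock -o destroyOnRm --name=myvol5-scratch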
Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin. Remove the volume named myvol1 . docker volume rm myvol1","title":"Remove a Docker Volume"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#uninstall","text":"The plugin can be removed using the docker plugin rm command. This command will not remove the configuration directory ( /etc/hpe-storage/ ). docker plugin rm cvblock","title":"Uninstall"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#troubleshooting","text":"The config directory is at /etc/hpe-storage/ . When a plugin is installed and enabled, the HPE Cloud Volumes certificates are created in the config directory. ls -l /etc/hpe-storage/ total 16 -r-------- 1 root root 1159 Aug 2 00:20 container_provider_host.cert -r-------- 1 root root 1671 Aug 2 00:20 container_provider_host.key -r-------- 1 root root 1521 Aug 2 00:20 container_provider_server.cert Additionally there is a config file volume-driver.json present at the same location. This file can be edited to set default parameters for create volumes for docker.","title":"Troubleshooting"},{"location":"docker_volume_plugins/hpe_cloud_volumes/index.html#log_file_location","text":"The docker plugin logs are located at /var/log/hpe-docker-plugin.log","title":"Log file location"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Introduction \u00b6 This is the documentation for HPE Nimble Storage Volume Plugin for Docker . It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes. Introduction Requirements Limitations Installation Plugin privileges Host configuration and installation Making changes Security considerations Configuration files and options Node fencing Usage Create a Docker Volume Clone a Docker Volume Provisioning Docker Volumes Import a volume to Docker Import a volume snapshot to Docker Restore an offline Docker Volume with specified snapshot List volumes Remove a Docker Volume Uninstall Troubleshooting Log file location Upgrade from older plugins Requirements \u00b6 Docker Engine 17.09 or greater If using Docker Enterprise Edition 2.x, the plugin is only supported in swarmmode Recent Red Hat, Debian or Ubuntu-based Linux distribution NimbleOS 5.0.8/5.1.3 or greater Plugin HPE Nimble Storage Version Release Notes 3.0.0 5.0.8.x and 5.1.3.x onwards v3.0.0 3.1.0 5.0.8.x and 5.1.3.x onwards v3.1.0 Note Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation. Limitations \u00b6 HPE Nimble Storage provides a Docker certified plugin delivered through the Docker Store. HPE Nimble Storage also provides a Docker Volume plugin for Windows Containers, it's available on HPE InfoSight along with its documentation . Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins. 
The managed plugin does NOT provide: Support for Docker's release of Kubernetes in Docker Enterprise Edition 2.x Support for older versions of NimbleOS (all versions below 5.x) Support for Windows Containers The managed plugin does provide a simple way to manage HPE Nimble Storage on your Docker hosts using Docker's interface to install and manage the plugin. Installation \u00b6 Plugin privileges \u00b6 In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly. Plugin \"nimble\" is requesting the following privileges: - network: [host] - mount: [/dev] - mount: [/run/lock] - mount: [/sys] - mount: [/etc] - mount: [/var/lib] - mount: [/var/run/docker.sock] - mount: [/sbin/iscsiadm] - mount: [/lib/modules] - mount: [/usr/lib64] - allow-all-devices: [true] - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD] Host configuration and installation \u00b6 Setting up the plugin varies between Linux distributions. The following workflows have been tested using a Nimble iSCSI group array at 192.168.171.74 with PROVIDER_USERNAME admin and PROVIDER_PASSWORD admin : These procedures require root privileges. Red Hat 7.5+, CentOS 7.5+: yum install -y iscsi-initiator-utils device-mapper-multipath docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0 docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin docker plugin enable nimble systemctl daemon-reload systemctl enable iscsid multipathd systemctl start iscsid multipathd Ubuntu 16.04 LTS and Ubuntu 18.04 LTS: apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0 docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable nimble systemctl daemon-reload systemctl restart open-iscsi multipath-tools Debian 9.x (stable): apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0 docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable nimble systemctl daemon-reload systemctl restart open-iscsi multipath-tools NOTE: To use the plugin on Fibre Channel environments use the PROTOCOL=FC environment variable. Making changes \u00b6 The docker plugin set command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable command. For example: docker plugin disable nimble List of parameters which are supported to be settable by the plugin. 
Parameter Description Default PROVIDER_IP HPE Nimble Storage array ip \"\" PROVIDER_USERNAME HPE Nimble Storage array username \"\" PROVIDER_PASSWORD HPE Nimble Storage array password \"\" PROVIDER_REMOVE Unassociate Plugin from HPE Nimble Storage array false LOG_LEVEL Log level of the plugin ( info , debug , or trace ) debug SCOPE Scope of the plugin ( global or local ) global PROTOCOL Scsi protocol supported by the plugin ( iscsi or fc ) iscsi Security considerations \u00b6 The HPE Nimble Storage credentials are visible to any user who can execute docker plugin inspect nimble . To limit credential visibility, the variables should be unset after certificates have been generated. The following set of steps can be used to accomplish this: Add the credentials docker plugin set PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin Start the plugin docker plugin enable nimble Stop the plugin docker plugin disable nimble Remove the credentials docker plugin set nimble PROVIDER_USERNAME=\"true\" PROVIDER_PASSWORD=\"true\" Start the plugin docker plugin enable nimble Note Certificates are stored in /etc/hpe-storage/ on the host and will be preserved across plugin updates. In the event of reassociating the plugin with a different HPE Nimble Storage group, certain procedures need to be followed: Disable the plugin docker plugin disable nimble Set new paramters docker plugin set nimble PROVIDER_REMOVE=true Enable the plugin docker plugin enable nimble Disable the plugin docker plugin disable nimble The plugin is now ready for re-configuration docker plugin set nimble PROVIDER_IP=< New IP address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false Note: The PROVIDER_REMOVE=false parameter must be set if the plugin ever has been unassociated from a HPE Nimble Storage group. Configuration files and options \u00b6 The configuration directory for the plugin is /etc/hpe-storage on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json file contains three sections, global , defaults and overrides . The global options are plugin runtime parameters and doesn't have any end-user configurable keys at this time. The defaults map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option. The overrides map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored. These maps are essential to discuss with the HPE Nimble Storage administrator. A common pattern is that a default protection template is selected for all volumes to fulfill a certain data protection policy enforced by the business it's serving. Another useful option is to override the volume placement options to allow a single HPE Nimble Storage array to provide multi-tenancy for docker environments. Note: defaults and overrides are dynamically read during runtime while global changes require a plugin restart. 
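Because global changes require a plugin restart, one way to pick up an edited global section is to briefly bounce the plugin (a sketch; expect volume operations to be unavailable while the plugin is disabled):
docker plugin disable nimble
docker plugin enable nimble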
Below is an example /etc/hpe-storage/volume-driver.json outlining the above use cases: { \"global\": { \"nameSuffix\": \".docker\" }, \"defaults\": { \"description\": \"Volume provisioned by Docker\", \"protectionTemplate\": \"Retain-90Daily\" }, \"overrides\": { \"folder\": \"docker-prod\" } } For an exhaustive list of options use the help option from the docker CLI: $ docker volume create -d nimble -o help Nimble Storage Docker Volume Driver: Create Help Create or Clone a Nimble Storage backed Docker Volume or Import an existing Nimble Volume or Clone of a Snapshot into Docker. Universal options: -o mountConflictDelay=X X is the number of seconds to delay a mount request when there is a conflict (default is 0) Create options: -o sizeInGiB=X X is the size of volume specified in GiB -o size=X X is the size of volume specified in GiB (short form of sizeInGiB) -o fsOwner=X X is the user id and group id that should own the root directory of the filesystem, in the form of [userId:groupId] -o fsMode=X X is 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem -o description=X X is the text to be added to volume description (optional) -o perfPolicy=X X is the name of the performance policy (optional) Performance Policies: Exchange 2003 data store, Exchange log, Exchange 2007 data store, SQL Server, SharePoint, Exchange 2010 data store, SQL Server Logs, SQL Server 2012, Oracle OLTP, Windows File Server, Other Workloads, DockerDefault, General, MariaDB, Veeam Backup Repository, Backup Repository -o pool=X X is the name of pool in which to place the volume Needed with -o folder (optional) -o folder=X X is the name of folder in which to place the volume Needed with -o pool (optional). -o encryption indicates that the volume should be encrypted (optional, dedupe and encryption are mutually exclusive) -o thick indicates that the volume should be thick provisioned (optional, dedupe and thick are mutually exclusive) -o dedupe indicates that the volume should be deduplicated -o limitIOPS=X X is the IOPS limit of the volume. IOPS limit should be in range [256, 4294967294] or -1 for unlimited. -o limitMBPS=X X is the MB/s throughput limit for this volume. If both limitIOPS and limitMBPS are specified, limitMBPS must not be hit before limitIOPS -o destroyOnRm indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is deleted -o syncOnUnmount only valid with \"protectionTemplate\", if the protectionTemplate includes a replica destination, unmount calls will snapshot and transfer the last delta to the destination. 
(optional) -o protectionTemplate=X X is the name of the protection template (optional) Protection Templates: General, Retain-90Daily, Retain-30Daily, Retain-48Hourly-30Daily-52Weekly Clone options: -o cloneOf=X X is the name of Docker Volume to create a clone of -o snapshot=X X is the name of the snapshot to base the clone on (optional, if missing, a new snapshot is created) -o createSnapshot indicates that a new snapshot of the volume should be taken and used for the clone (optional) -o destroyOnRm indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is deleted -o destroyOnDetach indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is unmounted or detached Import Volume options: -o importVol=X X is the name of the Nimble Volume to import -o pool=X X is the name of the pool in which the volume to be imported resides (optional) -o folder=X X is the name of the folder in which the volume to be imported resides (optional) -o forceImport forces the import of the volume. Note that overwrites application metadata (optional) -o restore restores the volume to the last snapshot taken on the volume (optional) -o snapshot=X X is the name of the snapshot which the volume will be restored to, only used with -o restore (optional) -o takeover indicates the current group will takeover the ownership of the Nimble volume and volume collection (optional) -o reverseRepl reverses the replication direction so that writes to the Nimble volume are replicated back to the group where it was replicated from (optional) Import Clone of Snapshot options: -o importVolAsClone=X X is the name of the Nimble Volume and Nimble Snapshot to clone and import -o snapshot=X X is the name of the Nimble snapshot to clone and import (optional, if missing, will use the most recent snapshot) -o createSnapshot indicates that a new snapshot of the volume should be taken and used for the clone (optional) -o pool=X X is the name of the pool in which the volume to be imported resides (optional) -o folder=X X is the name of the folder in which the volume to be imported resides (optional) -o destroyOnRm indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is deleted -o destroyOnDetach indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is unmounted or detached Node fencing \u00b6 If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported. Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node. During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. 
The volume is now fenced off and other nodes are unable to access any data in the volume. The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts. The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O. We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely. The following kernel parameters control the system behavior when a hung task is detected: # Reset after these many seconds after a panic kernel.panic = 5 # I do consider hung tasks reason enough to panic kernel.hung_task_panic = 1 # To not panic in vain, I'll wait these many seconds before I declare a hung task kernel.hung_task_timeout_secs = 150 Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf file and reboot the system. Important Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic. Usage \u00b6 These are some basic examples on how to use the HPE Nimble Storage Volume Plugin for Docker. Create a Docker Volume \u00b6 Using docker volume create . Note The plugin applies a set of default options when you create new volumes unless you override them using the volume create -o key=value option flags. Create a Docker volume with a custom description: docker volume create -d nimble -o description=\"My volume description\" --name myvol1 (Optional) Inspect the new volume: docker volume inspect myvol1 (Optional) Attach the volume to an interactive container. docker run -it --rm -v myvol1:/data bash The volume is mounted inside the container on /data . Clone a Docker Volume \u00b6 Use the docker volume create command with the cloneOf option to clone a Docker volume to a new Docker volume. Clone the Docker volume named myvol1 to a new Docker volume named myvol1-clone . docker volume create -d nimble -o cloneOf=myvol1 --name=myvol1-clone (Optional) Select a snapshot on which to base the clone. docker volume create -d nimble -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone Provisioning Docker Volumes \u00b6 There are several ways to provision a Docker volume depending on what tools are used: Docker Engine (CLI) Docker Compose file with either Docker UCP or Docker Engine The Docker Volume plugin leverages the existing Docker CLI and APIs, therefor all native Docker tools may be used to provision a volume. Note The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB. 
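As an illustration of the Docker Compose path, a Compose file can declare a volume backed by the plugin (a sketch with hypothetical service and volume names; the driver options are taken from the create help shown earlier):
cat > docker-compose.yml <<EOF
version: '3.7'
services:
  app:
    image: busybox
    command: sleep 3600
    volumes:
      - appdata:/data
volumes:
  appdata:
    driver: nimble
    driver_opts:
      sizeInGiB: '20'
      description: 'Volume provisioned through Docker Compose'
EOF
docker-compose up -d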
Config file volume-driver.json , which is stored at /etc/hpe-storage/volume-driver.json: { \"global\": {}, \"defaults\": { \"sizeInGiB\":\"10\", \"limitIOPS\":\"-1\", \"limitMBPS\":\"-1\", \"perfPolicy\": \"DockerDefault\", }, \"overrides\":{} } Import a volume to Docker \u00b6 Before you begin Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to either the CLI Administration Guide or the GUI Administration Guide on HPE InfoSight . Use the create command with the importVol option to import an HPE Nimble Storage volume to Docker and name it. Import the HPE Nimble Storage volume named mynimblevol as a Docker volume named myvol3-imported . docker volume create \u2013d nimble -o importVol=mynimblevol --name=myvol3-imported Import a volume snapshot to Docker \u00b6 Use the create command with the importVolAsClone option to import a HPE Nimble Storage volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Nimble Storage volume using the snapshot option. Import the HPE Nimble Storage snapshot aSnapshot on the volume importMe as a Docker volume named importedSnap . docker volume create -d nimble -o importVolAsClone=mynimblevol -o snapshot=mysnap1 --name=myvol4-clone Note If no snapshot is specified, the latest snapshot on the volume is imported. Restore an offline Docker Volume with specified snapshot \u00b6 It's important that the volume to be restored is in an offline state on the array. If the volume snapshot is not specified, the last volume snapshot is used. docker volume create -d nimble -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored List volumes \u00b6 List Docker volumes. docker volume ls DRIVER VOLUME NAME nimble:latest myvol1 nimble:latest myvol1-clone Remove a Docker Volume \u00b6 When you remove volumes from Docker control they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished. Note To delete volumes from the HPE Nimble Storage array using the remove command, the volume should have been created with a -o destroyOnRm flag. Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin. Remove the volume named myvol1 . docker volume rm myvol1 Uninstall \u00b6 The plugin can be removed using the docker plugin rm command. This command will not remove the configuration directory ( /etc/hpe-storage/ ). docker plugin rm nimble Important If this is the last plugin to reference the Nimble Group and to completely remove the configuration directory, follow the steps as below docker plugin set nimble PROVIDER_REMOVE=true docker plugin enable nimble docker plugin rm nimble Troubleshooting \u00b6 The config directory is at /etc/hpe-storage/ . When a plugin is installed and enabled, the Nimble Group certificates are created in the config directory. ls -l /etc/hpe-storage/ total 16 -r-------- 1 root root 1159 Aug 2 00:20 container_provider_host.cert -r-------- 1 root root 1671 Aug 2 00:20 container_provider_host.key -r-------- 1 root root 1521 Aug 2 00:20 container_provider_server.cert Additionally there is a config file volume-driver.json present at the same location. This file can be edited to set default parameters for create volumes for docker. 
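When triaging issues, it also helps to confirm that the plugin is installed and enabled and to review its current settings, for example (keep in mind that the inspect output includes the configured credentials, as noted under security considerations):
docker plugin ls
docker plugin inspect nimble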
Log file location \u00b6 The docker plugin logs are located at /var/log/hpe-docker-plugin.log Upgrade from older plugins \u00b6 Upgrading from 2.5.1 or older plugins, please follow the below steps Ubuntu 16.04 LTS and Ubuntu 18.04 LTS: docker plugin disable nimble:latest \u2013f docker plugin upgrade --grant-all-permissions nimble store/hpestorage/nimble:3.0.0 --skip-remote-check docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable nimble:latest Red Hat 7.5+, CentOS 7.5+, Oracle Enterprise Linux 7.5+ and Fedora 28+: docker plugin disable nimble:latest \u2013f docker plugin upgrade --grant-all-permissions nimble store/hpestorage/nimble:3.0.0 --skip-remote-check docker plugin enable nimble:latest Important In Swarm Mode, drain the existing running containers to the node where the plugin is upgraded.","title":"Index"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#introduction","text":"This is the documentation for HPE Nimble Storage Volume Plugin for Docker . It allows dynamic provisioning of Docker Volumes on standalone Docker Engine or Docker Swarm nodes. Introduction Requirements Limitations Installation Plugin privileges Host configuration and installation Making changes Security considerations Configuration files and options Node fencing Usage Create a Docker Volume Clone a Docker Volume Provisioning Docker Volumes Import a volume to Docker Import a volume snapshot to Docker Restore an offline Docker Volume with specified snapshot List volumes Remove a Docker Volume Uninstall Troubleshooting Log file location Upgrade from older plugins","title":"Introduction"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#requirements","text":"Docker Engine 17.09 or greater If using Docker Enterprise Edition 2.x, the plugin is only supported in swarmmode Recent Red Hat, Debian or Ubuntu-based Linux distribution NimbleOS 5.0.8/5.1.3 or greater Plugin HPE Nimble Storage Version Release Notes 3.0.0 5.0.8.x and 5.1.3.x onwards v3.0.0 3.1.0 5.0.8.x and 5.1.3.x onwards v3.1.0 Note Docker does not support certified and managed Docker Volume plugins with Docker EE Kubernetes. If you want to use Kubernetes on Docker EE with HPE Nimble Storage, please use the HPE Volume Driver for Kubernetes FlexVolume Plugin or the HPE CSI Driver for Kubernetes depending on your situation.","title":"Requirements"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#limitations","text":"HPE Nimble Storage provides a Docker certified plugin delivered through the Docker Store. HPE Nimble Storage also provides a Docker Volume plugin for Windows Containers, it's available on HPE InfoSight along with its documentation . Certain features and capabilities are not available through the managed plugin. Please understand these limitations before deploying either of these plugins. 
The managed plugin does NOT provide: Support for Docker's release of Kubernetes in Docker Enterprise Edition 2.x Support for older versions of NimbleOS (all versions below 5.x) Support for Windows Containers The managed plugin does provide a simple way to manage HPE Nimble Storage on your Docker hosts using Docker's interface to install and manage the plugin.","title":"Limitations"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#installation","text":"","title":"Installation"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#plugin_privileges","text":"In order to create connections, attach devices and mount file systems, the plugin requires more privileges than a standard application container. These privileges are enumerated during installation. These permissions need to be granted for the plugin to operate correctly. Plugin \"nimble\" is requesting the following privileges: - network: [host] - mount: [/dev] - mount: [/run/lock] - mount: [/sys] - mount: [/etc] - mount: [/var/lib] - mount: [/var/run/docker.sock] - mount: [/sbin/iscsiadm] - mount: [/lib/modules] - mount: [/usr/lib64] - allow-all-devices: [true] - capabilities: [CAP_SYS_ADMIN CAP_SYS_MODULE CAP_MKNOD]","title":"Plugin privileges"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#host_configuration_and_installation","text":"Setting up the plugin varies between Linux distributions. The following workflows have been tested using a Nimble iSCSI group array at 192.168.171.74 with PROVIDER_USERNAME admin and PROVIDER_PASSWORD admin : These procedures require root privileges. Red Hat 7.5+, CentOS 7.5+: yum install -y iscsi-initiator-utils device-mapper-multipath docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0 docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin docker plugin enable nimble systemctl daemon-reload systemctl enable iscsid multipathd systemctl start iscsid multipathd Ubuntu 16.04 LTS and Ubuntu 18.04 LTS: apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0 docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable nimble systemctl daemon-reload systemctl restart open-iscsi multipath-tools Debian 9.x (stable): apt-get install -y open-iscsi multipath-tools xfsprogs modprobe xfs sed -i\"\" -e \"\\$axfs\" /etc/modules docker plugin install --disable --grant-all-permissions --alias nimble store/nimblestorage/nimble:3.1.0 docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin iscsiadm.source=/usr/bin/iscsiadm glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable nimble systemctl daemon-reload systemctl restart open-iscsi multipath-tools NOTE: To use the plugin on Fibre Channel environments use the PROTOCOL=FC environment variable.","title":"Host configuration and installation"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#making_changes","text":"The docker plugin set command can only be used on the plugin if it is disabled. To disable the plugin, use the docker plugin disable command. For example: docker plugin disable nimble List of parameters which are supported to be settable by the plugin. 
Parameter Description Default PROVIDER_IP HPE Nimble Storage array ip \"\" PROVIDER_USERNAME HPE Nimble Storage array username \"\" PROVIDER_PASSWORD HPE Nimble Storage array password \"\" PROVIDER_REMOVE Unassociate Plugin from HPE Nimble Storage array false LOG_LEVEL Log level of the plugin ( info , debug , or trace ) debug SCOPE Scope of the plugin ( global or local ) global PROTOCOL Scsi protocol supported by the plugin ( iscsi or fc ) iscsi","title":"Making changes"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#security_considerations","text":"The HPE Nimble Storage credentials are visible to any user who can execute docker plugin inspect nimble . To limit credential visibility, the variables should be unset after certificates have been generated. The following set of steps can be used to accomplish this: Add the credentials docker plugin set PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin Start the plugin docker plugin enable nimble Stop the plugin docker plugin disable nimble Remove the credentials docker plugin set nimble PROVIDER_USERNAME=\"true\" PROVIDER_PASSWORD=\"true\" Start the plugin docker plugin enable nimble Note Certificates are stored in /etc/hpe-storage/ on the host and will be preserved across plugin updates. In the event of reassociating the plugin with a different HPE Nimble Storage group, certain procedures need to be followed: Disable the plugin docker plugin disable nimble Set new paramters docker plugin set nimble PROVIDER_REMOVE=true Enable the plugin docker plugin enable nimble Disable the plugin docker plugin disable nimble The plugin is now ready for re-configuration docker plugin set nimble PROVIDER_IP=< New IP address > PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin PROVIDER_REMOVE=false Note: The PROVIDER_REMOVE=false parameter must be set if the plugin ever has been unassociated from a HPE Nimble Storage group.","title":"Security considerations"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#configuration_files_and_options","text":"The configuration directory for the plugin is /etc/hpe-storage on the host. Files in this directory are preserved between plugin upgrades. The /etc/hpe-storage/volume-driver.json file contains three sections, global , defaults and overrides . The global options are plugin runtime parameters and doesn't have any end-user configurable keys at this time. The defaults map allows the docker host administrator to set default options during volume creation. The docker user may override these default options with their own values for a specific option. The overrides map allows the docker host administrator to enforce a certain option for every volume creation. The docker user may not override the option and any attempt to do so will be silently ignored. These maps are essential to discuss with the HPE Nimble Storage administrator. A common pattern is that a default protection template is selected for all volumes to fulfill a certain data protection policy enforced by the business it's serving. Another useful option is to override the volume placement options to allow a single HPE Nimble Storage array to provide multi-tenancy for docker environments. Note: defaults and overrides are dynamically read during runtime while global changes require a plugin restart. 
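As a concrete illustration of a per-volume override (a sketch that assumes the defaults shown in the example that follows), a docker user could still pick a different protection template for a single volume at create time:
docker volume create -d nimble -o protectionTemplate=Retain-30Daily --name=myvol6-retain30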
Below is an example /etc/hpe-storage/volume-driver.json outlining the above use cases: { \"global\": { \"nameSuffix\": \".docker\" }, \"defaults\": { \"description\": \"Volume provisioned by Docker\", \"protectionTemplate\": \"Retain-90Daily\" }, \"overrides\": { \"folder\": \"docker-prod\" } } For an exhaustive list of options use the help option from the docker CLI: $ docker volume create -d nimble -o help Nimble Storage Docker Volume Driver: Create Help Create or Clone a Nimble Storage backed Docker Volume or Import an existing Nimble Volume or Clone of a Snapshot into Docker. Universal options: -o mountConflictDelay=X X is the number of seconds to delay a mount request when there is a conflict (default is 0) Create options: -o sizeInGiB=X X is the size of volume specified in GiB -o size=X X is the size of volume specified in GiB (short form of sizeInGiB) -o fsOwner=X X is the user id and group id that should own the root directory of the filesystem, in the form of [userId:groupId] -o fsMode=X X is 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem -o description=X X is the text to be added to volume description (optional) -o perfPolicy=X X is the name of the performance policy (optional) Performance Policies: Exchange 2003 data store, Exchange log, Exchange 2007 data store, SQL Server, SharePoint, Exchange 2010 data store, SQL Server Logs, SQL Server 2012, Oracle OLTP, Windows File Server, Other Workloads, DockerDefault, General, MariaDB, Veeam Backup Repository, Backup Repository -o pool=X X is the name of pool in which to place the volume Needed with -o folder (optional) -o folder=X X is the name of folder in which to place the volume Needed with -o pool (optional). -o encryption indicates that the volume should be encrypted (optional, dedupe and encryption are mutually exclusive) -o thick indicates that the volume should be thick provisioned (optional, dedupe and thick are mutually exclusive) -o dedupe indicates that the volume should be deduplicated -o limitIOPS=X X is the IOPS limit of the volume. IOPS limit should be in range [256, 4294967294] or -1 for unlimited. -o limitMBPS=X X is the MB/s throughput limit for this volume. If both limitIOPS and limitMBPS are specified, limitMBPS must not be hit before limitIOPS -o destroyOnRm indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is deleted -o syncOnUnmount only valid with \"protectionTemplate\", if the protectionTemplate includes a replica destination, unmount calls will snapshot and transfer the last delta to the destination. 
(optional) -o protectionTemplate=X X is the name of the protection template (optional) Protection Templates: General, Retain-90Daily, Retain-30Daily, Retain-48Hourly-30Daily-52Weekly Clone options: -o cloneOf=X X is the name of Docker Volume to create a clone of -o snapshot=X X is the name of the snapshot to base the clone on (optional, if missing, a new snapshot is created) -o createSnapshot indicates that a new snapshot of the volume should be taken and used for the clone (optional) -o destroyOnRm indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is deleted -o destroyOnDetach indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is unmounted or detached Import Volume options: -o importVol=X X is the name of the Nimble Volume to import -o pool=X X is the name of the pool in which the volume to be imported resides (optional) -o folder=X X is the name of the folder in which the volume to be imported resides (optional) -o forceImport forces the import of the volume. Note that overwrites application metadata (optional) -o restore restores the volume to the last snapshot taken on the volume (optional) -o snapshot=X X is the name of the snapshot which the volume will be restored to, only used with -o restore (optional) -o takeover indicates the current group will takeover the ownership of the Nimble volume and volume collection (optional) -o reverseRepl reverses the replication direction so that writes to the Nimble volume are replicated back to the group where it was replicated from (optional) Import Clone of Snapshot options: -o importVolAsClone=X X is the name of the Nimble Volume and Nimble Snapshot to clone and import -o snapshot=X X is the name of the Nimble snapshot to clone and import (optional, if missing, will use the most recent snapshot) -o createSnapshot indicates that a new snapshot of the volume should be taken and used for the clone (optional) -o pool=X X is the name of the pool in which the volume to be imported resides (optional) -o folder=X X is the name of the folder in which the volume to be imported resides (optional) -o destroyOnRm indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is deleted -o destroyOnDetach indicates that the Nimble volume (including snapshots) backing this volume should be destroyed when this volume is unmounted or detached","title":"Configuration files and options"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#node_fencing","text":"If you are considering using any Docker clustering technologies for your Docker deployment, it is important to understand the fencing mechanism used to protect data. Attaching the same Docker Volume to multiple containers on the same host is fully supported. Mounting the same volume on multiple hosts is not supported. Docker does not provide a fencing mechanism for nodes that have become disconnected from the Docker Swarm. This results in the isolated nodes continuing to run their containers. When the containers are rescheduled on a surviving node, the Docker Engine will request that the Docker Volume(s) be mounted. In order to prevent data corruption, the Docker Volume Plugin will stop serving the Docker Volume to the original node before mounting it on the newly requested node. During a mount request, the Docker Volume Plugin inspects the ACR (Access Control Record) on the volume. 
If the ACR does not match the initiator requesting to mount the volume, the ACR is removed and the volume taken offline. The volume is now fenced off and other nodes are unable to access any data in the volume. The volume then receives a new ACR matching the requesting initiator, and it is mounted for the container requesting the volume. This is done because the volumes are formatted with XFS, which is not a clustered filesystem and can be corrupted if the same volume is mounted to multiple hosts. The side effect of a fenced node is that I/O hangs indefinitely, and the initiator is rejected during login. If the fenced node rejoins the Docker Swarm using Docker SwarmKit, the swarm tries to shut down the services that were rescheduled elsewhere to maintain the desired replica set for the service. This operation will also hang indefinitely waiting for I/O. We recommend running a dedicated Docker host that does not host any other critical applications besides the Docker Engine. Doing this supports a safe way to reboot a node after a grace period and have it start cleanly when a hung task is detected. Otherwise, the node can remain in the hung state indefinitely. The following kernel parameters control the system behavior when a hung task is detected: # Reset after these many seconds after a panic kernel.panic = 5 # I do consider hung tasks reason enough to panic kernel.hung_task_panic = 1 # To not panic in vain, I'll wait these many seconds before I declare a hung task kernel.hung_task_timeout_secs = 150 Add these parameters to the /etc/sysctl.d/99-hung_task_timeout.conf file and reboot the system. Important Docker SwarmKit declares a node as failed after five (5) seconds. Services are then rescheduled and up and running again in less than ten (10) seconds. The parameters noted above provide the system a way to manage other tasks that may appear to be hung and avoid a system panic.","title":"Node fencing"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#usage","text":"These are some basic examples on how to use the HPE Nimble Storage Volume Plugin for Docker.","title":"Usage"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#create_a_docker_volume","text":"Using docker volume create . Note The plugin applies a set of default options when you create new volumes unless you override them using the volume create -o key=value option flags. Create a Docker volume with a custom description: docker volume create -d nimble -o description=\"My volume description\" --name myvol1 (Optional) Inspect the new volume: docker volume inspect myvol1 (Optional) Attach the volume to an interactive container. docker run -it --rm -v myvol1:/data bash The volume is mounted inside the container on /data .","title":"Create a Docker Volume"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#clone_a_docker_volume","text":"Use the docker volume create command with the cloneOf option to clone a Docker volume to a new Docker volume. Clone the Docker volume named myvol1 to a new Docker volume named myvol1-clone . docker volume create -d nimble -o cloneOf=myvol1 --name=myvol1-clone (Optional) Select a snapshot on which to base the clone. 
docker volume create -d nimble -o snapshot=mysnap1 -o cloneOf=myvol1 --name=myvol2-clone","title":"Clone a Docker Volume"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#provisioning_docker_volumes","text":"There are several ways to provision a Docker volume depending on what tools are used: Docker Engine (CLI) Docker Compose file with either Docker UCP or Docker Engine The Docker Volume plugin leverages the existing Docker CLI and APIs, therefore all native Docker tools may be used to provision a volume. Note The plugin applies a set of default volume create options. Unless you override the default options using the volume option flags, the defaults are applied when you create volumes. For example, the default volume size is 10GiB. Config file volume-driver.json , which is stored at /etc/hpe-storage/volume-driver.json: { \"global\": {}, \"defaults\": { \"sizeInGiB\":\"10\", \"limitIOPS\":\"-1\", \"limitMBPS\":\"-1\", \"perfPolicy\": \"DockerDefault\" }, \"overrides\":{} }","title":"Provisioning Docker Volumes"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#import_a_volume_to_docker","text":"Before you begin Take the volume you want to import offline before importing it. For information about how to take a volume offline, refer to either the CLI Administration Guide or the GUI Administration Guide on HPE InfoSight . Use the create command with the importVol option to import an HPE Nimble Storage volume to Docker and name it. Import the HPE Nimble Storage volume named mynimblevol as a Docker volume named myvol3-imported . docker volume create -d nimble -o importVol=mynimblevol --name=myvol3-imported","title":"Import a volume to Docker"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#import_a_volume_snapshot_to_docker","text":"Use the create command with the importVolAsClone option to import an HPE Nimble Storage volume snapshot as a Docker volume. Optionally, specify a particular snapshot on the HPE Nimble Storage volume using the snapshot option. Import the HPE Nimble Storage snapshot mysnap1 on the volume mynimblevol as a Docker volume named myvol4-clone . docker volume create -d nimble -o importVolAsClone=mynimblevol -o snapshot=mysnap1 --name=myvol4-clone Note If no snapshot is specified, the latest snapshot on the volume is imported.","title":"Import a volume snapshot to Docker"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#restore_an_offline_docker_volume_with_specified_snapshot","text":"It's important that the volume to be restored is in an offline state on the array. If the volume snapshot is not specified, the last volume snapshot is used. docker volume create -d nimble -o importVol=myvol1.docker -o forceImport -o restore -o snapshot=mysnap1 --name=myvol1-restored","title":"Restore an offline Docker Volume with specified snapshot"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#list_volumes","text":"List Docker volumes. docker volume ls DRIVER VOLUME NAME nimble:latest myvol1 nimble:latest myvol1-clone","title":"List volumes"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#remove_a_docker_volume","text":"When you remove volumes from Docker control, they are set to the offline state on the array. Access to the volumes and related snapshots using the Docker Volume plugin can be reestablished. Note To delete volumes from the HPE Nimble Storage array using the remove command, the volume should have been created with a -o destroyOnRm flag. 
Important: Be aware that when this option is set to true, volumes and all related snapshots are deleted from the group, and can no longer be accessed by the Docker Volume plugin. Remove the volume named myvol1 . docker volume rm myvol1","title":"Remove a Docker Volume"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#uninstall","text":"The plugin can be removed using the docker plugin rm command. This command will not remove the configuration directory ( /etc/hpe-storage/ ). docker plugin rm nimble Important If this is the last plugin to reference the Nimble Group and you want to completely remove the configuration directory, follow the steps below: docker plugin set nimble PROVIDER_REMOVE=true docker plugin enable nimble docker plugin rm nimble","title":"Uninstall"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#troubleshooting","text":"The config directory is at /etc/hpe-storage/ . When a plugin is installed and enabled, the Nimble Group certificates are created in the config directory. ls -l /etc/hpe-storage/ total 16 -r-------- 1 root root 1159 Aug 2 00:20 container_provider_host.cert -r-------- 1 root root 1671 Aug 2 00:20 container_provider_host.key -r-------- 1 root root 1521 Aug 2 00:20 container_provider_server.cert Additionally, there is a config file volume-driver.json present at the same location. This file can be edited to set default parameters for creating Docker volumes.","title":"Troubleshooting"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#log_file_location","text":"The Docker plugin logs are located at /var/log/hpe-docker-plugin.log","title":"Log file location"},{"location":"docker_volume_plugins/hpe_nimble_storage/index.html#upgrade_from_older_plugins","text":"When upgrading from plugin version 2.5.1 or older, follow the steps below. Ubuntu 16.04 LTS and Ubuntu 18.04 LTS: docker plugin disable nimble:latest -f docker plugin upgrade --grant-all-permissions nimble store/hpestorage/nimble:3.0.0 --skip-remote-check docker plugin set nimble PROVIDER_IP=192.168.1.1 PROVIDER_USERNAME=admin PROVIDER_PASSWORD=admin glibc_libs.source=/lib/x86_64-linux-gnu docker plugin enable nimble:latest Red Hat 7.5+, CentOS 7.5+, Oracle Enterprise Linux 7.5+ and Fedora 28+: docker plugin disable nimble:latest -f docker plugin upgrade --grant-all-permissions nimble store/hpestorage/nimble:3.0.0 --skip-remote-check docker plugin enable nimble:latest Important In Swarm Mode, drain the existing running containers to the node where the plugin is upgraded.","title":"Upgrade from older plugins"},{"location":"ezmeral/install.html","text":"Introduction \u00b6 HPE Ezmeral Runtime Enterprise deploys and manages open source upstream Kubernetes clusters through its management console. It's also capable of importing foreign Kubernetes clusters. This guide describes the necessary steps to perform a successful deployment of the HPE CSI Driver for Kubernetes on HPE Ezmeral Runtime Enterprise managed clusters. Prerequisites \u00b6 It's up to the HPE Ezmeral Runtime Enterprise administrator who deploys Kubernetes clusters to ensure that the particular version of the CSI driver (e.g. v2.0.0) is supported with the following components. HPE Ezmeral Runtime Enterprise worker node host operating system HPE Ezmeral Runtime Enterprise deployed Kubernetes cluster version Examine the table found in the Compatibility and Support section of the CSI driver overview. Particular Container Storage Providers may have additional prerequisites. 
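A quick way to gather this information is with standard kubectl commands (a minimal sketch; compare the output against the Compatibility and Support table): kubectl version kubectl get nodes -o wide The second command lists each worker node's OS image, kernel version and container runtime alongside the Kubernetes version.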
Version 5.4.0 and later \u00b6 In Ezmeral 5.4.0 and later, an exception has been added to the \"hpe-storage\" Namespace . Proceed to Installation and disregard any steps outlined in this guide. Note If the HPE CSI Driver built-in NFS Server Provisioner will be used, an exception needs to be granted to the \"hpe-nfs\" Namespace . Run: kubectl patch --type json -p '[{\"op\": \"add\", \"path\": \"/spec/match/excludedNamespaces/-\", \"value\": \"hpe-nfs\"}]' k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container Version 5.3.0 \u00b6 The CSI driver needs privileged access to the worker nodes to attach and detach storage devices. By default, an admission controller prevents all user deployed workloads access to the host filesystem. An exception needs to be created for the \"hpe-storage\" Namespace . As a Kubernetes cluster admin, run the following. kubectl create ns hpe-storage kubectl patch --type json -p '[{\"op\":\"add\",\"path\":\"/spec/unrestrictedFsMountNamespaces/-\",\"value\":\"hpe-storage\"}]' hpecpconfigs/hpecp-global-config -n hpecp Caution In theory you may use any Namespace name desired. This might change in a future release and it's encouraged to use \"hpe-storage\" for compatibility with upcoming releases of HPE Ezmeral Runtime Enterprise. By not performing this configuration change, the following events will be seen on the CSI controller ReplicaSet or CSI node DaemonSet trying to schedule Pods . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreate 2m4s (x17 over 7m32s) replicaset-controller Error creating: admission webhook \"soft-validate.hpecp.hpe.com\" denied the request: Hostpath (\"/\") referenced in volume is not valid for this namespace because of FS Mount protections. Version 5.2.0 or earlier \u00b6 Early versions of HPE Ezmeral Runtime Enterprise (HPE Container Platform, HPE Ezmeral Container Platform) contained a checkbox to deploy the HPE CSI Driver for Kubernetes. This method is not supported. Make sure clusters are deployed without the checkbox ticked. Continue with Installation . Installation \u00b6 Any method to install the HPE CSI Driver for Kubernetes on an HPE Ezmeral Runtime Enterprise managed Kubernetes cluster is supported. Helm is strongly recommended. Make sure to deploy the CSI driver to the \"hpe-storage\" Namespace for future compatibility. HPE CSI Driver for Kubernetes Helm chart on Artifact Hub (recommended) HPE CSI Operator for Kubernetes on OperatorHub.io Advanced Install using YAML manifests Important In some deployments of Ezmeral the kubelet root has been relocated, in those circumstances you'll see errors similar to: Error: command mount failed with rc=32 err=mount: /dev/mapper/mpathh is already mounted or /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid busy /dev/mapper/mpathh is already mounted on /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid . In this case it's recommended to install the CSI driver using Helm with the --set kubeletRootDir=/var/lib/docker/kubelet parameter.","title":"Install HPE CSI Driver"},{"location":"ezmeral/install.html#introduction","text":"HPE Ezmeral Runtime Enterprise deploys and manages open source upstream Kubernetes clusters through its management console. It's also capable of importing foreign Kubernetes clusters. 
This guide describes the necessary steps to perform a successful deployment of the HPE CSI Driver for Kubernetes on HPE Ezmeral Runtime Enterprise managed clusters.","title":"Introduction"},{"location":"ezmeral/install.html#prerequisites","text":"It's up to the HPE Ezmeral Runtime Enterprise administrator who deploys Kubernetes clusters to ensure that the particular version of the CSI driver (i.e v2.0.0) is supported with the following components. HPE Ezmeral Runtime Enterprise worker node host operating system HPE Ezmeral Runtime Enterprise deployed Kubernetes cluster version Examine the table found in the Compatibility and Support section of the CSI driver overview. Particular Container Storage Providers may have additional prerequisites.","title":"Prerequisites"},{"location":"ezmeral/install.html#version_540_and_later","text":"In Ezmeral 5.4.0 and later, an exception has been added to the \"hpe-storage\" Namespace . Proceed to Installation and disregard any steps outlined in this guide. Note If the HPE CSI Driver built-in NFS Server Provisioner will be used, an exception needs to be granted to the \"hpe-nfs\" Namespace . Run: kubectl patch --type json -p '[{\"op\": \"add\", \"path\": \"/spec/match/excludedNamespaces/-\", \"value\": \"hpe-nfs\"}]' k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container","title":"Version 5.4.0 and later"},{"location":"ezmeral/install.html#version_530","text":"The CSI driver needs privileged access to the worker nodes to attach and detach storage devices. By default, an admission controller prevents all user deployed workloads access to the host filesystem. An exception needs to be created for the \"hpe-storage\" Namespace . As a Kubernetes cluster admin, run the following. kubectl create ns hpe-storage kubectl patch --type json -p '[{\"op\":\"add\",\"path\":\"/spec/unrestrictedFsMountNamespaces/-\",\"value\":\"hpe-storage\"}]' hpecpconfigs/hpecp-global-config -n hpecp Caution In theory you may use any Namespace name desired. This might change in a future release and it's encouraged to use \"hpe-storage\" for compatibility with upcoming releases of HPE Ezmeral Runtime Enterprise. By not performing this configuration change, the following events will be seen on the CSI controller ReplicaSet or CSI node DaemonSet trying to schedule Pods . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreate 2m4s (x17 over 7m32s) replicaset-controller Error creating: admission webhook \"soft-validate.hpecp.hpe.com\" denied the request: Hostpath (\"/\") referenced in volume is not valid for this namespace because of FS Mount protections.","title":"Version 5.3.0"},{"location":"ezmeral/install.html#version_520_or_earlier","text":"Early versions of HPE Ezmeral Runtime Enterprise (HPE Container Platform, HPE Ezmeral Container Platform) contained a checkbox to deploy the HPE CSI Driver for Kubernetes. This method is not supported. Make sure clusters are deployed without the checkbox ticked. Continue with Installation .","title":"Version 5.2.0 or earlier"},{"location":"ezmeral/install.html#installation","text":"Any method to install the HPE CSI Driver for Kubernetes on an HPE Ezmeral Runtime Enterprise managed Kubernetes cluster is supported. Helm is strongly recommended. Make sure to deploy the CSI driver to the \"hpe-storage\" Namespace for future compatibility. 
HPE CSI Driver for Kubernetes Helm chart on Artifact Hub (recommended) HPE CSI Operator for Kubernetes on OperatorHub.io Advanced Install using YAML manifests Important In some deployments of Ezmeral the kubelet root has been relocated, in those circumstances you'll see errors similar to: Error: command mount failed with rc=32 err=mount: /dev/mapper/mpathh is already mounted or /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid busy /dev/mapper/mpathh is already mounted on /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-uuid . In this case it's recommended to install the CSI driver using Helm with the --set kubeletRootDir=/var/lib/docker/kubelet parameter.","title":"Installation"},{"location":"flexvolume_driver/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Legacy FlexVolume drivers \u00b6 Container Provider-based: HPE Nimble Storage and HPE Cloud Volumes Ansible installer for HPE 3PAR and Primera","title":"Index"},{"location":"flexvolume_driver/index.html#legacy_flexvolume_drivers","text":"Container Provider-based: HPE Nimble Storage and HPE Cloud Volumes Ansible installer for HPE 3PAR and Primera","title":"Legacy FlexVolume drivers"},{"location":"flexvolume_driver/container_provider/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Overview \u00b6 The HPE Volume Driver for Kubernetes FlexVolume Plugin leverages HPE Nimble Storage or HPE Cloud Volumes to provide scalable and persistent storage for stateful applications. Important Using HPE Nimble Storage with Kubernetes 1.13 and newer, please use the HPE CSI Driver for Kubernetes . Source code and developer documentation is available in the hpe-storage/flexvolume-driver GitHub repo. Overview Platform requirements HPE Nimble Storage Platform Requirements HPE Cloud Volumes Platform Requirements Deploying to Kubernetes Step 1: Create a secret HPE Nimble Storage HPE Cloud Volumes Step 2. Create a ConfigMap HPE Nimble Storage HPE Cloud Volumes Step 3. Deploy the FlexVolume driver and dynamic provisioner HPE Nimble Storage HPE Cloud Volumes Using Sample StorageClass Test and verify volume provisioning Use case specific examples Data protection Clone and throttle for devs Clone a non-containerized volume Import (cutover) a volume Using overrides Creating clones of PVCs StorageClass parameters HPE Nimble Storage StorageClass parameters Common parameters for Provisioning and Cloning Provisioning parameters Cloning parameters Import parameters HPE Cloud Volumes StorageClass parameters Common parameters for Provisioning and Cloning Provisioning parameters Cloning parameters Import parameters Diagnostics Troubleshooting FlexVolume driver Locations Override defaults Connectivity FlexVolume and dynamic provisioner driver logs Log Collector Advanced Configuration Set defaults at the compute node level Global options Common Platform requirements \u00b6 The FlexVolume driver supports multiple backends that are based on a \"container provider\" architecture. Currently, Nimble and Cloud Volumes are supported. 
HPE Nimble Storage Platform Requirements \u00b6 Driver HPE Nimble Storage Version Release Notes Blog v3.0.0 5.0.8.x and 5.1.3.x onwards v3.0.0 HPE Storage Tech Insiders v3.1.0 5.0.8.x and 5.1.3.x onwards v3.1.0 OpenShift Container Platform 3.9, 3.10 and 3.11. Kubernetes 1.10 and above. Redhat/CentOS 7.5+ Ubuntu 16.04/18.04 LTS Note: Synchronous replication (Peer Persistence) is not supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin. HPE Cloud Volumes Platform Requirements \u00b6 Driver Release Notes Blog v3.1.0 v3.1.0 Using HPE Cloud Volumes with Amazon EKS Amazon EKS 1.12/1.13 Microsoft Azure AKS 1.12/1.13 US regions only Important HPE Cloud Volumes was introduced in HPE CSI Driver for Kubernetes v1.5.0. Make sure to check if your cloud is supported by the CSI driver first. Deploying to Kubernetes \u00b6 The recommended way to deploy and manage the HPE Volume Driver for Kubernetes FlexVolume Plugin is to use Helm. Please see the co-deployments repository for further information. Use the following steps for a manual installation. Step 1: Create a secret \u00b6 HPE Nimble Storage \u00b6 Replace the password string ( YWRtaW4= ) below with a base64 encoded version of your password and replace the backend with your array IP address and save it as hpe-secret.yaml . apiVersion: v1 kind: Secret metadata: name: hpe-secret namespace: kube-system stringData: backend: 192.168.1.1 username: admin protocol: \"iscsi\" data: # echo -n \"admin\" | base64 password: YWRtaW4= HPE Cloud Volumes \u00b6 Replace the username and password strings ( YWRtaW4= ) with a base64 encoded version of your HPE Cloud Volumes \"access_key\" and \"access_secret\". Also, replace the backend with HPE Cloud Volumes portal fully qualified domain name (FQDN) and save it as hpe-secret.yaml . apiVersion: v1 kind: Secret metadata: name: hpe-secret namespace: kube-system stringData: backend: cloudvolumes.hpe.com protocol: \"iscsi\" serviceName: cv-cp-svc servicePort: \"8080\" data: # echo -n \"\" | base64 username: YWRtaW4= # echo -n \"\" | base64 password: YWRtaW4= Create the secret: kubectl create -f hpe-secret.yaml secret \"hpe-secret\" created You should now see the HPE secret in the kube-system namespace. kubectl get secret/hpe-secret -n kube-system NAME TYPE DATA AGE hpe-secret Opaque 5 3s Step 2. Create a ConfigMap \u00b6 The ConfigMap is used to set and tweak defaults for both the FlexVolume driver and Dynamic Provisioner. HPE Nimble Storage \u00b6 Edit the below default parameters as required for FlexVolume driver and save it as hpe-config.yaml . kind: ConfigMap apiVersion: v1 metadata: name: hpe-config namespace: kube-system data: volume-driver.json: |- { \"global\": {}, \"defaults\": { \"limitIOPS\":\"-1\", \"limitMBPS\":\"-1\", \"perfPolicy\": \"Other\" }, \"overrides\":{} } Tip Please see Advanced for more volume-driver.json configuration options. HPE Cloud Volumes \u00b6 Edit the below parameters as required with your public cloud info and save it as hpe-config.yaml . 
kind: ConfigMap apiVersion: v1 metadata: name: hpe-config namespace: kube-system data: volume-driver.json: |- { \"global\": { \"snapPrefix\": \"BaseFor\", \"initiators\": [\"eth0\"], \"automatedConnection\": true, \"existingCloudSubnet\": \"10.1.0.0/24\", \"region\": \"us-east-1\", \"privateCloud\": \"vpc-data\", \"cloudComputeProvider\": \"Amazon AWS\" }, \"defaults\": { \"limitIOPS\": 1000, \"fsOwner\": \"0:0\", \"fsMode\": \"600\", \"description\": \"Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin\", \"perfPolicy\": \"Other\", \"protectionTemplate\": \"twicedaily:4\", \"encryption\": true, \"volumeType\": \"PF\", \"destroyOnRm\": true }, \"overrides\": { } } Create the ConfigMap : kubectl create -f hpe-config.yaml configmap/hpe-config created Step 3. Deploy the FlexVolume driver and dynamic provisioner \u00b6 Deploy the driver as a DaemonSet and the dynamic provisioner as a Deployment . HPE Nimble Storage \u00b6 Version 3.0.0: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.0.0.yaml Version 3.1.0: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.1.0.yaml HPE Cloud Volumes \u00b6 Container-Provider Service: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-cp-v3.1.0.yaml The FlexVolume driver has different declarations depending on the Kubernetes distribution. Amazon EKS: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-aws-flexvolume-driver-v3.1.0.yaml Microsoft Azure AKS: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-azure-flexvolume-driver-v3.1.0.yaml Generic: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-flexvolume-driver-v3.1.0.yaml Note The declarations for the HPE Volume Driver for Kubernetes FlexVolume Plugin can be found in the co-deployments repository. Check that all hpe-flexvolume-driver Pods (one per compute node) and the hpe-dynamic-provisioner Pod are running. kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE hpe-flexvolume-driver-2rdt4 1/1 Running 0 45s hpe-flexvolume-driver-md562 1/1 Running 0 44s hpe-flexvolume-driver-x4k96 1/1 Running 0 44s hpe-dynamic-provisioner-59f9d495d4-hxh29 1/1 Running 0 24s For HPE Cloud Volumes, check that the hpe-cv-cp Pod is running as well. kubectl get pods -n kube-system -l=app=cv-cp NAME READY STATUS RESTARTS AGE hpe-cv-cp-2rdt4 1/1 Running 0 45s Using \u00b6 Get started using the FlexVolume driver by setting up StorageClass and PVC API objects. See Using for examples. These instructions are provided as an example of how to use the HPE Volume Driver for Kubernetes FlexVolume Plugin with an HPE Nimble Storage array. The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) Tip Some of the examples supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin are available for HPE Nimble Storage or HPE Cloud Volumes in the GitHub repo. 
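For example, the copy & paste workflow above can be expressed as a shell here-document (a minimal sketch; the StorageClass used here is the sc-nimble example from the next section):
kubectl create -f- <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nimble
provisioner: hpe.com/nimble
parameters:
  description: "Volume from HPE FlexVolume driver"
  perfPolicy: "Other Workloads"
  limitIOPS: "76800"
EOF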
To get started, create a StorageClass API object referencing the hpe-secret and defining additional (optional) StorageClass parameters: Sample StorageClass \u00b6 Sample storage classes can be found for HPE Nimble Storage and HPE Cloud Volumes . Hint See StorageClass parameters for HPE Nimble Storage and HPE Cloud Volumes for a comprehensive overview. Test and verify volume provisioning \u00b6 Create a StorageClass with volume parameters as required. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: sc-nimble provisioner: hpe.com/nimble parameters: description: \"Volume from HPE FlexVolume driver\" perfPolicy: \"Other Workloads\" limitIOPS: \"76800\" Create a PersistentVolumeClaim . This makes sure a volume is created and provisioned on your behalf: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-nimble spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: sc-nimble Check that a new PersistentVolume is created based on your claim: kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE sc-nimble-13336da3-7ca3-11e9-826c-00505693581f 10Gi RWO Delete Bound default/pvc-nimble sc-nimble 3s The above output means that the FlexVolume driver successfully provisioned a new volume and bound the requesting PVC to a new PV . The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container: kind: Pod apiVersion: v1 metadata: name: pod-nimble spec: containers: - name: pod-nimble-con-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data - name: pod-nimble-cont-2 image: debian command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data volumes: - name: export1 persistentVolumeClaim: claimName: pvc-nimble Check that the Pod is running successfully: kubectl get pod pod-nimble NAME READY STATUS RESTARTS AGE pod-nimble 2/2 Running 0 2m29s Use case specific examples \u00b6 These StorageClass examples help guide combinations of options when provisioning volumes. Data protection \u00b6 This StorageClass creates thinly provisioned volumes with deduplication turned on. It will also apply the Performance Policy \"SQL Server\" along with a Protection Template. The Protection Template needs to be defined on the array. 
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: import-clone-legacy-prod rovisioner: hpe.com/nimble parameters: pool: \"flash\" importVolAsClone: \"production-db-vol\" destroyOnRm: \"true\" Import (cutover) a volume \u00b6 This StorageClass will import an existing Nimble volume to Kubernetes. The source volume needs to be offline for the import to succeed. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: import-clone-legacy-prod provisioner: hpe.com/nimble parameters: pool: \"flash\" importVol: \"production-db-vol\" Using overrides \u00b6 The HPE Dynamic Provisioner for Kubernetes understands a set of annotation keys a user can set on a PVC . If the corresponding keys exists in the list of the allowOverrides key in the StorageClass , the end-user can tweak certain aspects of the provisioning workflow. This opens up for very advanced data services. StorageClass object: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: my-sc provisioner: hpe.com/nimble parameters: description: \"Volume provisioned by StorageClass my-sc\" dedupe: \"false\" destroyOnRm: \"true\" perfPolicy: \"Windows File Server\" folder: \"myfolder\" allowOverrides: snapshot,limitIOPS,perfPolicy PersistentVolumeClaim object: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc annotations: hpe.com/description: \"This is my custom description\" hpe.com/limitIOPS: \"8000\" hpe.com/perfPolicy: \"SQL Server\" spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: my-sc This will create a PV of 8000 IOPS with the Performance Policy of \"SQL Server\" and a custom volume description. Creating clones of PVCs \u00b6 Using a StorageClass to clone a PV is practical when there's needs to clone across namespaces (for example from prod to test or stage). If a user wants to clone any arbitrary volume, it becomes a bit tedious to create a StorageClass for each clone. The annotation hpe.com/CloneOfPVC allows a user to clone any PVC within a namespace. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-clone annotations: hpe.com/cloneOfPVC: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: my-sc StorageClass parameters \u00b6 This section highlights all the available StorageClass parameters that are supported. HPE Nimble Storage StorageClass parameters \u00b6 A StorageClass is used to provision or clone an HPE Nimble Storage-backed persistent volume. It can also be used to import an existing HPE Nimble Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. A sample StorageClass is provided. Note These are optional parameters. Common parameters for Provisioning and Cloning \u00b6 These parameters are mutable betweeen a parent volume and creating a clone from a snapshot. Parameter String Description nameSuffix Text Suffix to append to Nimble volumes. Defaults to .docker destroyOnRm Boolean Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. limitIOPS Integer The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). limitMBPS Integer The MB/s throughput limit for the volume. description Text Text to be added to the volume's description on the Nimble array. perfPolicy Text The name of the performance policy to assign to the volume. 
Default example performance policies include \"Backup Repository\", \"Exchange 2003 data store\", \"Exchange 2007 data store\", \"Exchange 2010 data store\", \"Exchange log\", \"Oracle OLTP\", \"Other Workloads\", \"SharePoint\", \"SQL Server\", \"SQL Server 2012\", \"SQL Server Logs\". protectionTemplate Text The name of the protection template to assign to the volume. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". folder Text The name of the Nimble folder in which to place the volume. thick Boolean Indicates that the volume should be thick provisioned. dedupeEnabled Boolean Indicates that the volume should enable deduplication. syncOnUnmount Boolean Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Note Performance Policies, Folders and Protection Templates are Nimble-specific constructs that can be created on the Nimble array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight . Provisioning parameters \u00b6 These parameters are immutable for clones once a volume has been created. Parameter String Description fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. encryption Boolean Indicates that the volume should be encrypted. pool Text The name of the pool in which to place the volume. Cloning parameters \u00b6 Two modes of cloning are supported. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Nimble volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the Nimble volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. snapshotPrefix Text A prefix to add to the beginning of the snapshot name. Import parameters \u00b6 Importing volumes to Kubernetes requires the source Nimble volume to be offline. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin. Parameter String Description importVol Text The name of the Nimble volume to import. snapshot Text The name of the Nimble snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. restore Boolean Restores the volume to the last snapshot taken on the volume. takeover Boolean Indicates the current group will take over ownership of the Nimble volume and volume collection. This should be performed against a downstream replica. reverseRepl Boolean Reverses the replication direction so that writes to the Nimble volume are replicated back to the group where it was replicated from. forceImport Boolean Forces the import of a volume that is not owned by the group and is not part of a volume collection. 
If the volume is part of a volume collection, use takeover instead. Note HPE Nimble Docker Volume workflows work with a 1:1 mapping between volume and volume collection. HPE Cloud Volumes StorageClass parameters \u00b6 A StorageClass is used to provision or clone an HPE Cloud Volumes-backed persistent volume. It can also be used to import an existing HPE Cloud Volumes volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. A sample StorageClass is provided. Note These are optional parameters. Common parameters for Provisioning and Cloning \u00b6 These parameters are mutable between a parent volume and creating a clone from a snapshot. Parameter String Description nameSuffix Text Suffix to append to Cloud Volumes. destroyOnRm Boolean Indicates the backing Cloud volume (including snapshots) should be destroyed when the PVC is deleted. limitIOPS Integer The IOPS limit of the volume. The IOPS limit should be in the range 300 to 50000. perfPolicy Text The name of the performance policy to assign to the volume. Default example performance policies include \"Other, Exchange, Oracle, SharePoint, SQL, Windows File Server\". protectionTemplate Text The name of the protection template to assign to the volume. Default examples of protection templates include \"daily:3, daily:7, daily:14, hourly:6, hourly:12, hourly:24, twicedaily:4, twicedaily:8, twicedaily:14, weekly:2, weekly:4, weekly:8, monthly:3, monthly:6, monthly:12 or none\". volumeType Text Cloud Volume type. Supported types are PF and GPF. Provisioning parameters \u00b6 These parameters are immutable for clones once a volume has been created. Parameter String Description fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. encryption Boolean Indicates that the volume should be encrypted. Cloning parameters \u00b6 Two modes of cloning are supported. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Cloud volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the Cloud Volumes volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. snapshotPrefix Text A prefix to add to the beginning of the snapshot name. replStore Text Replication store name. Should be used with the importVolAsClone parameter to clone a replica volume. Import parameters \u00b6 Importing volumes to Kubernetes requires that the source Cloud volume is not attached to any nodes. All previous Access Control Records will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin. Parameter String Description importVol Text The name of the Cloud volume to import. forceImport Boolean Forces the import of a volume that is provisioned by another K8s cluster but not attached to any nodes. 
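Put together, an import could be expressed as a StorageClass like the following (a minimal sketch; the volume name is hypothetical and the provisioner value is a placeholder, use the one from the HPE Cloud Volumes sample StorageClass referenced earlier):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: import-cloud-vol
provisioner: <provisioner from the HPE Cloud Volumes sample StorageClass>  # placeholder
parameters:
  importVol: "my-cloud-volume"   # hypothetical: name of the detached Cloud volume to import
  forceImport: "true"            # only required if the volume was provisioned by another Kubernetes cluster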
Diagnostics \u00b6 This section outlines a few troubleshooting steps for the HPE Volume Driver for Kubernetes Plugin. This product is supported by HPE, please consult with your support organization (Nimble, Cloud Volumes etc) prior attempting any configuration changes. Troubleshooting FlexVolume driver \u00b6 The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach operations as workloads request storage resources. The binary relies on communicating with a socket on the host where the volume plugin responsible for the MUAD operations perform control-plane or data-plane operations against the backend system hosting the actual volumes. Locations \u00b6 The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking. The name and the location of the binary varies based on Kubernetes distribution (the default 'exec' path) and what backend driver is being used. In a typical scenario, using Nimble, this is expected: Binary: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble Config file: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble.json Override defaults \u00b6 By default, it contains only the path to the socket file for the volume plugin: { \"dockerVolumePluginSocketPath\": \"/etc/hpe-storage/nimble.sock\" } Valid options for the FlexVolume driver can be inspected by executing the binary on the host with the config argument: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble config Error processing option 'logFilePath' - key:logFilePath not found Error processing option 'logDebug' - key:logDebug not found Error processing option 'supportsCapabilities' - key:supportsCapabilities not found Error processing option 'stripK8sFromOptions' - key:stripK8sFromOptions not found Error processing option 'createVolumes' - key:createVolumes not found Error processing option 'listOfStorageResourceOptions' - key:listOfStorageResourceOptions not found Error processing option 'factorForConversion' - key:factorForConversion not found Error processing option 'enable1.6' - key:enable1.6 not found Driver=nimble Version=v2.5.1-50fbff2aa14a693a9a18adafb834da33b9e7cc89 Current Config: dockerVolumePluginSocketPath = /etc/hpe-storage/nimble.sock stripK8sFromOptions = true logFilePath = /var/log/dory.log logDebug = false createVolumes = false enable1.6 = false factorForConversion = 1073741824 listOfStorageResourceOptions = [size sizeInGiB] supportsCapabilities = true An example tweak could be to enable debug logging and enable support for Kubernetes 1.6 (which we don't officially support). The config file would then end up like this: { \"dockerVolumePluginSocketPath\": \"/etc/hpe-storage/nimble.sock\", \"logDebug\": true, \"enable1.6\": true } Execute the binary again ( nimble config ) to ensure the parameters and config file gets parsed correctly. Since the config file is read on each FlexVolume operation, no restart of anything is needed. See Advanced for more parameters for the driver.json file. 
Connectivity \u00b6 To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble mount no/op '{\"name\":\"myvol1\"}' If the FlexVolume driver can successfully communicate with the volume plugin socket: {\"status\":\"Failure\",\"message\":\"configured to NOT create volumes\"} In the case of any other output, check if the backend volume plugin is alive with curl : curl --unix-socket /etc/hpe-storage/nimble.sock -d '{}' http://localhost/VolumeDriver.Capabilities It should output: {\"capabilities\":{\"scope\":\"global\"},\"Err\":\"\"} FlexVolume and dynamic provisioner driver logs \u00b6 Log files associated with the HPE Volume Driver for Kubernetes FlexVolume Plugin logs data to the standard output stream. If the logs need to be retained for long term, use a standard logging solution. Some of the logs on the host are persisted which follow standard logrotate policies. FlexVolume driver logs: kubectl logs -f daemonset.apps/hpe-flexvolume-driver -n kube-system The logs are persisted at /var/log/hpe-docker-plugin.log and /var/log/dory.log Dynamic Provisioner logs: kubectl logs -f deployment.apps/hpe-dynamic-provisioner -n kube-system The logs are persisted at /var/log/hpe-dynamic-provisioner.log Log Collector \u00b6 Log collector script hpe-logcollector.sh can be used to collect diagnostic logs using kubectl Download the script as follows: curl -O https://raw.githubusercontent.com/hpe-storage/flexvolume-driver/master/hpe-logcollector.sh chmod 555 hpe-logcollector.sh Usage: ./hpe-logcollector.sh -h Diagnostic Script to collect HPE Storage logs using kubectl Usage: hpe-logcollector.sh [-h|--help][--node-name NODE_NAME][-n|--namespace NAMESPACE][-a|--all] Where -h|--help Print the Usage text --node-name NODE_NAME where NODE_NAME is kubernetes Node Name needed to collect the hpe diagnostic logs of the Node -n|--namespace NAMESPACE where NAMESPACE is namespace of the pod deployment. default is kube-system -a|--all collect diagnostic logs of all the nodes.If nothing is specified logs would be collected from all the nodes Advanced Configuration \u00b6 This section describes some of the advanced configuration steps available to tweak behavior of the HPE Volume Driver for Kubernetes FlexVolume Plugin. Set defaults at the compute node level \u00b6 During normal operations, defaults are set in either the ConfigMap or in a StorageClass itself. The picking order is: StorageClass ConfigMap driver.json Please see Diagnostics to locate the driver for your particular environment. Add this object to the configuration file, nimble.json , for example: { \"defaultOptions\": [{\"option1\": \"value1\"}, {\"option2\": \"value2\"}] } Where option1 and option2 are valid backend volume plugin create options. Note It's highly recommended to control defaults with StorageClass API objects or the ConfigMap . Global options \u00b6 Each driver supports setting certain \"global\" options in the ConfigMap . Some options are common, some are driver specific. Common \u00b6 Parameter String Description volumeDir Text Root directory on the host to mount the volumes. This parameter needs correlation with the podsmountdir path in the volumeMounts stanzas of the deployment. 
logDebug Boolean Turn on debug logging, set to false by default.","title":"Index"},{"location":"flexvolume_driver/container_provider/index.html#overview","text":"The HPE Volume Driver for Kubernetes FlexVolume Plugin leverages HPE Nimble Storage or HPE Cloud Volumes to provide scalable and persistent storage for stateful applications. Important Using HPE Nimble Storage with Kubernetes 1.13 and newer, please use the HPE CSI Driver for Kubernetes . Source code and developer documentation is available in the hpe-storage/flexvolume-driver GitHub repo. Overview Platform requirements HPE Nimble Storage Platform Requirements HPE Cloud Volumes Platform Requirements Deploying to Kubernetes Step 1: Create a secret HPE Nimble Storage HPE Cloud Volumes Step 2. Create a ConfigMap HPE Nimble Storage HPE Cloud Volumes Step 3. Deploy the FlexVolume driver and dynamic provisioner HPE Nimble Storage HPE Cloud Volumes Using Sample StorageClass Test and verify volume provisioning Use case specific examples Data protection Clone and throttle for devs Clone a non-containerized volume Import (cutover) a volume Using overrides Creating clones of PVCs StorageClass parameters HPE Nimble Storage StorageClass parameters Common parameters for Provisioning and Cloning Provisioning parameters Cloning parameters Import parameters HPE Cloud Volumes StorageClass parameters Common parameters for Provisioning and Cloning Provisioning parameters Cloning parameters Import parameters Diagnostics Troubleshooting FlexVolume driver Locations Override defaults Connectivity FlexVolume and dynamic provisioner driver logs Log Collector Advanced Configuration Set defaults at the compute node level Global options Common","title":"Overview"},{"location":"flexvolume_driver/container_provider/index.html#platform_requirements","text":"The FlexVolume driver supports multiple backends that are based on a \"container provider\" architecture. Currently, Nimble and Cloud Volumes are supported.","title":"Platform requirements"},{"location":"flexvolume_driver/container_provider/index.html#hpe_nimble_storage_platform_requirements","text":"Driver HPE Nimble Storage Version Release Notes Blog v3.0.0 5.0.8.x and 5.1.3.x onwards v3.0.0 HPE Storage Tech Insiders v3.1.0 5.0.8.x and 5.1.3.x onwards v3.1.0 OpenShift Container Platform 3.9, 3.10 and 3.11. Kubernetes 1.10 and above. Redhat/CentOS 7.5+ Ubuntu 16.04/18.04 LTS Note: Synchronous replication (Peer Persistence) is not supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin.","title":"HPE Nimble Storage Platform Requirements"},{"location":"flexvolume_driver/container_provider/index.html#hpe_cloud_volumes_platform_requirements","text":"Driver Release Notes Blog v3.1.0 v3.1.0 Using HPE Cloud Volumes with Amazon EKS Amazon EKS 1.12/1.13 Microsoft Azure AKS 1.12/1.13 US regions only Important HPE Cloud Volumes was introduced in HPE CSI Driver for Kubernetes v1.5.0. Make sure to check if your cloud is supported by the CSI driver first.","title":"HPE Cloud Volumes Platform Requirements"},{"location":"flexvolume_driver/container_provider/index.html#deploying_to_kubernetes","text":"The recommended way to deploy and manage the HPE Volume Driver for Kubernetes FlexVolume Plugin is to use Helm. Please see the co-deployments repository for further information. 
Use the following steps for a manual installation.","title":"Deploying to Kubernetes"},{"location":"flexvolume_driver/container_provider/index.html#step_1_create_a_secret","text":"","title":"Step 1: Create a secret"},{"location":"flexvolume_driver/container_provider/index.html#hpe_nimble_storage","text":"Replace the password string ( YWRtaW4= ) below with a base64 encoded version of your password and replace the backend with your array IP address and save it as hpe-secret.yaml . apiVersion: v1 kind: Secret metadata: name: hpe-secret namespace: kube-system stringData: backend: 192.168.1.1 username: admin protocol: \"iscsi\" data: # echo -n \"admin\" | base64 password: YWRtaW4=","title":"HPE Nimble Storage"},{"location":"flexvolume_driver/container_provider/index.html#hpe_cloud_volumes","text":"Replace the username and password strings ( YWRtaW4= ) with a base64 encoded version of your HPE Cloud Volumes \"access_key\" and \"access_secret\". Also, replace the backend with HPE Cloud Volumes portal fully qualified domain name (FQDN) and save it as hpe-secret.yaml . apiVersion: v1 kind: Secret metadata: name: hpe-secret namespace: kube-system stringData: backend: cloudvolumes.hpe.com protocol: \"iscsi\" serviceName: cv-cp-svc servicePort: \"8080\" data: # echo -n \"\" | base64 username: YWRtaW4= # echo -n \"\" | base64 password: YWRtaW4= Create the secret: kubectl create -f hpe-secret.yaml secret \"hpe-secret\" created You should now see the HPE secret in the kube-system namespace. kubectl get secret/hpe-secret -n kube-system NAME TYPE DATA AGE hpe-secret Opaque 5 3s","title":"HPE Cloud Volumes"},{"location":"flexvolume_driver/container_provider/index.html#step_2_create_a_configmap","text":"The ConfigMap is used to set and tweak defaults for both the FlexVolume driver and Dynamic Provisioner.","title":"Step 2. Create a ConfigMap"},{"location":"flexvolume_driver/container_provider/index.html#hpe_nimble_storage_1","text":"Edit the below default parameters as required for FlexVolume driver and save it as hpe-config.yaml . kind: ConfigMap apiVersion: v1 metadata: name: hpe-config namespace: kube-system data: volume-driver.json: |- { \"global\": {}, \"defaults\": { \"limitIOPS\":\"-1\", \"limitMBPS\":\"-1\", \"perfPolicy\": \"Other\" }, \"overrides\":{} } Tip Please see Advanced for more volume-driver.json configuration options.","title":"HPE Nimble Storage"},{"location":"flexvolume_driver/container_provider/index.html#hpe_cloud_volumes_1","text":"Edit the below parameters as required with your public cloud info and save it as hpe-config.yaml . 
kind: ConfigMap apiVersion: v1 metadata: name: hpe-config namespace: kube-system data: volume-driver.json: |- { \"global\": { \"snapPrefix\": \"BaseFor\", \"initiators\": [\"eth0\"], \"automatedConnection\": true, \"existingCloudSubnet\": \"10.1.0.0/24\", \"region\": \"us-east-1\", \"privateCloud\": \"vpc-data\", \"cloudComputeProvider\": \"Amazon AWS\" }, \"defaults\": { \"limitIOPS\": 1000, \"fsOwner\": \"0:0\", \"fsMode\": \"600\", \"description\": \"Volume provisioned by the HPE Volume Driver for Kubernetes FlexVolume Plugin\", \"perfPolicy\": \"Other\", \"protectionTemplate\": \"twicedaily:4\", \"encryption\": true, \"volumeType\": \"PF\", \"destroyOnRm\": true }, \"overrides\": { } } Create the ConfigMap : kubectl create -f hpe-config.yaml configmap/hpe-config created","title":"HPE Cloud Volumes"},{"location":"flexvolume_driver/container_provider/index.html#step_3_deploy_the_flexvolume_driver_and_dynamic_provisioner","text":"Deploy the driver as a DaemonSet and the dynamic provisioner as a Deployment .","title":"Step 3. Deploy the FlexVolume driver and dynamic provisioner"},{"location":"flexvolume_driver/container_provider/index.html#hpe_nimble_storage_2","text":"Version 3.0.0: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.0.0.yaml Version 3.1.0: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-nimble-storage/hpe-flexvolume-driver-v3.1.0.yaml","title":"HPE Nimble Storage"},{"location":"flexvolume_driver/container_provider/index.html#hpe_cloud_volumes_2","text":"Container-Provider Service: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-cp-v3.1.0.yaml The FlexVolume driver have different declarations depending on the Kubernetes distribution. Amazon EKS: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-aws-flexvolume-driver-v3.1.0.yaml Microsoft Azure AKS: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-azure-flexvolume-driver-v3.1.0.yaml Generic: kubectl create -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/flexvolume-driver/hpe-cloud-volumes/hpecv-flexvolume-driver-v3.1.0.yaml Note The declarations for HPE Volume Driver for Kubernetes FlexVolume Plugin can be found in the co-deployments repository. Check to see all hpe-flexvolume-driver Pods (one per compute node) and the hpe-dynamic-provisioner Pod are running. kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE hpe-flexvolume-driver-2rdt4 1/1 Running 0 45s hpe-flexvolume-driver-md562 1/1 Running 0 44s hpe-flexvolume-driver-x4k96 1/1 Running 0 44s hpe-dynamic-provisioner-59f9d495d4-hxh29 1/1 Running 0 24s For HPE Cloud Volumes, check that hpe-cv-cp pod is running as well. kubectl get pods -n kube-system -l=app=cv-cp NAME READY STATUS RESTARTS AGE hpe-cv-cp-2rdt4 1/1 Running 0 45s","title":"HPE Cloud Volumes"},{"location":"flexvolume_driver/container_provider/index.html#using","text":"Get started using the FlexVolume driver by setting up StorageClass , PVC API objects. See Using for examples. These instructions are provided as an example on how to use the HPE Volume Driver for Kubernetes FlexVolume Plugin with a HPE Nimble Storage Array. 
The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) Tip Some of the examples supported by the HPE Volume Driver for Kubernetes FlexVolume Plugin are available for HPE Nimble Storage or HPE Cloud Volumes in the GitHub repo. To get started, create a StorageClass API object referencing the hpe-secret and defining additional (optional) StorageClass parameters:","title":"Using"},{"location":"flexvolume_driver/container_provider/index.html#sample_storageclass","text":"Sample storage classes can be found for HPE Nimble Storage and HPE Cloud Volumes . Hint See StorageClass parameters for HPE Nimble Storage and HPE Clound Volumes for a comprehensive overview.","title":"Sample StorageClass"},{"location":"flexvolume_driver/container_provider/index.html#test_and_verify_volume_provisioning","text":"Create a StorageClass with volume parameters as required. apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: sc-nimble provisioner: hpe.com/nimble parameters: description: \"Volume from HPE FlexVolume driver\" perfPolicy: \"Other Workloads\" limitIOPS: \"76800\" Create a PersistentVolumeClaim . This makes sure a volume is created and provisioned on your behalf: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-nimble spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: sc-nimble Check that a new PersistentVolume is created based on your claim: kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE sc-nimble-13336da3-7ca3-11e9-826c-00505693581f 10Gi RWO Delete Bound default/pvc-nimble sc-nimble 3s The above output means that the FlexVolume driver successfully provisioned a new volume and bound to the requesting PVC to a new PV . The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container: kind: Pod apiVersion: v1 metadata: name: pod-nimble spec: containers: - name: pod-nimble-con-1 image: nginx command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data - name: pod-nimble-cont-2 image: debian command: [\"bin/sh\"] args: [\"-c\", \"while true; do date >> /data/mydata.txt; sleep 1; done\"] volumeMounts: - name: export1 mountPath: /data volumes: - name: export1 persistentVolumeClaim: claimName: pvc-nimble Check if the pod is running successfully: kubectl get pod pod-nimble NAME READY STATUS RESTARTS AGE pod-nimble 2/2 Running 0 2m29s","title":"Test and verify volume provisioning"},{"location":"flexvolume_driver/container_provider/index.html#use_case_specific_examples","text":"This StorageClass examples help guide combinations of options when provisioning volumes.","title":"Use case specific examples"},{"location":"flexvolume_driver/container_provider/index.html#data_protection","text":"This StorageClass creates thinly provisioned volumes with deduplication turned on. It will also apply the Performance Policy \"SQL Server\" along with a Protection Template. The Protection Template needs to be defined on the array. 
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: oltp-prod provisioner: hpe.com/nimble parameters: thick: \"false\" dedupe: \"true\" perfPolicy: \"SQL Server\" protectionTemplate: \"Retain-48Hourly-30Daily-52Weekly\"","title":"Data protection"},{"location":"flexvolume_driver/container_provider/index.html#clone_and_throttle_for_devs","text":"This StorageClass will create clones of a \"production\" volume and throttle the performance of each clone to 1000 IOPS. When the PVC is deleted, it will be permanently deleted from the backend array. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: oltp-dev-clone-of-prod provisioner: hpe.com/nimble parameters: limitIOPS: \"1000\" cloneOf: \"oltp-prod-1adee106-110b-11e8-ac84-00505696c45f\" destroyOnRm: \"true\"","title":"Clone and throttle for devs"},{"location":"flexvolume_driver/container_provider/index.html#clone_a_non-containerized_volume","text":"This StorageClass will clone a standard backend volume (without container metadata on it) from a particular pool on the backend. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: import-clone-legacy-prod provisioner: hpe.com/nimble parameters: pool: \"flash\" importVolAsClone: \"production-db-vol\" destroyOnRm: \"true\"","title":"Clone a non-containerized volume"},{"location":"flexvolume_driver/container_provider/index.html#import_cutover_a_volume","text":"This StorageClass will import an existing Nimble volume to Kubernetes. The source volume needs to be offline for the import to succeed. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: import-clone-legacy-prod provisioner: hpe.com/nimble parameters: pool: \"flash\" importVol: \"production-db-vol\"","title":"Import (cutover) a volume"},{"location":"flexvolume_driver/container_provider/index.html#using_overrides","text":"The HPE Dynamic Provisioner for Kubernetes understands a set of annotation keys a user can set on a PVC . If the corresponding keys exist in the list of the allowOverrides key in the StorageClass , the end-user can tweak certain aspects of the provisioning workflow. This enables very advanced data services. StorageClass object: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: my-sc provisioner: hpe.com/nimble parameters: description: \"Volume provisioned by StorageClass my-sc\" dedupe: \"false\" destroyOnRm: \"true\" perfPolicy: \"Windows File Server\" folder: \"myfolder\" allowOverrides: snapshot,limitIOPS,perfPolicy PersistentVolumeClaim object: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc annotations: hpe.com/description: \"This is my custom description\" hpe.com/limitIOPS: \"8000\" hpe.com/perfPolicy: \"SQL Server\" spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: my-sc This will create a PV of 8000 IOPS with the Performance Policy of \"SQL Server\" and a custom volume description.","title":"Using overrides"},{"location":"flexvolume_driver/container_provider/index.html#creating_clones_of_pvcs","text":"Using a StorageClass to clone a PV is practical when there's a need to clone across namespaces (for example from prod to test or stage). If a user wants to clone any arbitrary volume, it becomes a bit tedious to create a StorageClass for each clone. The annotation hpe.com/cloneOfPVC allows a user to clone any PVC within a namespace.
apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc-clone annotations: hpe.com/cloneOfPVC: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: my-sc","title":"Creating clones of PVCs"},{"location":"flexvolume_driver/container_provider/index.html#storageclass_parameters","text":"This section highlights all the available StorageClass parameters that are supported.","title":"StorageClass parameters"},{"location":"flexvolume_driver/container_provider/index.html#hpe_nimble_storage_storageclass_parameters","text":"A StorageClass is used to provision or clone an HPE Nimble Storage-backed persistent volume. It can also be used to import an existing HPE Nimble Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. A sample StorageClass is provided. Note These are optional parameters.","title":"HPE Nimble Storage StorageClass parameters"},{"location":"flexvolume_driver/container_provider/index.html#common_parameters_for_provisioning_and_cloning","text":"These parameters are mutable between a parent volume and a clone created from a snapshot. Parameter String Description nameSuffix Text Suffix to append to Nimble volumes. Defaults to .docker destroyOnRm Boolean Indicates the backing Nimble volume (including snapshots) should be destroyed when the PVC is deleted. limitIOPS Integer The IOPS limit of the volume. The IOPS limit should be in the range 256 to 4294967294, or -1 for unlimited (default). limitMBPS Integer The MB/s throughput limit for the volume. description Text Text to be added to the volume's description on the Nimble array. perfPolicy Text The name of the performance policy to assign to the volume. Default example performance policies include \"Backup Repository\", \"Exchange 2003 data store\", \"Exchange 2007 data store\", \"Exchange 2010 data store\", \"Exchange log\", \"Oracle OLTP\", \"Other Workloads\", \"SharePoint\", \"SQL Server\", \"SQL Server 2012\", \"SQL Server Logs\". protectionTemplate Text The name of the protection template to assign to the volume. Default examples of protection templates include \"Retain-30Daily\", \"Retain-48Hourly-30Daily-52Weekly\", and \"Retain-90Daily\". folder Text The name of the Nimble folder in which to place the volume. thick Boolean Indicates that the volume should be thick provisioned. dedupeEnabled Boolean Indicates that the volume should enable deduplication. syncOnUnmount Boolean Indicates that a snapshot of the volume should be synced to the replication partner each time it is detached from a node. Note Performance Policies, Folders and Protection Templates are Nimble-specific constructs that can be created on the Nimble array itself to address particular requirements or workloads. Please consult with the storage admin or read the admin guide found on HPE InfoSight .","title":"Common parameters for Provisioning and Cloning"},{"location":"flexvolume_driver/container_provider/index.html#provisioning_parameters","text":"These parameters are immutable for clones once a volume has been created. Parameter String Description fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. encryption Boolean Indicates that the volume should be encrypted.
pool Text The name of the pool in which to place the volume.","title":"Provisioning parameters"},{"location":"flexvolume_driver/container_provider/index.html#cloning_parameters","text":"Cloning supports two modes. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Nimble volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the Nimble volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. snapshotPrefix Text A prefix to add to the beginning of the snapshot name.","title":"Cloning parameters"},{"location":"flexvolume_driver/container_provider/index.html#import_parameters","text":"Importing volumes to Kubernetes requires the source Nimble volume to be offline. All previous Access Control Records and Initiator Groups will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin. Parameter String Description importVol Text The name of the Nimble volume to import. snapshot Text The name of the Nimble snapshot to restore the imported volume to after takeover. If not specified, the volume will not be restored. restore Boolean Restores the volume to the last snapshot taken on the volume. takeover Boolean Indicates the current group will take over ownership of the Nimble volume and volume collection. This should be performed against a downstream replica. reverseRepl Boolean Reverses the replication direction so that writes to the Nimble volume are replicated back to the group where it was replicated from. forceImport Boolean Forces the import of a volume that is not owned by the group and is not part of a volume collection. If the volume is part of a volume collection, use takeover instead. Note HPE Nimble Docker Volume workflows work with a 1-1 mapping between volume and volume collection.","title":"Import parameters"},{"location":"flexvolume_driver/container_provider/index.html#hpe_cloud_volumes_storageclass_parameters","text":"A StorageClass is used to provision or clone an HPE Cloud Volumes-backed persistent volume. It can also be used to import an existing HPE Cloud Volumes volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. A sample StorageClass is provided. Note These are optional parameters.","title":"HPE Cloud Volumes StorageClass parameters"},{"location":"flexvolume_driver/container_provider/index.html#common_parameters_for_provisioning_and_cloning_1","text":"These parameters are mutable between a parent volume and a clone created from a snapshot. Parameter String Description nameSuffix Text Suffix to append to Cloud Volumes. destroyOnRm Boolean Indicates the backing Cloud volume (including snapshots) should be destroyed when the PVC is deleted. limitIOPS Integer The IOPS limit of the volume. The IOPS limit should be in the range 300 to 50000. perfPolicy Text The name of the performance policy to assign to the volume.
Default example performance policies include \"Other, Exchange, Oracle, SharePoint, SQL, Windows File Server\". protectionTemplate Text The name of the protection template to assign to the volume. Default examples of protection templates include \"daily:3, daily:7, daily:14, hourly:6, hourly:12, hourly:24, twicedaily:4, twicedaily:8, twicedaily:14, weekly:2, weekly:4, weekly:8, monthly:3, monthly:6, monthly:12 or none\". volumeType Text Cloud Volume type. Supported types are PF and GPF.","title":"Common parameters for Provisioning and Cloning"},{"location":"flexvolume_driver/container_provider/index.html#provisioning_parameters_1","text":"These parameters are immutable for clones once a volume has been created. Parameter String Description fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. encryption Boolean Indicates that the volume should be encrypted.","title":"Provisioning parameters"},{"location":"flexvolume_driver/container_provider/index.html#cloning_parameters_1","text":"Cloning supports two modes. Either use cloneOf and reference a PVC in the current namespace or use importVolAsClone and reference a Cloud volume name to clone and import to Kubernetes. Parameter String Description cloneOf Text The name of the PV to be cloned. cloneOf and importVolAsClone are mutually exclusive. importVolAsClone Text The name of the Cloud Volume to clone and import. importVolAsClone and cloneOf are mutually exclusive. snapshot Text The name of the snapshot to base the clone on. This is optional. If not specified, a new snapshot is created. createSnapshot Boolean Indicates that a new snapshot of the volume should be taken matching the name provided in the snapshot parameter. If the snapshot parameter is not specified, a default name will be created. snapshotPrefix Text A prefix to add to the beginning of the snapshot name. replStore Text Replication store name. Should be used with the importVolAsClone parameter to clone a replica volume.","title":"Cloning parameters"},{"location":"flexvolume_driver/container_provider/index.html#import_parameters_1","text":"Importing volumes to Kubernetes requires that the source Cloud volume is not attached to any nodes. All previous Access Control Records will be stripped from the volume when put under control of the HPE Volume Driver for Kubernetes FlexVolume Plugin. Parameter String Description importVol Text The name of the Cloud volume to import. forceImport Boolean Forces the import of a volume that is provisioned by another K8s cluster but not attached to any nodes.","title":"Import parameters"},{"location":"flexvolume_driver/container_provider/index.html#diagnostics","text":"This section outlines a few troubleshooting steps for the HPE Volume Driver for Kubernetes Plugin. This product is supported by HPE. Please consult with your support organization (Nimble, Cloud Volumes, etc.) prior to attempting any configuration changes.","title":"Diagnostics"},{"location":"flexvolume_driver/container_provider/index.html#troubleshooting_flexvolume_driver","text":"The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach operations as workloads request storage resources.
The binary relies on communicating with a socket on the host where the volume plugin responsible for the MUAD operations performs control-plane or data-plane operations against the backend system hosting the actual volumes.","title":"Troubleshooting FlexVolume driver"},{"location":"flexvolume_driver/container_provider/index.html#locations","text":"The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking. The name and the location of the binary vary based on the Kubernetes distribution (the default 'exec' path) and what backend driver is being used. In a typical scenario, using Nimble, this is expected: Binary: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble Config file: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble.json","title":"Locations"},{"location":"flexvolume_driver/container_provider/index.html#override_defaults","text":"By default, it contains only the path to the socket file for the volume plugin: { \"dockerVolumePluginSocketPath\": \"/etc/hpe-storage/nimble.sock\" } Valid options for the FlexVolume driver can be inspected by executing the binary on the host with the config argument: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble config Error processing option 'logFilePath' - key:logFilePath not found Error processing option 'logDebug' - key:logDebug not found Error processing option 'supportsCapabilities' - key:supportsCapabilities not found Error processing option 'stripK8sFromOptions' - key:stripK8sFromOptions not found Error processing option 'createVolumes' - key:createVolumes not found Error processing option 'listOfStorageResourceOptions' - key:listOfStorageResourceOptions not found Error processing option 'factorForConversion' - key:factorForConversion not found Error processing option 'enable1.6' - key:enable1.6 not found Driver=nimble Version=v2.5.1-50fbff2aa14a693a9a18adafb834da33b9e7cc89 Current Config: dockerVolumePluginSocketPath = /etc/hpe-storage/nimble.sock stripK8sFromOptions = true logFilePath = /var/log/dory.log logDebug = false createVolumes = false enable1.6 = false factorForConversion = 1073741824 listOfStorageResourceOptions = [size sizeInGiB] supportsCapabilities = true An example tweak could be to enable debug logging and enable support for Kubernetes 1.6 (which we don't officially support). The config file would then end up like this: { \"dockerVolumePluginSocketPath\": \"/etc/hpe-storage/nimble.sock\", \"logDebug\": true, \"enable1.6\": true } Execute the binary again ( nimble config ) to ensure the parameters and config file get parsed correctly. Since the config file is read on each FlexVolume operation, no restart of anything is needed.
See Advanced for more parameters for the driver.json file.","title":"Override defaults"},{"location":"flexvolume_driver/container_provider/index.html#connectivity","text":"To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble mount no/op '{\"name\":\"myvol1\"}' If the FlexVolume driver can successfully communicate with the volume plugin socket: {\"status\":\"Failure\",\"message\":\"configured to NOT create volumes\"} In the case of any other output, check if the backend volume plugin is alive with curl : curl --unix-socket /etc/hpe-storage/nimble.sock -d '{}' http://localhost/VolumeDriver.Capabilities It should output: {\"capabilities\":{\"scope\":\"global\"},\"Err\":\"\"}","title":"Connectivity"},{"location":"flexvolume_driver/container_provider/index.html#flexvolume_and_dynamic_provisioner_driver_logs","text":"The HPE Volume Driver for Kubernetes FlexVolume Plugin logs data to the standard output stream. If the logs need to be retained long term, use a standard logging solution. Some of the logs on the host are persisted and follow standard logrotate policies. FlexVolume driver logs: kubectl logs -f daemonset.apps/hpe-flexvolume-driver -n kube-system The logs are persisted at /var/log/hpe-docker-plugin.log and /var/log/dory.log Dynamic Provisioner logs: kubectl logs -f deployment.apps/hpe-dynamic-provisioner -n kube-system The logs are persisted at /var/log/hpe-dynamic-provisioner.log","title":"FlexVolume and dynamic provisioner driver logs"},{"location":"flexvolume_driver/container_provider/index.html#log_collector","text":"The log collector script hpe-logcollector.sh can be used to collect diagnostic logs using kubectl . Download the script as follows: curl -O https://raw.githubusercontent.com/hpe-storage/flexvolume-driver/master/hpe-logcollector.sh chmod 555 hpe-logcollector.sh Usage: ./hpe-logcollector.sh -h Diagnostic Script to collect HPE Storage logs using kubectl Usage: hpe-logcollector.sh [-h|--help][--node-name NODE_NAME][-n|--namespace NAMESPACE][-a|--all] Where -h|--help Print the Usage text --node-name NODE_NAME where NODE_NAME is kubernetes Node Name needed to collect the hpe diagnostic logs of the Node -n|--namespace NAMESPACE where NAMESPACE is namespace of the pod deployment. default is kube-system -a|--all collect diagnostic logs of all the nodes.If nothing is specified logs would be collected from all the nodes","title":"Log Collector"},{"location":"flexvolume_driver/container_provider/index.html#advanced_configuration","text":"This section describes some of the advanced configuration steps available to tweak the behavior of the HPE Volume Driver for Kubernetes FlexVolume Plugin.","title":"Advanced Configuration"},{"location":"flexvolume_driver/container_provider/index.html#set_defaults_at_the_compute_node_level","text":"During normal operations, defaults are set in either the ConfigMap or in a StorageClass itself. The picking order is: StorageClass ConfigMap driver.json Please see Diagnostics to locate the driver for your particular environment. Add this object to the configuration file, nimble.json , for example: { \"defaultOptions\": [{\"option1\": \"value1\"}, {\"option2\": \"value2\"}] } Where option1 and option2 are valid backend volume plugin create options.
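For example, a hypothetical default (using the perfPolicy and limitIOPS create options documented under the StorageClass parameters above; the values are illustrative only, not a recommendation) could look like this: { \"defaultOptions\": [{\"perfPolicy\": \"Other Workloads\"}, {\"limitIOPS\": \"1000\"}] } Any volume created without an explicit StorageClass or ConfigMap override would then pick up these values.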
Note It's highly recommended to control defaults with StorageClass API objects or the ConfigMap .","title":"Set defaults at the compute node level"},{"location":"flexvolume_driver/container_provider/index.html#global_options","text":"Each driver supports setting certain \"global\" options in the ConfigMap . Some options are common, some are driver specific.","title":"Global options"},{"location":"flexvolume_driver/container_provider/index.html#common","text":"Parameter String Description volumeDir Text Root directory on the host to mount the volumes. This parameter needs correlation with the podsmountdir path in the volumeMounts stanzas of the deployment. logDebug Boolean Turn on debug logging, set to false by default.","title":"Common"},{"location":"flexvolume_driver/dory/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Introduction \u00b6 The Open Source project Dory was designed in 2017 to transition Docker Volume plugins to be used with Kubernetes. Dory is the shim between the FlexVolume exec calls to the Docker Volume API. The main repository is not currently maintained and the most up-to-date version lives in the HPE Volume Driver for Kubernetes FlexVolume Plugin repository where Dory is packaged as a privileged DaemonSet to support HPE storage products. There may be other forks associated with other Docker Volume plugins out there. Why is the driver called Dory? Dory speaks whale ! Dynamic Provisioning \u00b6 As the FlexVolume Plugin doesn't provide any dynamic provisioning, HPE designed a provisioner to work with Docker Volume plugins as well, Doryd, to have a complete solution for Docker Volume plugins. It's run as a Deployment and monitor PVC requests. FlexVolume Plugin in Kubernetes \u00b6 According to the Kubernetes SIG storage community , the FlexVolume Plugin interface will continue to be supported. Move to CSI \u00b6 HPE encourages using the available CSI drivers for Kubernetes 1.13 and newer where available.","title":"Index"},{"location":"flexvolume_driver/dory/index.html#introduction","text":"The Open Source project Dory was designed in 2017 to transition Docker Volume plugins to be used with Kubernetes. Dory is the shim between the FlexVolume exec calls to the Docker Volume API. The main repository is not currently maintained and the most up-to-date version lives in the HPE Volume Driver for Kubernetes FlexVolume Plugin repository where Dory is packaged as a privileged DaemonSet to support HPE storage products. There may be other forks associated with other Docker Volume plugins out there. Why is the driver called Dory? Dory speaks whale !","title":"Introduction"},{"location":"flexvolume_driver/dory/index.html#dynamic_provisioning","text":"As the FlexVolume Plugin doesn't provide any dynamic provisioning, HPE designed a provisioner to work with Docker Volume plugins as well, Doryd, to have a complete solution for Docker Volume plugins. 
It's run as a Deployment and monitor PVC requests.","title":"Dynamic Provisioning"},{"location":"flexvolume_driver/dory/index.html#flexvolume_plugin_in_kubernetes","text":"According to the Kubernetes SIG storage community , the FlexVolume Plugin interface will continue to be supported.","title":"FlexVolume Plugin in Kubernetes"},{"location":"flexvolume_driver/dory/index.html#move_to_csi","text":"HPE encourages using the available CSI drivers for Kubernetes 1.13 and newer where available.","title":"Move to CSI"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html","text":"Expired content The documentation described on this page may be obsolete and contain references to unsupported and deprecated software. Please reach out to your HPE representative if you think you need any of the components referenced within. Overview \u00b6 The HPE 3PAR and Primera Volume Plug-in for Docker leverages Ansible to deploy the 3PAR/Primera driver for Kubernetes in order to provide scalable and persistent storage for stateful applications. Important Using HPE 3PAR/Primera Storage with Kubernetes 1.15 and newer, please use the HPE CSI Driver for Kubernetes . Source code is available in the hpe-storage/python-hpedockerplugin GitHub repo. Overview Platform requirements HPE 3PAR/Primera Storage Platform Requirements Deploying to Kubernetes Step 1: Install Ansible Ansible: Connecting to remote nodes Ansible: Check your SSH connections Step 2: Clone the Github repository Step 3: Modify the Ansible hosts file Step 4: Create the properties file File Persona Example Configuration Multiple Backend Example Configuration Step 5: Run the Ansible playbook Step 6: Verify the installation Using Sample StorageClass Test and verify volume provisioning Use case specific examples Snapshot a volume Clone a volume Replicate a containerized volume Import (cutover) a volume Using overrides Upgrade Uninstall StorageClass parameters HPE 3PAR/Primera Storage StorageClass parameters Common parameters for Provisioning and Cloning Cloning/Snapshot parameters Import parameters Replication Support Diagnostics Troubleshooting FlexVolume driver Locations Connectivity ETCD HPE 3PAR/Primera FlexVolume and Dynamic Provisioner driver (doryd) logs Refer to the SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker. Platform requirements \u00b6 The HPE 3PAR/Primera FlexVolume driver supports multiple backends that are based on a \"container provider\" architecture. HPE 3PAR/Primera Storage Platform Requirements \u00b6 Ensure that you have reviewed the System Requirements . Driver HPE 3PAR/Primera OS Version Release Notes v3.3.1 3PAR OS: 3.3.1 MU5+ Primera OS: 4.0+ v3.3.1 OpenShift Container Platform 3.9, 3.10 and 3.11. Kubernetes 1.10 and above. Redhat/CentOS 7.5+ Note: Refer to SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker. Deploying to Kubernetes \u00b6 The recommended way to deploy and manage the HPE 3PAR and Primera Volume Plug-in for Kubernetes is to use Ansible. Use the following steps to configure Ansible to perform the installation. Step 1: Install Ansible \u00b6 Ensure that Ansible (v2.5 to v2.8) is installed. For more information, see Ansible Installation Guide . NOTE: Ansible only needs to be installed on the machine that will be performing the deployment. Ansible does not need to be installed on your Kubernetes cluster. 
$ pip install ansible $ ansible --version ansible 2.7.12 Ansible: Connecting to remote nodes \u00b6 Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does. Ansible: Check your SSH connections \u00b6 Confirm that you can connect using SSH to all the nodes in your Kubernetes cluster using the same username. If necessary, add your public SSH key to the authorized_keys file on those systems. Step 2: Clone the Github repository \u00b6 $ cd ~ $ git clone https://github.com/hpe-storage/python-hpedockerplugin Step 3: Modify the Ansible hosts file \u00b6 Modify the hosts file to define the Kubernetes/OpenShift Master and Worker nodes. Also define where the HPE etcd cluster will be deployed, this can be done within the cluster or on external servers. Shell $ vi python-hpedockerplugin/ansible_3par_docker_plugin/hosts Yaml [masters] 192.168.1.51 [workers] 192.168.1.52 192.168.1.53 [etcd] 192.168.1.51 192.168.1.52 192.168.1.53 Step 4: Create the properties file \u00b6 Create the properties/plugin_configuration_properties.yml based on your HPE 3PAR/Primera Storage array configuration. $ vi python-hpedockerplugin/ansible_3par_docker_plugin/properties/plugin_configuration_properties.yml NOTE: Some of the properties are mandatory and must be specified in the properties file while others are optional. INVENTORY: DEFAULT: #Mandatory Parameters-------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - iSCSI hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver hpe3par_ip: <3par_array_IP> hpe3par_username: <3par_user> hpe3par_password: <3par_password> #Specify the 3PAR port - 8080 default hpe3par_port: 8080 hpe3par_cpg: # Plugin version - Required only in DEFAULT backend volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 # Dory installer version - Required for Openshift/Kubernetes setup # Supported versions are dory_installer_v31, dory_installer_v32 dory_installer_version: dory_installer_v32 #Optional Parameters-------------------------------------------------------------------------------- logging: DEBUG hpe3par_snapcpg: FC_r6 #hpe3par_iscsi_chap_enabled: True use_multipath: True #enforce_multipath: False #vlan_tag: True Available Properties Parameters Property Mandatory Default Value Description hpedockerplugin_driver Yes No default value ISCSI/FC driver (hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver/hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver) hpe3par_ip Yes No default value IP address of 3PAR array hpe3par_username Yes No default value 3PAR username hpe3par_password Yes No default value 3PAR password hpe3par_port Yes 8080 3PAR HTTP_PORT port hpe3par_cpg Yes No default value Primary user CPG volume_plugin Yes No default value Name of the docker volume image (only required with DEFAULT backend) encryptor_key No No default value Encryption key string for 3PAR password logging No INFO Log level hpe3par_debug No No default value 3PAR log level suppress_requests_ssl_warning No True Suppress request SSL warnings hpe3par_snapcpg No hpe3par_cpg Snapshot CPG hpe3par_iscsi_chap_enabled No False ISCSI chap toggle hpe3par_iscsi_ips No No default value Comma separated iscsi port IPs (only required if driver is ISCSI based) use_multipath No False Mutltipath toggle enforce_multipath No False Forcefully enforce multipath 
ssh_hosts_key_file No /root/.ssh/known_hosts Path to hosts key file quorum_witness_ip No No default value Quorum witness IP mount_prefix No No default value Alternate mount path prefix hpe3par_iscsi_ips No No default value Comma separated iscsi IPs. If not provided, all iscsi IPs will be read from the array and populated in hpe.conf vlan_tag No False Populates the iscsi_ips which are vlan tagged, only applicable if hpe3par_iscsi_ips is not specified replication_device No No default value Replication backend properties dory_installer_version No dory_installer_v32 Required for Openshift/Kubernetes setup. Dory installer version, supported versions are dory_installer_v31, dory_installer_v32 hpe3par_server_ip_pool Yes No default value This parameter is specific to fileshare. It can be specified as a mix of range of IPs and individual IPs delimited by comma. Each range or individual IP must be followed by the corresponding subnet mask delimited by semi-colon E.g.: IP-Range:Subnet-Mask,Individual-IP:SubnetMask hpe3par_default_fpg_size No No default value This parameter is specific to fileshare. Default fpg size, It must be in the range 1TiB to 64TiB. If not specified here, it defaults to 16TiB Hint Refer to Replication Support for details on enabling Replication support. File Persona Example Configuration \u00b6 #Mandatory Parameters for Filepersona--------------------------------------------------------------- DEFAULT_FILE: # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - File driver hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_file.HPE3PARFileDriver hpe3par_ip: 192.168.2.50 hpe3par_username: demo_user hpe3par_password: demo_pass hpe3par_cpg: demo_cpg hpe3par_port: 8080 hpe3par_server_ip_pool: 192.168.98.3-192.168.98.10:255.255.192.0 #Optional Parameters for Filepersona---------------------------------------------------------------- hpe3par_default_fpg_size: 16 Multiple Backend Example Configuration \u00b6 INVENTORY: DEFAULT: #Mandatory Parameters------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - iSCSI hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver hpe3par_ip: 192.168.1.50 hpe3par_username: 3paradm hpe3par_password: 3pardata hpe3par_port: 8080 hpe3par_cpg: FC_r6 # Plugin version - Required only in DEFAULT backend volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 # Dory installer version - Required for Openshift/Kubernetes setup # Supported versions are dory_installer_v31, dory_installer_v32 dory_installer_version: dory_installer_v32 #Optional Parameters-------------------------------------------------------------------------------- #ssh_hosts_key_file: '/root/.ssh/known_hosts' logging: DEBUG #hpe3par_debug: True #suppress_requests_ssl_warning: True #hpe3par_snapcpg: FC_r6 #hpe3par_iscsi_chap_enabled: True #use_multipath: False #enforce_multipath: False #vlan_tag: True #Additional Backend (Optional)---------------------------------------------------------------------- 3PAR1: #Mandatory Parameters------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - Fibre Channel hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver hpe3par_ip: 192.168.2.50 hpe3par_username: 3paradm hpe3par_password: 3pardata hpe3par_port: 8080 
hpe3par_cpg: FC_r6 #Optional Parameters-------------------------------------------------------------------------------- #ssh_hosts_key_file: '/root/.ssh/known_hosts' logging: DEBUG #hpe3par_debug: True #suppress_requests_ssl_warning: True hpe3par_snapcpg: FC_r6 #use_multipath: False #enforce_multipath: False Step 5: Run the Ansible playbook \u00b6 $ cd python-hpedockerplugin/ansible_3par_docker_plugin/ $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml Step 6: Verify the installation \u00b6 Once playbook has completed successfully, the PLAY RECAP should look like below Installer should not show any failures and PLAY RECAP should look like below PLAY RECAP *********************************************************************** : ok=85 changed=33 unreachable=0 failed=0 : ok=76 changed=29 unreachable=0 failed=0 : ok=76 changed=29 unreachable=0 failed=0 : ok=70 changed=27 unreachable=0 failed=0 : ok=70 changed=27 unreachable=0 failed=0 localhost : ok=9 changed=3 unreachable=0 failed=0 Verify plugin installation on all nodes. $ docker ps | grep plugin; ssh \"docker ps | grep plugin\";ssh \"docker ps | grep plugin\";ssh \"docker ps | grep plugin\";ssh \"docker ps | grep plugin\" 51b9d4b1d591 hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container a43f6d8f5080 hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container a88af9f46a0d hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container 5b20f16ab3af hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container b0813a22cbd8 hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container Verify the HPE FlexVolume driver Pod is running. kubectl get pods -n kube-system | grep doryd NAME READY STATUS RESTARTS AGE kube-storage-controller-doryd-7dd487b446-xr6q2 1/1 Running 0 45s Using \u00b6 Get started using the FlexVolume driver by setting up StorageClass , PVC API objects. See Using for examples. These instructions are provided as an example on how to use the HPE 3PAR/Primera Volume Plug-in with a HPE 3PAR/Primera Storage Array. The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) Tip Some of the examples supported by the HPE 3PAR/Primera FlexVolume driver are available for HPE 3PAR/Primera Storage in the GitHub repo. To get started, create a StorageClass API object referencing the hpe-secret and defining additional (optional) StorageClass parameters: Sample StorageClass \u00b6 Sample storage classes can be found for HPE 3PAR/Primera Storage . Test and verify volume provisioning \u00b6 Create a StorageClass with volume parameters as required. Change the CPG per your requirements. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold provisioner: hpe.com/hpe parameters: provisioning: 'full' cpg: 'SSD_r6' fsOwner: '1001:1001' Create a PersistentVolumeClaim . 
This makes sure a volume is created and provisioned on your behalf: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: sc-gold-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 25Gi storageClassName: sc-gold Check that a new PersistentVolume is created based on your claim: $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE sc-gold-pvc-13336da3-7ca3-11e9-826c-00505692581f 25Gi RWO Delete Bound default/pvc-gold sc-gold 3s The above output means that the FlexVolume driver successfully provisioned a new volume and bound to the requesting PVC to a new PV . The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container: kind: Pod apiVersion: v1 metadata: name: pod-nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: export mountPath: \"/usr/share/nginx/html\" volumes: - name: export persistentVolumeClaim: claimName: sc-gold-pvc Check if the pod is running successfully: $ kubectl get pod pod-nginx NAME READY STATUS RESTARTS AGE pod-nginx 1/1 Running 0 2m29s Use case specific examples \u00b6 This StorageClass examples help guide combinations of options when provisioning volumes. Snapshot a volume \u00b6 This StorageClass will create a snapshot of a \"production\" volume. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold-snap-mongo provisioner: hpe.com/hpe parameters: virtualCopyOf: \"sc-mongo-10dc1195-779b-11e9-b787-0050569bb07c\" Clone a volume \u00b6 This StorageClass will create clones of a \"production\" volume. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold-clone provisioner: hpe.com/hpe parameters: cloneOf: \"sc-gold-2a82c9e5-6213-11e9-8d53-0050569bb07c\" Replicate a containerized volume \u00b6 This StorageClass will add a standard backend volume to a 3PAR Replication Group. If the replicationGroup specified does not exist, the plugin will create one. See Replication Support for more details on configuring replication. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-mongodb-replicated provisioner: hpe.com/hpe parameters: provisioning: 'full' replicationGroup: 'mongodb-app1' Import (cutover) a volume \u00b6 This StorageClass will import an existing 3PAR/Primera volume to Kubernetes. The source volume needs to be offline for the import to succeed. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: import-clone-legacy-prod provisioner: hpe.com/hpe parameters: importVol: \"production-db-vol\" Using overrides \u00b6 The HPE Dynamic Provisioner for Kubernetes (doryd) understands a set of annotation keys a user can set on a PVC . If the corresponding keys exists in the list of the allowOverrides key in the StorageClass , the end-user can tweak certain aspects of the provisioning workflow. This opens up for very advanced data services. 
StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold provisioner: hpe.com/hpe parameters: provisioning: 'full' cpg: 'SSD_r6' fsOwner: '1001:1001' allowOverrides: provisioning,compression,cpg,fsOwner PersistentVolumeClaim object: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc annotations: hpe.com/provisioning: \"thin\" hpe.com/cpg: \"FC_r6\" spec: accessModes: - ReadWriteOnce resources: requests: storage: 25Gi storageClassName: sc-gold This will create a PV thinly provisioned using the FC_r6 CPG. Upgrade \u00b6 In order to upgrade the driver, modify the ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml used for the initial deployment and change hpestorage/legacyvolumeplugin to the latest image from Docker Hub. For example: volume_plugin: hpestorage/legacyvolumeplugin:3.3 Change to: volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 Re-run the installer. $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml Uninstall \u00b6 Run the following to uninstall the FlexVolume driver from the cluster. $ cd ~ $ cd python-hpedockerplugin/ansible_3par_docker_plugin $ ansible-playbook -i hosts uninstall/uninstall_hpe_3par_volume_driver.yml StorageClass parameters \u00b6 This section highlights all the available StorageClass parameters that are supported. HPE 3PAR/Primera Storage StorageClass parameters \u00b6 A StorageClass is used to provision or clone an HPE 3PAR/Primera Storage-backed persistent volume. It can also be used to import an existing HPE 3PAR/Primera Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. A sample StorageClass is provided. Note These are optional parameters. Common parameters for Provisioning and Cloning \u00b6 These parameters are mutable between a parent volume and a clone created from a snapshot. Parameter Type Options Example size Integer - size: \"10\" provisioning thin, full, dedupe provisioning: \"thin\" flash-cache Text true, false flash-cache: \"true\" compression boolean true, false compression: \"true\" MountConflictDelay Integer - MountConflictDelay: \"30\" qos-name Text vvset name qos-name: \" \" replicationGroup Text 3PAR RCG name replicationGroup: \"Test-RCG\" fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem. Cloning/Snapshot parameters \u00b6 Either use cloneOf and reference a PVC in the current namespace or use virtualCopyOf and reference a 3PAR/Primera volume name to snapshot/clone and import into Kubernetes. Parameter Type Options Example cloneOf Text volume name cloneOf: \"\" virtualCopyOf Text volume name virtualCopyOf: \"\" expirationHours Integer option of virtualCopyOf expirationHours: \"10\" retentionHours Integer option of virtualCopyOf retentionHours: \"10\" Import parameters \u00b6 Importing volumes to Kubernetes requires the source 3PAR/Primera volume to be offline. Parameter Type Description Example importVol Text volume name importVol: \"\" Replication Support \u00b6 The HPE 3PAR/Primera FlexVolume driver supports array-based synchronous and asynchronous replication. In order to enable replication within the FlexVolume driver, the arrays need to be properly zoned, visible to the Kubernetes cluster, and replication configured. For Peer Persistence, a quorum witness will need to be configured.
Once the replication is enabled at the array level, the FlexVolume driver will need to be configured. Important Replication support can be enabled during initial deployment through the plugin configuration file. In order to enable replication support post deployment, modify the plugin_configuration_properties.yml used for deployment, add the replication parameter section below, and re-run the Ansible installer. Edit the plugin_configuration_properties.yml file and edit the Optional Replication Section. INVENTORY: DEFAULT: #Mandatory Parameters------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - iSCSI hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver hpe3par_ip: hpe3par_username: hpe3par_password: hpe3par_port: 8080 hpe3par_cpg: FC_r6 # Plugin version - Required only in DEFAULT backend volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 # Dory installer version - Required for Openshift/Kubernetes setup dory_installer_version: dory_installer_v32 #Optional Parameters-------------------------------------------------------------------------------- logging: DEBUG hpe3par_snapcpg: FC_r6 use_multipath: False enforce_multipath: False #Optional Replication Parameters-------------------------------------------------------------------- replication_device: backend_id: remote_3PAR #Quorum Witness required for Peer Persistence only #quorum_witness_ip: replication_mode: synchronous cpg_map: \"local_CPG:remote_CPG\" snap_cpg_map: \"local_copy_CPG:remote_copy_CPG\" hpe3par_ip: hpe3par_username: hpe3par_password: hpe3par_port: 8080 #vlan_tag: False Once the properties file is configured, you can proceed with the standard installation steps . Diagnostics \u00b6 This section outlines a few troubleshooting steps for the HPE 3PAR/Primera FlexVolume driver. This product is supported by HPE, please consult with your support organization prior attempting any configuration changes. Troubleshooting FlexVolume driver \u00b6 The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach operations as workloads request storage resources. The binary relies on communicating with a socket on the host where the volume plugin responsible for the MUAD operations perform control-plane or data-plane operations against the backend system hosting the actual volumes. Locations \u00b6 The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking. The name and the location of the binary varies based on Kubernetes distribution (the default 'exec' path) and what backend driver is being used. 
In a typical scenario, using 3PAR/Primera, this is expected: Binary: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe Config file: /etc/hpedockerplugin/hpe.conf Connectivity \u00b6 To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe mount no/op '{\"name\":\"myvol1\"}' If the FlexVolume driver can successfully communicate with the volume plugin socket: {\"status\":\"Failure\",\"message\":\"configured to NOT create volumes\"} In the case of any other output, check if the backend volume plugin is alive: $ docker volume create -d hpe -o help=backends It should output: ================================= NAME STATUS ================================= DEFAULT OK ETCD \u00b6 To verify the etcd members on nodes. $ /usr/bin/etcdctl --endpoints http://:23790 member list It should output: b70ca254f54dd23: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=true 236bf7d5cc7a32d4: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false 445e80419ae8729b: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false e340a5833e93861e: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false f5b5599d719d376e: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false HPE 3PAR/Primera FlexVolume and Dynamic Provisioner driver (doryd) logs \u00b6 Log files associated with the HPE 3PAR/Primera FlexVolume driver logs data to the standard output stream. If the logs need to be retained for long term, use a standard logging solution. Some of the logs on the host are persisted which follow standard logrotate policies. HPE 3PAR/Primera FlexVolume logs: (per node) $ docker logs -f plugin_container Dynamic Provisioner logs: kubectl logs -f kube-storage-controller-doryd -n kube-system The logs are persisted at /var/log/hpe-dynamic-provisioner.log","title":"Index"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#overview","text":"The HPE 3PAR and Primera Volume Plug-in for Docker leverages Ansible to deploy the 3PAR/Primera driver for Kubernetes in order to provide scalable and persistent storage for stateful applications. Important Using HPE 3PAR/Primera Storage with Kubernetes 1.15 and newer, please use the HPE CSI Driver for Kubernetes . Source code is available in the hpe-storage/python-hpedockerplugin GitHub repo. 
Overview Platform requirements HPE 3PAR/Primera Storage Platform Requirements Deploying to Kubernetes Step 1: Install Ansible Ansible: Connecting to remote nodes Ansible: Check your SSH connections Step 2: Clone the Github repository Step 3: Modify the Ansible hosts file Step 4: Create the properties file File Persona Example Configuration Multiple Backend Example Configuration Step 5: Run the Ansible playbook Step 6: Verify the installation Using Sample StorageClass Test and verify volume provisioning Use case specific examples Snapshot a volume Clone a volume Replicate a containerized volume Import (cutover) a volume Using overrides Upgrade Uninstall StorageClass parameters HPE 3PAR/Primera Storage StorageClass parameters Common parameters for Provisioning and Cloning Cloning/Snapshot parameters Import parameters Replication Support Diagnostics Troubleshooting FlexVolume driver Locations Connectivity ETCD HPE 3PAR/Primera FlexVolume and Dynamic Provisioner driver (doryd) logs Refer to the SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker.","title":"Overview"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#platform_requirements","text":"The HPE 3PAR/Primera FlexVolume driver supports multiple backends that are based on a \"container provider\" architecture.","title":"Platform requirements"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#hpe_3parprimera_storage_platform_requirements","text":"Ensure that you have reviewed the System Requirements . Driver HPE 3PAR/Primera OS Version Release Notes v3.3.1 3PAR OS: 3.3.1 MU5+ Primera OS: 4.0+ v3.3.1 OpenShift Container Platform 3.9, 3.10 and 3.11. Kubernetes 1.10 and above. Redhat/CentOS 7.5+ Note: Refer to SPOCK page for the latest support matrix for HPE 3PAR and HPE Primera Volume Plug-in for Docker.","title":"HPE 3PAR/Primera Storage Platform Requirements"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#deploying_to_kubernetes","text":"The recommended way to deploy and manage the HPE 3PAR and Primera Volume Plug-in for Kubernetes is to use Ansible. Use the following steps to configure Ansible to perform the installation.","title":"Deploying to Kubernetes"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#step_1_install_ansible","text":"Ensure that Ansible (v2.5 to v2.8) is installed. For more information, see Ansible Installation Guide . NOTE: Ansible only needs to be installed on the machine that will be performing the deployment. Ansible does not need to be installed on your Kubernetes cluster. $ pip install ansible $ ansible --version ansible 2.7.12","title":"Step 1: Install Ansible"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#ansible_connecting_to_remote_nodes","text":"Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.","title":"Ansible: Connecting to remote nodes"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#ansible_check_your_ssh_connections","text":"Confirm that you can connect using SSH to all the nodes in your Kubernetes cluster using the same username. 
If necessary, add your public SSH key to the authorized_keys file on those systems.","title":"Ansible: Check your SSH connections"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#step_2_clone_the_github_repository","text":"$ cd ~ $ git clone https://github.com/hpe-storage/python-hpedockerplugin","title":"Step 2: Clone the Github repository"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#step_3_modify_the_ansible_hosts_file","text":"Modify the hosts file to define the Kubernetes/OpenShift Master and Worker nodes. Also define where the HPE etcd cluster will be deployed, this can be done within the cluster or on external servers. Shell $ vi python-hpedockerplugin/ansible_3par_docker_plugin/hosts Yaml [masters] 192.168.1.51 [workers] 192.168.1.52 192.168.1.53 [etcd] 192.168.1.51 192.168.1.52 192.168.1.53","title":"Step 3: Modify the Ansible hosts file"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#step_4_create_the_properties_file","text":"Create the properties/plugin_configuration_properties.yml based on your HPE 3PAR/Primera Storage array configuration. $ vi python-hpedockerplugin/ansible_3par_docker_plugin/properties/plugin_configuration_properties.yml NOTE: Some of the properties are mandatory and must be specified in the properties file while others are optional. INVENTORY: DEFAULT: #Mandatory Parameters-------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - iSCSI hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver hpe3par_ip: <3par_array_IP> hpe3par_username: <3par_user> hpe3par_password: <3par_password> #Specify the 3PAR port - 8080 default hpe3par_port: 8080 hpe3par_cpg: # Plugin version - Required only in DEFAULT backend volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 # Dory installer version - Required for Openshift/Kubernetes setup # Supported versions are dory_installer_v31, dory_installer_v32 dory_installer_version: dory_installer_v32 #Optional Parameters-------------------------------------------------------------------------------- logging: DEBUG hpe3par_snapcpg: FC_r6 #hpe3par_iscsi_chap_enabled: True use_multipath: True #enforce_multipath: False #vlan_tag: True Available Properties Parameters Property Mandatory Default Value Description hpedockerplugin_driver Yes No default value ISCSI/FC driver (hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver/hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver) hpe3par_ip Yes No default value IP address of 3PAR array hpe3par_username Yes No default value 3PAR username hpe3par_password Yes No default value 3PAR password hpe3par_port Yes 8080 3PAR HTTP_PORT port hpe3par_cpg Yes No default value Primary user CPG volume_plugin Yes No default value Name of the docker volume image (only required with DEFAULT backend) encryptor_key No No default value Encryption key string for 3PAR password logging No INFO Log level hpe3par_debug No No default value 3PAR log level suppress_requests_ssl_warning No True Suppress request SSL warnings hpe3par_snapcpg No hpe3par_cpg Snapshot CPG hpe3par_iscsi_chap_enabled No False ISCSI chap toggle hpe3par_iscsi_ips No No default value Comma separated iscsi port IPs (only required if driver is ISCSI based) use_multipath No False Mutltipath toggle enforce_multipath No False Forcefully enforce multipath ssh_hosts_key_file No /root/.ssh/known_hosts Path to hosts key file quorum_witness_ip No No 
default value Quorum witness IP mount_prefix No No default value Alternate mount path prefix hpe3par_iscsi_ips No No default value Comma separated iscsi IPs. If not provided, all iscsi IPs will be read from the array and populated in hpe.conf vlan_tag No False Populates the iscsi_ips which are vlan tagged, only applicable if hpe3par_iscsi_ips is not specified replication_device No No default value Replication backend properties dory_installer_version No dory_installer_v32 Required for Openshift/Kubernetes setup. Dory installer version, supported versions are dory_installer_v31, dory_installer_v32 hpe3par_server_ip_pool Yes No default value This parameter is specific to fileshare. It can be specified as a mix of range of IPs and individual IPs delimited by comma. Each range or individual IP must be followed by the corresponding subnet mask delimited by semi-colon E.g.: IP-Range:Subnet-Mask,Individual-IP:SubnetMask hpe3par_default_fpg_size No No default value This parameter is specific to fileshare. Default fpg size, It must be in the range 1TiB to 64TiB. If not specified here, it defaults to 16TiB Hint Refer to Replication Support for details on enabling Replication support.","title":"Step 4: Create the properties file"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#file_persona_example_configuration","text":"#Mandatory Parameters for Filepersona--------------------------------------------------------------- DEFAULT_FILE: # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - File driver hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_file.HPE3PARFileDriver hpe3par_ip: 192.168.2.50 hpe3par_username: demo_user hpe3par_password: demo_pass hpe3par_cpg: demo_cpg hpe3par_port: 8080 hpe3par_server_ip_pool: 192.168.98.3-192.168.98.10:255.255.192.0 #Optional Parameters for Filepersona---------------------------------------------------------------- hpe3par_default_fpg_size: 16","title":"File Persona Example Configuration"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#multiple_backend_example_configuration","text":"INVENTORY: DEFAULT: #Mandatory Parameters------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - iSCSI hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver hpe3par_ip: 192.168.1.50 hpe3par_username: 3paradm hpe3par_password: 3pardata hpe3par_port: 8080 hpe3par_cpg: FC_r6 # Plugin version - Required only in DEFAULT backend volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 # Dory installer version - Required for Openshift/Kubernetes setup # Supported versions are dory_installer_v31, dory_installer_v32 dory_installer_version: dory_installer_v32 #Optional Parameters-------------------------------------------------------------------------------- #ssh_hosts_key_file: '/root/.ssh/known_hosts' logging: DEBUG #hpe3par_debug: True #suppress_requests_ssl_warning: True #hpe3par_snapcpg: FC_r6 #hpe3par_iscsi_chap_enabled: True #use_multipath: False #enforce_multipath: False #vlan_tag: True #Additional Backend (Optional)---------------------------------------------------------------------- 3PAR1: #Mandatory Parameters------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - Fibre Channel hpedockerplugin_driver: 
hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver hpe3par_ip: 192.168.2.50 hpe3par_username: 3paradm hpe3par_password: 3pardata hpe3par_port: 8080 hpe3par_cpg: FC_r6 #Optional Parameters-------------------------------------------------------------------------------- #ssh_hosts_key_file: '/root/.ssh/known_hosts' logging: DEBUG #hpe3par_debug: True #suppress_requests_ssl_warning: True hpe3par_snapcpg: FC_r6 #use_multipath: False #enforce_multipath: False","title":"Multiple Backend Example Configuration"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#step_5_run_the_ansible_playbook","text":"$ cd python-hpedockerplugin/ansible_3par_docker_plugin/ $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml","title":"Step 5: Run the Ansible playbook"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#step_6_verify_the_installation","text":"Once the playbook has completed successfully, the installer should not show any failures and the PLAY RECAP should look like below. PLAY RECAP *********************************************************************** : ok=85 changed=33 unreachable=0 failed=0 : ok=76 changed=29 unreachable=0 failed=0 : ok=76 changed=29 unreachable=0 failed=0 : ok=70 changed=27 unreachable=0 failed=0 : ok=70 changed=27 unreachable=0 failed=0 localhost : ok=9 changed=3 unreachable=0 failed=0 Verify plugin installation on all nodes. $ docker ps | grep plugin; ssh \"docker ps | grep plugin\";ssh \"docker ps | grep plugin\";ssh \"docker ps | grep plugin\";ssh \"docker ps | grep plugin\" 51b9d4b1d591 hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container a43f6d8f5080 hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container a88af9f46a0d hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container 5b20f16ab3af hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container b0813a22cbd8 hpestorage/legacyvolumeplugin:3.3.1 \"/bin/sh -c ./plugin\u2026\" 12 minutes ago Up 12 minutes plugin_container Verify the HPE FlexVolume driver Pod is running. kubectl get pods -n kube-system | grep doryd NAME READY STATUS RESTARTS AGE kube-storage-controller-doryd-7dd487b446-xr6q2 1/1 Running 0 45s","title":"Step 6: Verify the installation"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#using","text":"Get started using the FlexVolume driver by setting up StorageClass and PVC API objects. See Using for examples. These instructions are provided as an example of how to use the HPE 3PAR/Primera Volume Plug-in with an HPE 3PAR/Primera Storage Array. The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) Tip Some of the examples supported by the HPE 3PAR/Primera FlexVolume driver are available for HPE 3PAR/Primera Storage in the GitHub repo. 
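As a minimal sketch of the copy & paste workflow described above (the StorageClass name sc-example is a hypothetical placeholder; the provisioner and parameters are taken from the examples in this guide), the heredoc form could look like this: kubectl create -f- <<EOF kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-example provisioner: hpe.com/hpe parameters: provisioning: 'thin' cpg: 'FC_r6' EOF The heredoc terminates the input, so no CTRL + D is required with this form. 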
To get started, create a StorageClass API object referencing the hpe-secret and defining additional (optional) StorageClass parameters:","title":"Using"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#sample_storageclass","text":"Sample storage classes can be found for HPE 3PAR/Primera Storage .","title":"Sample StorageClass"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#test_and_verify_volume_provisioning","text":"Create a StorageClass with volume parameters as required. Change the CPG per your requirements. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold provisioner: hpe.com/hpe parameters: provisioning: 'full' cpg: 'SSD_r6' fsOwner: '1001:1001' Create a PersistentVolumeClaim . This makes sure a volume is created and provisioned on your behalf: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: sc-gold-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 25Gi storageClassName: sc-gold Check that a new PersistentVolume is created based on your claim: $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE sc-gold-pvc-13336da3-7ca3-11e9-826c-00505692581f 25Gi RWO Delete Bound default/pvc-gold sc-gold 3s The above output means that the FlexVolume driver successfully provisioned a new volume and bound the requesting PVC to a new PV . The volume is not attached to any node yet. It will only be attached to a node if a workload is scheduled to a specific node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container: kind: Pod apiVersion: v1 metadata: name: pod-nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: export mountPath: \"/usr/share/nginx/html\" volumes: - name: export persistentVolumeClaim: claimName: sc-gold-pvc Check if the Pod is running successfully: $ kubectl get pod pod-nginx NAME READY STATUS RESTARTS AGE pod-nginx 1/1 Running 0 2m29s","title":"Test and verify volume provisioning"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#use_case_specific_examples","text":"These StorageClass examples help guide combinations of options when provisioning volumes.","title":"Use case specific examples"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#snapshot_a_volume","text":"This StorageClass will create a snapshot of a \"production\" volume. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold-snap-mongo provisioner: hpe.com/hpe parameters: virtualCopyOf: \"sc-mongo-10dc1195-779b-11e9-b787-0050569bb07c\"","title":"Snapshot a volume"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#clone_a_volume","text":"This StorageClass will create clones of a \"production\" volume. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold-clone provisioner: hpe.com/hpe parameters: cloneOf: \"sc-gold-2a82c9e5-6213-11e9-8d53-0050569bb07c\"","title":"Clone a volume"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#replicate_a_containerized_volume","text":"This StorageClass will add a standard backend volume to a 3PAR Replication Group. If the replicationGroup specified does not exist, the plugin will create one. See Replication Support for more details on configuring replication. 
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-mongodb-replicated provisioner: hpe.com/hpe parameters: provisioning: 'full' replicationGroup: 'mongodb-app1'","title":"Replicate a containerized volume"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#import_cutover_a_volume","text":"This StorageClass will import an existing 3PAR/Primera volume to Kubernetes. The source volume needs to be offline for the import to succeed. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: import-clone-legacy-prod provisioner: hpe.com/hpe parameters: importVol: \"production-db-vol\"","title":"Import (cutover) a volume"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#using_overrides","text":"The HPE Dynamic Provisioner for Kubernetes (doryd) understands a set of annotation keys a user can set on a PVC . If the corresponding keys exist in the list of the allowOverrides key in the StorageClass , the end-user can tweak certain aspects of the provisioning workflow. This opens up very advanced data services. StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold provisioner: hpe.com/hpe parameters: provisioning: 'full' cpg: 'SSD_r6' fsOwner: '1001:1001' allowOverrides: provisioning,compression,cpg,fsOwner PersistentVolumeClaim object: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc annotations: hpe.com/provisioning: \"thin\" hpe.com/cpg: \"FC_r6\" spec: accessModes: - ReadWriteOnce resources: requests: storage: 25Gi storageClassName: sc-gold This will create a PV thinly provisioned using the FC_r6 CPG.","title":"Using overrides"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#upgrade","text":"In order to upgrade the driver, simply modify the ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml used for the initial deployment and modify hpestorage/legacyvolumeplugin to the latest image from Docker Hub. For example: volume_plugin: hpestorage/legacyvolumeplugin:3.3 Change to: volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 Re-run the installer. $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml","title":"Upgrade"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#uninstall","text":"Run the following to uninstall the FlexVolume driver from the cluster. $ cd ~ $ cd python-hpedockerplugin/ansible_3par_docker_plugin $ ansible-playbook -i hosts uninstall/uninstall_hpe_3par_volume_driver.yml","title":"Uninstall"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#storageclass_parameters","text":"This section highlights all the available StorageClass parameters that are supported.","title":"StorageClass parameters"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#hpe_3parprimera_storage_storageclass_parameters","text":"A StorageClass is used to provision or clone an HPE 3PAR/Primera Storage-backed persistent volume. It can also be used to import an existing HPE 3PAR/Primera Storage volume or clone of a snapshot into the Kubernetes cluster. The parameters are grouped below by those same workflows. A sample StorageClass is provided. Note These are optional parameters.","title":"HPE 3PAR/Primera Storage StorageClass parameters"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#common_parameters_for_provisioning_and_cloning","text":"These parameters are mutable between a parent volume and a clone created from a snapshot. 
Parameter Type Options Example size Integer - size: \"10\" provisioning thin, full, dedupe provisioning: \"thin\" flash-cache Text true, false flash-cache: \"true\" compression boolean true, false compression: \"true\" MountConflictDelay Integer - MountConflictDelay: \"30\" qos-name Text vvset name qos-name: \" \" replicationGroup Text 3PAR RCG name replicationGroup: \"Test-RCG\" fsOwner userId:groupId The user id and group id that should own the root directory of the filesystem. fsMode Octal digits 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem.","title":"Common parameters for Provisioning and Cloning"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#cloningsnapshot_parameters","text":"Either use cloneOf and reference a PVC in the current namespace or use virtualCopyOf and reference a 3PAR/Primera volume name to snapshot/clone and import into Kubernetes. Parameter Type Options Example cloneOf Text volume name cloneOf: \"\" virtualCopyOf Text volume name virtualCopyOf: \"\" expirationHours Integer option of virtualCopyOf expirationHours: \"10\" retentionHours Integer option of virtualCopyOf retentionHours: \"10\"","title":"Cloning/Snapshot parameters"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#import_parameters","text":"Importing volumes to Kubernetes requires the source 3PAR/Primera volume to be offline. Parameter Type Description Example importVol Text volume name importVol: \"\"","title":"Import parameters"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#replication_support","text":"The HPE 3PAR/Primera FlexVolume driver supports array-based synchronous and asynchronous replication. In order to enable replication within the FlexVolume driver, the arrays need to be properly zoned, visible to the Kubernetes cluster, and replication configured. For Peer Persistence, a quorum witness will need to be configured. Once replication is enabled at the array level, the FlexVolume driver will need to be configured. Important Replication support can be enabled during initial deployment through the plugin configuration file. In order to enable replication support post deployment, modify the plugin_configuration_properties.yml used for deployment, add the replication parameter section below, and re-run the Ansible installer. Edit the plugin_configuration_properties.yml file and modify the Optional Replication Section. 
INVENTORY: DEFAULT: #Mandatory Parameters------------------------------------------------------------------------------- # Specify the port to be used by HPE 3PAR plugin etcd cluster host_etcd_port_number: 23790 # Plugin Driver - iSCSI hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver hpe3par_ip: hpe3par_username: hpe3par_password: hpe3par_port: 8080 hpe3par_cpg: FC_r6 # Plugin version - Required only in DEFAULT backend volume_plugin: hpestorage/legacyvolumeplugin:3.3.1 # Dory installer version - Required for Openshift/Kubernetes setup dory_installer_version: dory_installer_v32 #Optional Parameters-------------------------------------------------------------------------------- logging: DEBUG hpe3par_snapcpg: FC_r6 use_multipath: False enforce_multipath: False #Optional Replication Parameters-------------------------------------------------------------------- replication_device: backend_id: remote_3PAR #Quorum Witness required for Peer Persistence only #quorum_witness_ip: replication_mode: synchronous cpg_map: \"local_CPG:remote_CPG\" snap_cpg_map: \"local_copy_CPG:remote_copy_CPG\" hpe3par_ip: hpe3par_username: hpe3par_password: hpe3par_port: 8080 #vlan_tag: False Once the properties file is configured, you can proceed with the standard installation steps .","title":"Replication Support"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#diagnostics","text":"This section outlines a few troubleshooting steps for the HPE 3PAR/Primera FlexVolume driver. This product is supported by HPE. Please consult with your support organization prior to attempting any configuration changes.","title":"Diagnostics"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#troubleshooting_flexvolume_driver","text":"The FlexVolume driver is a binary executed by the kubelet to perform mount/unmount/attach/detach operations as workloads request storage resources. The binary relies on communicating with a socket on the host where the volume plugin responsible for the MUAD operations performs control-plane or data-plane operations against the backend system hosting the actual volumes.","title":"Troubleshooting FlexVolume driver"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#locations","text":"The driver has a configuration file where certain defaults can be tweaked to accommodate a certain behavior. Under normal circumstances, this file does not need any tweaking. The name and the location of the binary varies based on the Kubernetes distribution (the default 'exec' path) and what backend driver is being used. 
In a typical scenario, using 3PAR/Primera, this is expected: Binary: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe Config file: /etc/hpedockerplugin/hpe.conf","title":"Locations"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#connectivity","text":"To verify the FlexVolume binary can actually communicate with the backend volume plugin, issue a faux mount request: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe mount no/op '{\"name\":\"myvol1\"}' If the FlexVolume driver can successfully communicate with the volume plugin socket: {\"status\":\"Failure\",\"message\":\"configured to NOT create volumes\"} In the case of any other output, check if the backend volume plugin is alive: $ docker volume create -d hpe -o help=backends It should output: ================================= NAME STATUS ================================= DEFAULT OK","title":"Connectivity"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#etcd","text":"To verify the etcd members on the nodes, run: $ /usr/bin/etcdctl --endpoints http://:23790 member list It should output: b70ca254f54dd23: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=true 236bf7d5cc7a32d4: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false 445e80419ae8729b: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false e340a5833e93861e: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false f5b5599d719d376e: name= peerURLs=http://:23800 clientURLs=http://:23790 isLeader=false","title":"ETCD"},{"location":"flexvolume_driver/hpe_3par_primera_installer/index.html#hpe_3parprimera_flexvolume_and_dynamic_provisioner_driver_doryd_logs","text":"The HPE 3PAR/Primera FlexVolume driver logs data to the standard output stream. If the logs need to be retained for the long term, use a standard logging solution. Some of the logs on the host are persisted and follow standard logrotate policies. HPE 3PAR/Primera FlexVolume logs: (per node) $ docker logs -f plugin_container Dynamic Provisioner logs: kubectl logs -f kube-storage-controller-doryd -n kube-system The logs are persisted at /var/log/hpe-dynamic-provisioner.log","title":"HPE 3PAR/Primera FlexVolume and Dynamic Provisioner driver (doryd) logs"},{"location":"learn/containers101/index.html","text":"Overview \u00b6 Welcome to the \"101\" section of SCOD. The goal of this section is to create a learning resource for individuals who want to learn about emerging topics in a cloud native world where containers are the focal point. The content is slightly biased towards storage. Mission Statement \u00b6 We aim to provide a learning resource collection that is generic enough to comprehend nuances in the different solutions and paradigms. Hewlett Packard Enterprise Products are highly likely referenced in some examples and resources. We can therefore not claim vendor neutrality nor a Switzerland opinion. External resources are the primary learning assets used to frame certain topics. Let's start the learning journey. 
Overview Mission Statement Cloud Native Computing Key Attributes Learning Resources Practical Exercises Cloud Native Tooling Key Attributes Learning Resources Practical Exercises Cloud Native Storage Key Attributes Learning Resources Practical Exercises Containers Intro Key Attributes Learning Resources Practical Exercises Container Tooling Key Attributes Learning Resources Practical Exercises Container Storage Key Attributes Learning Resources Practical Exercises DevOps Key Attributes Learning Resources Practical Exercises DevOps Tooling Key Attributes Learning Resources Practical Exercises DevOps Storage Key Attributes Learning Resources Practical Exercises Summary Cloud Native Computing \u00b6 The term \"cloud native\" stems from a software development model where resources are consumed as services. Compute, network and storage consumed through APIs, CLIs and web administration interfaces. Consumption is often modeled around paying only for what is being used. The applications deployed into Cloud Native Computing environments are often divided into small chunks that are operated independently, referred to as microservices. On the uprising is a broader adoption of a concept called serverless where your application runs only when called and is billed in milliseconds. Many public cloud vendors provide many already cloud native applications as services on their respective clouds. An example would be to consume a SQL database as a service rather than deploying and managing it by yourself. Key Attributes \u00b6 These are some of the key elements of Cloud Native Computing. Resources are provisioned through complete self-service. API first strategies to promote interoperability and collaboration. Separation of concerns in microservice architectures. High degree of automation of resource provisioning and deprovisioning. Modern languges and frameworks. Infrastructure-as-a-Service (IAAS) Learning Resources \u00b6 Curated list of learning resources for Cloud Native Computing. Webinar: What is Cloud Native and Why Does It Exist? A webinar by WeaveWorks endorsed by the CNCF (Cloud Native Computing Foundation). Market Overview: CNCF Cloud Native Interactive Landscape Many applications and vendors claim to be cloud native. This map is compiled by the CNCF. Reference: 12factor.net A design pattern for microservice architectures. Blog: The rise of cloud native programming languages A blog post that outlines the journey from bare-metal beyond serverless. Blog: 10 Key Attributes of Cloud-native Applications A blog post from thenewstack.io Practical Exercises \u00b6 How to get hands-on experience of Cloud Native Computing. Sign-up on any of the public clouds. Provision an instance and get remote access to the host OS of the instance. Deploy an \"as-a-service\" of an application technology you're familiar with. Connect a client from your instance to your provisioned service. Deploy either web server or Layer-4 load-balancer to give external access to your client application. Cloud Native Tooling \u00b6 Tools to interact with infrastructure and applications come in many shapes and forms. A common pattern is to learn by visually creating and deleting resources to understand an end-state. Once a pattern has been established, either APIs, 3rd party or a custom CLI is used to manage the life-cycle of the deployment in a declarative manner by manipulating RESTful APIs. Also known as Infrastructure-as-Code. Key Attributes \u00b6 These are some of the key elements of Cloud Native Computing Tooling. 
State stored in a Source Code Control System (SCCS). Changes made to state are peer reviewed and automatically tested in non-production environments before being merged and deployed. Industry standard IT automation tools are often used to implement changes. Ansible, Puppet, Salt and Chef are example tools. Public clouds often provide CLIs to manage resources. These are great to prepare, inspect and test deployments with. Configuration and deployment files are often written in a human and machine readable format, such as JSON, YAML or TOML. Learning Resources \u00b6 Curated list of learning resources for Cloud Native Computing Tooling. Blog: Imperative vs Declarative A blog that highlights the fundamental differences between the two. Reference: json.org Definitive guide on JavaScript Object Notation (JSON) data structures. Reference: YAML Syntax Simple guide for YAML Ain't Markup Language (YAML). Reference: RESTful API Tutorial Learn the design principles of REpresentational State Transfer (REST). Screencast: Super-basic Introduction to Ansible The simplest of Ansible tutorials starting with nothing. Practical Exercises \u00b6 How to get hands-on experience of Cloud Native Computing Tooling. Sign-up on AWS. Install the AWS CLI and Ansible in a Linux instance. Configure BOTO . Use the Ansible EC2 module to create and delete an instance. Cloud Native Storage \u00b6 Storage for cloud computing come in many shapes and forms. Compute instances boot off block devices provided by the IaaS through the hypervisor. More devices may be attached for application data to keep host OS and application separate. Most clouds allow these devices to be snapshotted, cloned and reattached to other instances. These block devices are normally offered with different backend media, such as flash or spinning disks. Depending on the use cases and budgets parameters may be tuned to be just right. For unstructured workloads, API driven object storage is the dominant technology due to the dramatic difference in cost and simplicity vs cloud provided block storage. An object is uploaded through an API endpoint with HTTP and automatically distributed (highly configurable) to provide high durability. The URL of the object will remain static for the duration of its lifetime. The main prohibitor for object storage adoption is that existing applications relying on POSIX filesystems need to be rewritten. Key Attributes \u00b6 These are some of the key elements of Cloud Native Storage. Provisioned and attached via APIs through IaaS if using block storage. Data and metadata is managed with RESTful APIs if using object. No backend to manage. Consumers use standard URLs to retrieve data. Highly durable with object storage. Durability equal to a local RAID device for block storage. Some cloud providers offer Filesystem-as-a-Service, normally standard NFS or CIFS. Backup and recovery of application data still needs to managed like traditional storage for block. Multi-region multi-copy persistence for object storage. Learning Resources \u00b6 Curated list of learning resources for Cloud Native Storage. Wikipedia: Object storage Digestible overview of Object Storage. Tutorial: Host images on Amazon S3 A five minute step-by-step guide how to host images on Amazon S3. Reference: Amazon EBS features An overview of typical attributes for cloud provided block storage. Reference: HPE Cloud Storage Cost Calculator Calculate the real costs of cloud storage based on highly dynamic data management environments. 
Practical Exercises \u00b6 How to get hands-on experience of Cloud Native Storage. Setup a S3 compatible object storage server or use a public cloud. Scality has a open source S3 server for non-production use. Configure s3cmd to upload and retrieve files from a bucket. Analyze costs of 100TB of data for one year on Amazon S3 vs Azure Manage Disks. Containers Intro \u00b6 A container is operating system-level virtualization and has been around for quite some time. By definition, the container share the kernel of the host and relies on certain abstractions to be useful. Docker the company made the technology approachable and incredibly more convenient than any predecessor. In the simplest of forms, a container image contains a virtual filesystem that contains only the dependencies the application needs. An example would be to include the Python interpreter if you wrote a program in Python. Containerized applications are primarily designed to run headless. In most cases these applications need to communicate with the outside world or allow inbound traffic depending on the application. Docker containers should be treated as transient, each instance starts in a known state and any data stored inside the virtual filesystem should be treated as ephemeral. This makes it extremely easy and convenient to upgrade and rollback a container. If data is required to persist between upgrades and rollbacks of the container, it needs to be stored outside of the container mapped from the host operating system. The wide adoption of containers are because they're lightweight, reproducible and run everywhere. Iterations of software delivery lifecycles may be cut down to seconds from weeks with the right processes and tools. Container images are layered per change made when the container is built. Each layer has a cryptographic hash and the layer itself can be shared between multiple containers readonly. When a new container is started from an image, the container runtime creates a COW (copy-on-write) filesystem where the particular container data is stored. This is in turn very effective as you only need one copy of a layer on the host. For example, if a bunch of applications are based off a Ubuntu base image, the base image only needs to be stored once on the host. Key Attributes \u00b6 These are some of the key elements of Containers. Runs on modern architectures and operating systems. Not necessarily as a single source image. Headless services (webservers, databases etc) in microservice architectures. Often orchestrated on compute clusters like Kubernetes, Apache Mesos Marathon or Docker Swarm. Software vendors often provide official and well tested container images for their applications. Learning Resources \u00b6 Curated list of learning resources for Containers. Interactive: Play with Docker Great interactive tutorials where you learn how to build, ship and run containers. Also has a follow-on interactive training on Kubernetes. Tutorial: Docker for beginners Comprehensive introduction to get started with Docker all the way to running it on a PaaS. Cartoon: The Illustrated Children's Guide to Kubernetes Illustrative and easy to grasp story of what Kubernetes is. Bonus cartoon: A Kubernetes story: Phippy goes to the zoo A high production quality cartoon explaining Kubernetes API objects. Blog: How to choose the right container orchestration and how to deploy it A brief overview of container orchestrators. Standards: opencontainers.org Components of a container system is standards based. 
The Open Container Initiative is the standards body. Blog/reference: Demystifying container runtimes Discusses different container runtime engines. Practical Exercises \u00b6 How to get hands-on experience of Containers. Install Docker Desktop or just Docker if using Linux. Click through the Get Started tutorial. Advanced: Run any of the images built in the tutorial on a public cloud service. Container Tooling \u00b6 Most of the tooling around containers is centered around what particular container orchestrator or development environment is being utilized. Usage of the tools differ greatly depending on the role of the user. As an operator the toolkit includes both IaaS and managing the platform to perform upgrades, user management and peripheral services such as storage and ingress load balancers. While many popular platforms today are based on Kubernetes, the tooling has nuances. Upstream Kubernetes uses kubectl , Red Hat OpenShift uses the OpenShift CLI, oc . With other platforms such as Rancher, nearly all management can be done through a web UI. Key Attributes \u00b6 These are some of the key elements of Container Tooling. Most tools are simple, yet powerful and follow UNIX principles of doing one thing and doing it well. The docker and kubectl CLIs are the two most dominant for low level management. Workload management usually relies on external tools for simplicity, such as docker-compose , kompose and helm . Some platforms have ancillary tools to marry the IaaS with the cluster orchestrator. Such an example is rke for Rancher and gkectl for GKE On-Prem. The public clouds have builtin container orchestrator and container management into their native CLIs, such as aws and gcloud . Client side tools normally rely on environment variables and user environment configuration files that store credentials, API endpoint locations and other security aspects. Learning Resources \u00b6 Curated list of learning resources for Container Tooling. Reference: Use the Docker command line Docker CLI reference. Reference: The kubectl Cheat Sheet The kubectl cheat sheet. Utility: kustomize.io Kubernetes native configuration managment. Tutorial: The Ultimate Guide to Podman, Skopeo and Buildah An alternative container toolchain to Docker using Podman, Buildah and Skopeo. Practical Exercises \u00b6 How to get hands-on experience of Container Tooling. Install Docker Desktop or just Docker if using Linux. Build a container image of an application you understand (docker build). Run the container image locally (docker run). Ship it to Docker Hub (docker push). Create an Amazon EKS cluster or equivalent. Retrieve the kubeconfig file. Run kubectl get nodes on your local machine. Start a Pod using the container image built in previous exercise. Container Storage \u00b6 Due to the ephemeral nature of a container, storage is predominantly served from the host the container is running on and is dependent on which container runtime is being used where data is stored. In the case of Docker, the overlay filesystems are under /var/lib/docker . If a certain path inside the container need to persist between upgrades, restarts on a different host or any other operation that will lose the locality of the data, the mount point needs to be replaced with a \"bind\" mount from the host. There are also container runtime technologies that are designed to persist the entire container, effectively treating the container more like a long-lived Virtual Machine. Examples are Canonical LXD, WeaveWorks Footloose and HPE BlueData. 
This is particularly important for applications that rely on their projected node info to remain static throughout their entire lifecycle. We can then begin to categorize containers into three main categories based on their lifecycle vs persistence needs. Stateless Containers No persistence needed across restarts/upgrades/rollbacks Stateful Containers Require certain mountpoints to persist across restarts/upgrades/rollbacks Persistent Containers Require static node identity information across restarts/upgrades/rollbacks Some modern Software-defined Storage solutions are offered to run alongside applications in a distributed fashion. Effectively enforcing multi-way replicas for reliability, which eats into the CPU and memory resources of the IaaS bill. This also introduces the dilemma of effectively locking the data into the container orchestrator and its compute nodes. Although it's convenient for developers to become self-entitled storage administrators. To stay in control of the data and remain mobile, storing data outside of the container orchestrator is preferable. Many container orchestrators provide plugins for external storage, some are built-in and some are supplied and supported by the storage vendor. Public clouds provide storage drivers for their IaaS storage services directly to the container orchestrator. This is a widely popular pattern we're also seeing in BYO IaaS solutions such as VMware vSphere. Key Attributes \u00b6 These are some of the key elements of Container Storage. Ephemeral storage needs to be fast and expandable as environments scale with more diverse applications. Data for stateful containers is ideally stored outside of the container orchestrator, either the IaaS or external highly-available storage. Persistent containers require a niche storage solution tightly coupled with the container runtime and the container orchestrator or scheduler. Most storage solutions provide an \"access mode\" often referred to as ReadWriteOnce (RWO) which only allows one Pod (in the Kubernetes case) or containers from the same host to access the volume. To allow multiple Pods and containers from multiple hosts, a distributed filesystem or an NFS server (widely adopted) is required to provide ReadWriteMany (RWX) access. Learning Resources \u00b6 Curated list of learning resources for Container Storage. Talk: Kubernetes Storage Lingo 101 A talk that lays out the nomenclature for storage in Kubernetes in an understandable way. Reference: Docker: Volumes Fundamental reference on how to make mount points persist for containers. Reference: Kubernetes: Volumes Using volumes in Kubernetes Pods. Podcast: Kubernetes Storage with Saad Ali Essential listen to understand the difference between high-availability and automatic recovery. Practical Exercises \u00b6 How to get hands-on experience of Container Storage. Use Docker Desktop. Replace a mount point in an interactive container with a mount point from the host. Deploy an Amazon EKS or equivalent cluster. Create a Persistent Volume Claim. Run kubectl get pv -o yaml and match the Persistent Volume against the IaaS block volumes. DevOps \u00b6 There are many interpretations of what DevOps \"is\". A bird's eye view is that there are people, processes and tools that come together to drive business outcomes through value streams. There are many core principles that could ultimately drive the outcome and there is no cookie cutter solution for any given organization. 
Breaking down problems into small pieces and creating safe systems to work in and eliminate toil are some of those principles. Agile development and lean manufacturing are both predecessors and role models for driving DevOps principles. Key Attributes \u00b6 These are some of the key elements of DevOps. Well-defined processes, safe and proportionally sized work units for each step in a value stream. Autonomy through tooling backed by well-defined processes. Operations, development and stakeholders unified behind common goals. Continuous improvement, robust feedback loops and problem \"swarming\" of value streams. All work in the value stream must be visible and measurable. Buy-in from the CEO on down to prevent failure of DevOps implementations. DevOps is essential to be successful when investing in a \"digital transformation\". Learning Resources \u00b6 Curated list of learning resources for DevOps. Opinions: Define DevOps: What is DevOps Industry voices defining what DevOps is and means. Blog: Toil: Finally a Name For a Problem We've All Felt Broad definition of toil. Hardbacks: DevOps Books Author Gene Kim has written novels and \"cookbooks\" of DevOps. Reference: The DevOps Institute Focuses on the human side of successfully implementing DevOps. Talk: Bank on Open Source for DevOps Success - Capital One How a disruptive company differentiates with DevOps at its core. Practical Exercises \u00b6 How to get hands-on experience of DevOps. Getting practical with DevOps requires an organization and a value stream. Listen to The Phoenix Project for a glimpse into how to implement DevOps. DevOps Tooling \u00b6 The tools in DevOps are centered around the processes and value streams that support the business. Said tools also promote visibility, openness and collaboration. Inherently following security patterns, audit trails and safety. No one person should be able to misuse one tool to cause a major disturbance in a value stream without quick remediation plans. Many times CI/CD (Continuous Integration, Continuous Delivery and/or Deployment) is considered synonymous with DevOps. That is both right and wrong. If the value stream inherently contains software, yes. Key Attributes \u00b6 These are some of the key elements of DevOps Tooling. Just the right amount of privileges for a particular task. Issue/project tracking, kanban, source code control, CI/CD, logging and reporting are essential. Visibility and traceability is a key element, no work should be hidden. By person or machine. Learning Resources \u00b6 Curated list of learning resources for DevOps Tooling. Reference: Periodic Table of DevOps Tools The most comprehensive chart of current and popular DevOps tools. Blog: Continuous integration vs. continuous delivery vs. continuous deployment Distinguish the components of CI/CD and what each facet encompass. Reference: Kanban Understanding Kanban helps understanding flow of work to adjust tools to work for humans, not against them. Practical Exercises \u00b6 How to get hands-on experience of DevOps Tooling. Study some of the tools available to perform automated tasks on complex systems. Jenkins Rundeck Ansible Tower Morpheus Data Delphix The common denominator across these platforms is the observability and the ability to limit scope of controls through Role-based Access Control (RBAC). Ensuring the tasks are well-defined, automated, scoped and safe to operate. DevOps Storage \u00b6 There aren't any particular storage paradigms (file/block/object) that are associated with DevOps. 
It's the implementation of the application and how it consumes storage that we vaguely may associate with DevOps. It's more of the practice that the right security controls are in place and whomever needs storage resource are fully self serviced. Human or machine. Key Attributes \u00b6 These are some of the key elements of DevOps Storage. API driven through RBAC. Ensuring automation may put in place for the endpoint or person that needs access to the resource. Rich data management. If a value stream only needs a low performing read-only view of a certain dataset, resources supporting the value stream should only have read-only access with performance constrains. Agile and mobile. At will, data should be made available for a certain application or resource for its purpose. Whether it's in the public cloud, on-prem or as-a-service through safe and secure automation. Learning Resources \u00b6 Curated list of learning resources for DevOps Storage. Blog: Is Your Storage Too Slow for DevOps? Characterization of DevOps Storage attributes. Practical Exercises \u00b6 How to get hands-on experience of DevOps Storage. Familiarize yourself with a storage system's RESTful API and automation capabilities. Deploy an Ansible Tower trial. Write an Ansible playbook that creates a storage resource on said system. Create a job in Ansible Tower with the playbook and make it available to a restricted user. Summary \u00b6 If you have any suggestions or comments, head over to GitHub and file a PR or leave an issue.","title":"Overview"},{"location":"learn/containers101/index.html#overview","text":"Welcome to the \"101\" section of SCOD. The goal of this section is to create a learning resource for individuals who want to learn about emerging topics in a cloud native world where containers are the focal point. The content is slightly biased towards storage.","title":"Overview"},{"location":"learn/containers101/index.html#mission_statement","text":"We aim to provide a learning resource collection that is generic enough to comprehend nuances in the different solutions and paradigms. Hewlett Packard Enterprise Products are highly likely referenced in some examples and resources. We can therefore not claim vendor neutrality nor a Switzerland opinion. External resources are the primary learning assets used to frame certain topics. Let's start the learning journey. Overview Mission Statement Cloud Native Computing Key Attributes Learning Resources Practical Exercises Cloud Native Tooling Key Attributes Learning Resources Practical Exercises Cloud Native Storage Key Attributes Learning Resources Practical Exercises Containers Intro Key Attributes Learning Resources Practical Exercises Container Tooling Key Attributes Learning Resources Practical Exercises Container Storage Key Attributes Learning Resources Practical Exercises DevOps Key Attributes Learning Resources Practical Exercises DevOps Tooling Key Attributes Learning Resources Practical Exercises DevOps Storage Key Attributes Learning Resources Practical Exercises Summary","title":"Mission Statement"},{"location":"learn/containers101/index.html#cloud_native_computing","text":"The term \"cloud native\" stems from a software development model where resources are consumed as services. Compute, network and storage consumed through APIs, CLIs and web administration interfaces. Consumption is often modeled around paying only for what is being used. 
The applications deployed into Cloud Native Computing environments are often divided into small chunks that are operated independently, referred to as microservices. On the uprising is a broader adoption of a concept called serverless where your application runs only when called and is billed in milliseconds. Many public cloud vendors provide many already cloud native applications as services on their respective clouds. An example would be to consume a SQL database as a service rather than deploying and managing it by yourself.","title":"Cloud Native Computing"},{"location":"learn/containers101/index.html#key_attributes","text":"These are some of the key elements of Cloud Native Computing. Resources are provisioned through complete self-service. API first strategies to promote interoperability and collaboration. Separation of concerns in microservice architectures. High degree of automation of resource provisioning and deprovisioning. Modern languges and frameworks. Infrastructure-as-a-Service (IAAS)","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources","text":"Curated list of learning resources for Cloud Native Computing. Webinar: What is Cloud Native and Why Does It Exist? A webinar by WeaveWorks endorsed by the CNCF (Cloud Native Computing Foundation). Market Overview: CNCF Cloud Native Interactive Landscape Many applications and vendors claim to be cloud native. This map is compiled by the CNCF. Reference: 12factor.net A design pattern for microservice architectures. Blog: The rise of cloud native programming languages A blog post that outlines the journey from bare-metal beyond serverless. Blog: 10 Key Attributes of Cloud-native Applications A blog post from thenewstack.io","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises","text":"How to get hands-on experience of Cloud Native Computing. Sign-up on any of the public clouds. Provision an instance and get remote access to the host OS of the instance. Deploy an \"as-a-service\" of an application technology you're familiar with. Connect a client from your instance to your provisioned service. Deploy either web server or Layer-4 load-balancer to give external access to your client application.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#cloud_native_tooling","text":"Tools to interact with infrastructure and applications come in many shapes and forms. A common pattern is to learn by visually creating and deleting resources to understand an end-state. Once a pattern has been established, either APIs, 3rd party or a custom CLI is used to manage the life-cycle of the deployment in a declarative manner by manipulating RESTful APIs. Also known as Infrastructure-as-Code.","title":"Cloud Native Tooling"},{"location":"learn/containers101/index.html#key_attributes_1","text":"These are some of the key elements of Cloud Native Computing Tooling. State stored in a Source Code Control System (SCCS). Changes made to state are peer reviewed and automatically tested in non-production environments before being merged and deployed. Industry standard IT automation tools are often used to implement changes. Ansible, Puppet, Salt and Chef are example tools. Public clouds often provide CLIs to manage resources. These are great to prepare, inspect and test deployments with. 
Configuration and deployment files are often written in a human and machine readable format, such as JSON, YAML or TOML.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_1","text":"Curated list of learning resources for Cloud Native Computing Tooling. Blog: Imperative vs Declarative A blog that highlights the fundamental differences between the two. Reference: json.org Definitive guide on JavaScript Object Notation (JSON) data structures. Reference: YAML Syntax Simple guide for YAML Ain't Markup Language (YAML). Reference: RESTful API Tutorial Learn the design principles of REpresentational State Transfer (REST). Screencast: Super-basic Introduction to Ansible The simplest of Ansible tutorials starting with nothing.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_1","text":"How to get hands-on experience of Cloud Native Computing Tooling. Sign-up on AWS. Install the AWS CLI and Ansible in a Linux instance. Configure BOTO . Use the Ansible EC2 module to create and delete an instance.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#cloud_native_storage","text":"Storage for cloud computing come in many shapes and forms. Compute instances boot off block devices provided by the IaaS through the hypervisor. More devices may be attached for application data to keep host OS and application separate. Most clouds allow these devices to be snapshotted, cloned and reattached to other instances. These block devices are normally offered with different backend media, such as flash or spinning disks. Depending on the use cases and budgets parameters may be tuned to be just right. For unstructured workloads, API driven object storage is the dominant technology due to the dramatic difference in cost and simplicity vs cloud provided block storage. An object is uploaded through an API endpoint with HTTP and automatically distributed (highly configurable) to provide high durability. The URL of the object will remain static for the duration of its lifetime. The main prohibitor for object storage adoption is that existing applications relying on POSIX filesystems need to be rewritten.","title":"Cloud Native Storage"},{"location":"learn/containers101/index.html#key_attributes_2","text":"These are some of the key elements of Cloud Native Storage. Provisioned and attached via APIs through IaaS if using block storage. Data and metadata is managed with RESTful APIs if using object. No backend to manage. Consumers use standard URLs to retrieve data. Highly durable with object storage. Durability equal to a local RAID device for block storage. Some cloud providers offer Filesystem-as-a-Service, normally standard NFS or CIFS. Backup and recovery of application data still needs to managed like traditional storage for block. Multi-region multi-copy persistence for object storage.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_2","text":"Curated list of learning resources for Cloud Native Storage. Wikipedia: Object storage Digestible overview of Object Storage. Tutorial: Host images on Amazon S3 A five minute step-by-step guide how to host images on Amazon S3. Reference: Amazon EBS features An overview of typical attributes for cloud provided block storage. 
Reference: HPE Cloud Storage Cost Calculator Calculate the real costs of cloud storage based on highly dynamic data management environments.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_2","text":"How to get hands-on experience of Cloud Native Storage. Setup a S3 compatible object storage server or use a public cloud. Scality has a open source S3 server for non-production use. Configure s3cmd to upload and retrieve files from a bucket. Analyze costs of 100TB of data for one year on Amazon S3 vs Azure Manage Disks.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#containers_intro","text":"A container is operating system-level virtualization and has been around for quite some time. By definition, the container share the kernel of the host and relies on certain abstractions to be useful. Docker the company made the technology approachable and incredibly more convenient than any predecessor. In the simplest of forms, a container image contains a virtual filesystem that contains only the dependencies the application needs. An example would be to include the Python interpreter if you wrote a program in Python. Containerized applications are primarily designed to run headless. In most cases these applications need to communicate with the outside world or allow inbound traffic depending on the application. Docker containers should be treated as transient, each instance starts in a known state and any data stored inside the virtual filesystem should be treated as ephemeral. This makes it extremely easy and convenient to upgrade and rollback a container. If data is required to persist between upgrades and rollbacks of the container, it needs to be stored outside of the container mapped from the host operating system. The wide adoption of containers are because they're lightweight, reproducible and run everywhere. Iterations of software delivery lifecycles may be cut down to seconds from weeks with the right processes and tools. Container images are layered per change made when the container is built. Each layer has a cryptographic hash and the layer itself can be shared between multiple containers readonly. When a new container is started from an image, the container runtime creates a COW (copy-on-write) filesystem where the particular container data is stored. This is in turn very effective as you only need one copy of a layer on the host. For example, if a bunch of applications are based off a Ubuntu base image, the base image only needs to be stored once on the host.","title":"Containers Intro"},{"location":"learn/containers101/index.html#key_attributes_3","text":"These are some of the key elements of Containers. Runs on modern architectures and operating systems. Not necessarily as a single source image. Headless services (webservers, databases etc) in microservice architectures. Often orchestrated on compute clusters like Kubernetes, Apache Mesos Marathon or Docker Swarm. Software vendors often provide official and well tested container images for their applications.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_3","text":"Curated list of learning resources for Containers. Interactive: Play with Docker Great interactive tutorials where you learn how to build, ship and run containers. Also has a follow-on interactive training on Kubernetes. Tutorial: Docker for beginners Comprehensive introduction to get started with Docker all the way to running it on a PaaS. 
Cartoon: The Illustrated Children's Guide to Kubernetes Illustrative and easy to grasp story of what Kubernetes is. Bonus cartoon: A Kubernetes story: Phippy goes to the zoo A high production quality cartoon explaining Kubernetes API objects. Blog: How to choose the right container orchestration and how to deploy it A brief overview of container orchestrators. Standards: opencontainers.org Components of a container system is standards based. The Open Container Initiative is the standards body. Blog/reference: Demystifying container runtimes Discusses different container runtime engines.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_3","text":"How to get hands-on experience of Containers. Install Docker Desktop or just Docker if using Linux. Click through the Get Started tutorial. Advanced: Run any of the images built in the tutorial on a public cloud service.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#container_tooling","text":"Most of the tooling around containers is centered around what particular container orchestrator or development environment is being utilized. Usage of the tools differ greatly depending on the role of the user. As an operator the toolkit includes both IaaS and managing the platform to perform upgrades, user management and peripheral services such as storage and ingress load balancers. While many popular platforms today are based on Kubernetes, the tooling has nuances. Upstream Kubernetes uses kubectl , Red Hat OpenShift uses the OpenShift CLI, oc . With other platforms such as Rancher, nearly all management can be done through a web UI.","title":"Container Tooling"},{"location":"learn/containers101/index.html#key_attributes_4","text":"These are some of the key elements of Container Tooling. Most tools are simple, yet powerful and follow UNIX principles of doing one thing and doing it well. The docker and kubectl CLIs are the two most dominant for low level management. Workload management usually relies on external tools for simplicity, such as docker-compose , kompose and helm . Some platforms have ancillary tools to marry the IaaS with the cluster orchestrator. Such an example is rke for Rancher and gkectl for GKE On-Prem. The public clouds have builtin container orchestrator and container management into their native CLIs, such as aws and gcloud . Client side tools normally rely on environment variables and user environment configuration files that store credentials, API endpoint locations and other security aspects.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_4","text":"Curated list of learning resources for Container Tooling. Reference: Use the Docker command line Docker CLI reference. Reference: The kubectl Cheat Sheet The kubectl cheat sheet. Utility: kustomize.io Kubernetes native configuration managment. Tutorial: The Ultimate Guide to Podman, Skopeo and Buildah An alternative container toolchain to Docker using Podman, Buildah and Skopeo.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_4","text":"How to get hands-on experience of Container Tooling. Install Docker Desktop or just Docker if using Linux. Build a container image of an application you understand (docker build). Run the container image locally (docker run). Ship it to Docker Hub (docker push). Create an Amazon EKS cluster or equivalent. Retrieve the kubeconfig file. Run kubectl get nodes on your local machine. 
Start a Pod using the container image built in previous exercise.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#container_storage","text":"Due to the ephemeral nature of a container, storage is predominantly served from the host the container is running on and is dependent on which container runtime is being used where data is stored. In the case of Docker, the overlay filesystems are under /var/lib/docker . If a certain path inside the container need to persist between upgrades, restarts on a different host or any other operation that will lose the locality of the data, the mount point needs to be replaced with a \"bind\" mount from the host. There are also container runtime technologies that are designed to persist the entire container, effectively treating the container more like a long-lived Virtual Machine. Examples are Canonical LXD, WeaveWorks Footloose and HPE BlueData. This is particularly important for applications that rely on their projected node info to remain static throughout their entire lifecycle. We can then begin to categorize containers into three main categories based on their lifecycle vs persistence needs. Stateless Containers No persistence needed across restarts/upgrades/rollbacks Stateful Containers Require certain mountpoints to persist across restarts/upgrades/rollbacks Persistent Containers Require static node identity information across restarts/upgrades/rollbacks Some modern Software-defined Storage solutions are offered to run alongside applications in a distributed fashion. Effectively enforcing multi-way replicas for reliability, which eats into the CPU and memory resources of the IaaS bill. This also introduces the dilemma of effectively locking the data into the container orchestrator and its compute nodes. Although it's convenient for developers to become self-entitled storage administrators. To stay in control of the data and remain mobile, storing data outside of the container orchestrator is preferable. Many container orchestrators provide plugins for external storage, some are built-in and some are supplied and supported by the storage vendor. Public clouds provide storage drivers for their IaaS storage services directly to the container orchestrator. This is a widely popular pattern we're also seeing in BYO IaaS solutions such as VMware vSphere.","title":"Container Storage"},{"location":"learn/containers101/index.html#key_attributes_5","text":"These are some of the key elements of Container Storage. Ephemeral storage needs to be fast and expandable as environments scale with more diverse applications. Data for stateful containers is ideally stored outside of the container orchestrator, either the IaaS or external highly-available storage. Persistent containers require a niche storage solution tightly coupled with the container runtime and the container orchestrator or scheduler. Most storage solutions provide an \"access mode\" often referred to as ReadWriteOnce (RWO) which only allows one Pod (in the Kubernetes case) or containers from the same host to access the volume. To allow multiple Pods and containers from multiple hosts, a distributed filesystem or an NFS server (widely adopted) is required to provide ReadWriteMany (RWX) access.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_5","text":"Curated list of learning resources for Container Storage. Talk: Kubernetes Storage Lingo 101 A talk that lays out the nomenclature for storage in Kubernetes in an understandable way. 
Reference: Docker: Volumes Fundamental reference on how to make mount points persist for containers. Reference: Kubernetes: Volumes Using volumes in Kubernetes Pods. Podcast: Kubernetes Storage with Saad Ali Essential listening to understand the difference between high-availability and automatic recovery.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_5","text":"How to get hands-on experience of Container Storage. Use Docker Desktop. Replace a mount point in an interactive container with a mount point from the host. Deploy an Amazon EKS or equivalent cluster. Create a Persistent Volume Claim. Run kubectl get pv -o yaml and match the Persistent Volume against the IaaS block volumes.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#devops","text":"There are many interpretations of what DevOps \"is\". A bird's eye view is that there are people, processes and tools that come together to drive business outcomes through value streams. There are many core principles that could ultimately drive the outcome and no cookie cutter solution for any given organization. Breaking down problems into small pieces and creating safe systems to work in and eliminate toil are some of those principles. Agile development and lean manufacturing are both predecessors and role models for driving DevOps principles.","title":"DevOps"},{"location":"learn/containers101/index.html#key_attributes_6","text":"These are some of the key elements of DevOps. Well-defined processes, safe and proportionally sized work units for each step in a value stream. Autonomy through tooling backed by well-defined processes. Operations, development and stakeholders unified behind common goals. Continuous improvement, robust feedback loops and problem \"swarming\" of value streams. All work in the value stream must be visible and measurable. Buy-in from the CEO on down to prevent failure of DevOps implementations. DevOps is essential for success when investing in a \"digital transformation\".","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_6","text":"Curated list of learning resources for DevOps. Opinions: Define DevOps: What is DevOps Industry voices defining what DevOps is and means. Blog: Toil: Finally a Name For a Problem We've All Felt Broad definition of toil. Hardbacks: DevOps Books Author Gene Kim has written novels and \"cookbooks\" of DevOps. Reference: The DevOps Institute Focuses on the human side of successfully implementing DevOps. Talk: Bank on Open Source for DevOps Success - Capital One How a disruptive company differentiates with DevOps at its core.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_6","text":"How to get hands-on experience of DevOps. Getting practical with DevOps requires an organization and a value stream. Listen to The Phoenix Project for a glimpse into how to implement DevOps.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#devops_tooling","text":"The tools in DevOps are centered around the processes and value streams that support the business. Said tools also promote visibility, openness and collaboration while inherently following security patterns, audit trails and safety. No one person should be able to misuse one tool to cause a major disturbance in a value stream without quick remediation plans. Many times CI/CD (Continuous Integration, Continuous Delivery and/or Deployment) is considered synonymous with DevOps. 
That is both right and wrong. If the value stream inherently contains software, yes.","title":"DevOps Tooling"},{"location":"learn/containers101/index.html#key_attributes_7","text":"These are some of the key elements of DevOps Tooling. Just the right amount of privileges for a particular task. Issue/project tracking, kanban, source code control, CI/CD, logging and reporting are essential. Visibility and traceability are key elements; no work should be hidden, by person or machine.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_7","text":"Curated list of learning resources for DevOps Tooling. Reference: Periodic Table of DevOps Tools The most comprehensive chart of current and popular DevOps tools. Blog: Continuous integration vs. continuous delivery vs. continuous deployment Distinguish the components of CI/CD and what each facet encompasses. Reference: Kanban Understanding Kanban helps in understanding the flow of work to adjust tools to work for humans, not against them.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_7","text":"How to get hands-on experience of DevOps Tooling. Study some of the tools available to perform automated tasks on complex systems. Jenkins Rundeck Ansible Tower Morpheus Data Delphix The common denominator across these platforms is the observability and the ability to limit the scope of controls through Role-based Access Control (RBAC), ensuring the tasks are well-defined, automated, scoped and safe to operate.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#devops_storage","text":"There aren't any particular storage paradigms (file/block/object) that are associated with DevOps. It's the implementation of the application and how it consumes storage that we may vaguely associate with DevOps. It's more about the practice of having the right security controls in place so that whoever needs storage resources is fully self-serviced, human or machine.","title":"DevOps Storage"},{"location":"learn/containers101/index.html#key_attributes_8","text":"These are some of the key elements of DevOps Storage. API driven through RBAC, ensuring automation may be put in place for the endpoint or person that needs access to the resource. Rich data management. If a value stream only needs a low-performing read-only view of a certain dataset, resources supporting the value stream should only have read-only access with performance constraints. Agile and mobile. At will, data should be made available for a certain application or resource for its purpose, whether it's in the public cloud, on-prem or as-a-service, through safe and secure automation.","title":"Key Attributes"},{"location":"learn/containers101/index.html#learning_resources_8","text":"Curated list of learning resources for DevOps Storage. Blog: Is Your Storage Too Slow for DevOps? Characterization of DevOps Storage attributes.","title":"Learning Resources"},{"location":"learn/containers101/index.html#practical_exercises_8","text":"How to get hands-on experience of DevOps Storage. Familiarize yourself with a storage system's RESTful API and automation capabilities. Deploy an Ansible Tower trial. Write an Ansible playbook that creates a storage resource on said system (see the playbook sketch below). 
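A minimal sketch of such a playbook, assuming a generic REST endpoint; the URL, volume name and credential variables are placeholders and not any specific vendor's API:

```yaml
---
- name: Create a volume for a value stream
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Request a 100 GiB volume from the array's REST API
      ansible.builtin.uri:
        url: https://array.example.com/api/v1/volumes   # placeholder endpoint
        method: POST
        url_username: "{{ array_username }}"
        url_password: "{{ array_password }}"
        force_basic_auth: true
        body_format: json
        body:
          name: devops-vol-01
          size_gib: 100
        status_code: [200, 201]
        validate_certs: false
```

In Ansible Tower, the credentials would typically be injected from a vaulted credential object rather than plain playbook variables.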
Create a job in Ansible Tower with the playbook and make it available to a restricted user.","title":"Practical Exercises"},{"location":"learn/containers101/index.html#summary","text":"If you have any suggestions or comments, head over to GitHub and file a PR or leave an issue.","title":"Summary"},{"location":"learn/csi_primitives/index.html","text":"Overview \u00b6 This tutorial was presented at KubeCon North America 2020 Virtual. Content is relevant up to Kubernetes 1.19. Presentation \u00b6 Watch on YouTube Hands-on Labs \u00b6 These are the Asciinema cast files used in the demo. If there's something in the demo you're particularly interested in, copy the text content from these embedded players. Lab 1: Install a CSI driver \u00b6 Cue on YouTube Lab 2: Dynamic provisioning \u00b6 Cue on YouTube Lab 3: Deploy a StatefulSet \u00b6 Cue on YouTube Lab 4: Create VolumeSnapshots \u00b6 Cue on YouTube Lab 5: Clone from VolumeSnapshots \u00b6 Cue on YouTube Lab 6: Clone from PVC \u00b6 Cue on YouTube Lab 7: Restore from VolumeSnapshots \u00b6 Cue on YouTube Lab 8: Using Raw Block Storage \u00b6 Cue on YouTube Lab 9: Install Rook to leverage Raw Block Storage \u00b6 Cue on YouTube Lab 10: Using Ephemeral Local Volumes \u00b6 Cue on YouTube Lab 11: Using Generic Ephemeral Volumes \u00b6 Cue on YouTube Additional resources \u00b6 Source files for the Asciinema cast files and slide deck is available on GitHub .","title":"Introduction to CSI Primitives"},{"location":"learn/csi_primitives/index.html#overview","text":"This tutorial was presented at KubeCon North America 2020 Virtual. Content is relevant up to Kubernetes 1.19.","title":"Overview"},{"location":"learn/csi_primitives/index.html#presentation","text":"Watch on YouTube","title":"Presentation"},{"location":"learn/csi_primitives/index.html#hands-on_labs","text":"These are the Asciinema cast files used in the demo. 
If there's something in the demo you're particularly interested in, copy the text content from these embedded players.","title":"Hands-on Labs"},{"location":"learn/csi_primitives/index.html#lab_1_install_a_csi_driver","text":"Cue on YouTube","title":"Lab 1: Install a CSI driver"},{"location":"learn/csi_primitives/index.html#lab_2_dynamic_provisioning","text":"Cue on YouTube","title":"Lab 2: Dynamic provisioning"},{"location":"learn/csi_primitives/index.html#lab_3_deploy_a_statefulset","text":"Cue on YouTube","title":"Lab 3: Deploy a StatefulSet"},{"location":"learn/csi_primitives/index.html#lab_4_create_volumesnapshots","text":"Cue on YouTube","title":"Lab 4: Create VolumeSnapshots"},{"location":"learn/csi_primitives/index.html#lab_5_clone_from_volumesnapshots","text":"Cue on YouTube","title":"Lab 5: Clone from VolumeSnapshots"},{"location":"learn/csi_primitives/index.html#lab_6_clone_from_pvc","text":"Cue on YouTube","title":"Lab 6: Clone from PVC"},{"location":"learn/csi_primitives/index.html#lab_7_restore_from_volumesnapshots","text":"Cue on YouTube","title":"Lab 7: Restore from VolumeSnapshots"},{"location":"learn/csi_primitives/index.html#lab_8_using_raw_block_storage","text":"Cue on YouTube","title":"Lab 8: Using Raw Block Storage"},{"location":"learn/csi_primitives/index.html#lab_9_install_rook_to_leverage_raw_block_storage","text":"Cue on YouTube","title":"Lab 9: Install Rook to leverage Raw Block Storage"},{"location":"learn/csi_primitives/index.html#lab_10_using_ephemeral_local_volumes","text":"Cue on YouTube","title":"Lab 10: Using Ephemeral Local Volumes"},{"location":"learn/csi_primitives/index.html#lab_11_using_generic_ephemeral_volumes","text":"Cue on YouTube","title":"Lab 11: Using Generic Ephemeral Volumes"},{"location":"learn/csi_primitives/index.html#additional_resources","text":"Source files for the Asciinema cast files and slide deck is available on GitHub .","title":"Additional resources"},{"location":"learn/csi_workshop/index.html","text":"Welcome to the Hack Shack! \u00b6 The recorded CSI workshop available in the Video Gallery is now available on-demand, as a self-paced and interactive workshop hosted by the HPE Developer Community. All you have to do is register here . A string of e-mails will setup your own sandbox to perform the exercises at your own pace. The environment will have a time restriction before resetting but you should have plenty of time to complete the workshop exercises. During the workshop, you'll discover the basics of the Container Storage Interface (CSI) on Kubernetes. Here is a glance at what is being covered: Discover StorageClasses Create and assign a PersistentVolumeClaim to a workload Resize a PersistentVolumeClaim Expose a raw block device to a Pod Create a VolumeSnapshot from a VolumeSnapshotClass Clone PersistentVolumeClaims from an existing claim or a VolumeSnapshot Declare an ephemeral inline volume for a Pod Annotate PersistentVolumeClaims to leverage StorageClass overrides Transparently provision an NFS server with the HPE CSI Driver and using the ReadWriteMany access mode When completed, please fill out the survey and let us know how we did! Happy Hacking!","title":"Interactive CSI Workshop"},{"location":"learn/csi_workshop/index.html#welcome_to_the_hack_shack","text":"The recorded CSI workshop available in the Video Gallery is now available on-demand, as a self-paced and interactive workshop hosted by the HPE Developer Community. All you have to do is register here . 
A string of e-mails will setup your own sandbox to perform the exercises at your own pace. The environment will have a time restriction before resetting but you should have plenty of time to complete the workshop exercises. During the workshop, you'll discover the basics of the Container Storage Interface (CSI) on Kubernetes. Here is a glance at what is being covered: Discover StorageClasses Create and assign a PersistentVolumeClaim to a workload Resize a PersistentVolumeClaim Expose a raw block device to a Pod Create a VolumeSnapshot from a VolumeSnapshotClass Clone PersistentVolumeClaims from an existing claim or a VolumeSnapshot Declare an ephemeral inline volume for a Pod Annotate PersistentVolumeClaims to leverage StorageClass overrides Transparently provision an NFS server with the HPE CSI Driver and using the ReadWriteMany access mode When completed, please fill out the survey and let us know how we did! Happy Hacking!","title":"Welcome to the Hack Shack!"},{"location":"learn/introduction_to_containers/index.html","text":"Interactive learning path \u00b6 The Storage Education team at HPE has put together an interactive learning path to introduce field engineers, architects and account executives to Docker and Kubernetes. The course material has an angle to help understand the role of storage in the world of containers. It's a great starting point if you're new to containers. Course 2-4 contains interactive labs in an immersive environment with downloadable lab guides that can be used outside of the lab environment. It's recommended to take the courses in order. Audience Course name Duration (estimated) 1 AE and SA Containers and market opportunity 20 minutes 2 AE and SA Introduction to containers 30 minutes 3 Technical AE and SA Introduction to Docker 45 minutes 4 Technical AE and SA Introduction to Kubernetes 45 minutes Important All courses require a HPE Passport account, either partner or employee.","title":"For HPE partners:
   Introduction to Containers"},{"location":"learn/introduction_to_containers/index.html#interactive_learning_path","text":"The Storage Education team at HPE has put together an interactive learning path to introduce field engineers, architects and account executives to Docker and Kubernetes. The course material has an angle to help understand the role of storage in the world of containers. It's a great starting point if you're new to containers. Course 2-4 contains interactive labs in an immersive environment with downloadable lab guides that can be used outside of the lab environment. It's recommended to take the courses in order. Audience Course name Duration (estimated) 1 AE and SA Containers and market opportunity 20 minutes 2 AE and SA Introduction to containers 30 minutes 3 Technical AE and SA Introduction to Docker 45 minutes 4 Technical AE and SA Introduction to Kubernetes 45 minutes Important All courses require a HPE Passport account, either partner or employee.","title":"Interactive learning path"},{"location":"learn/persistent_storage/index.html","text":"Overview \u00b6 This is a free learning resource from HPE which walks you through various exercises to get you familiar with Kubernetes and provisioning Persistent storage using HPE Nimble Storage and HPE Primera storage systems. This guide is by no means a comprehensive overview of the capabilities of Kubernetes but rather a getting started guide for individuals who wants to learn how to use Kubernetes with persistent storage. Overview Kubernetes cluster Control plane Nodes Kubernetes Objects Pods Persistent Volumes Namespaces Deployments Services Lab 1: Tour your cluster Overview of kubectl Syntax Getting to know your cluster: Lab 2: Deploy your first Pod (Stateless) Lab 3: Install the HPE CSI Driver for Kubernetes Installing the Helm chart Creating a Secret Creating a StorageClass Lab 4: Creating a Persistent Volume using HPE Storage Creating a PersistentVolumeClaim Lab 5: Deploying a Stateful Application using HPE Storage (WordPress) Optional Lab: Advanced Configuration Configuring additional storage backends Create a StorageClass with the new Secret Creating a PersistentVolumeClaim Cleanup (Optional) Kubernetes cluster \u00b6 In Kubernetes, nodes within a cluster pool together their resources (memory and CPU) to distribute workloads. A cluster is comprised of control plane and worker nodes that allow you to run your containerized workloads. Control plane \u00b6 The Kubernetes control plane is responsible for maintaining the desired state of your cluster. When you interact with Kubernetes, such as by using the kubectl command-line interface, you\u2019re communicating with your cluster\u2019s Kubernetes API services running on the control plane. Control plane refers to a collection of processes managing the cluster state. Nodes \u00b6 Kubernetes runs your workload by placing containers into Pods to run on Nodes . A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods . Kubernetes Objects \u00b6 Programs running on Kubernetes are packaged as containers which can run on Linux or Windows. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. 
Pods \u00b6 A Pod is the basic execution unit of a Kubernetes application\u2013the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod encapsulates an application\u2019s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. Persistent Volumes \u00b6 Because programs running on your cluster aren\u2019t guaranteed to run on a specific node, data can\u2019t be saved to any arbitrary place in the file system. If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be. To store data permanently, Kubernetes uses a PersistentVolume . Local, external storage via SAN arrays, or cloud drives can be attached to the cluster as a PersistentVolume . Namespaces \u00b6 Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Namespaces . Namespaces are intended for use in environments with many users spread across multiple teams, or projects. Namespaces are a way to divide cluster resources between multiple users. Deployments \u00b6 A Deployment provides declarative updates for Pods . You declare a desired state for your Pods in your Deployment and Kubernetes will manage it for you automatically. Services \u00b6 A Kubernetes Service object defines a policy for external clients to access an application within a cluster. By default, the container runtime uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for containers to communicate across nodes, there must be allocated ports on the machine\u2019s own IP address, which are then forwarded or proxied to the containers. Coordinating port allocations is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that Pods can communicate with other Pods , regardless of which host they land on. Kubernetes gives every Pod its own cluster-private IP address, through a Kubernetes Service object, so you do not need to explicitly create links between Pods or map container ports to host ports. This means that containers within a Pod can all reach each other\u2019s ports on localhost, and all Pods in a cluster can see each other without NAT. Lab 1: Tour your cluster \u00b6 All of this information presented here is taken from the official documentation found on kubernetes.io/docs . Overview of kubectl \u00b6 The Kubernetes command-line tool, kubectl , allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl operations, see Overview of kubectl on kubernetes.io. For more information on how to install and setup kubectl on Linux, Windows or MacOS, see Install and Set Up kubectl on kubernetes.io. Syntax \u00b6 Use the following syntax to run kubectl commands from your terminal window: kubectl [command] [TYPE] [NAME] [flags] where command , TYPE , NAME , and flags are: command : Specifies the operation that you want to perform on one or more resources, for example create, get, describe, delete. TYPE : Specifies the resource type. Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output: NAME : Specifies the name of the resource. 
Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods . Get object example command: kubectl get nodes kubectl get node Describe object example command: kubectl describe node Create object example command kubectl create -f The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- (press Enter) < paste the YAML > (CTRL-D for Linux) or (^D for Mac users) Kubernetes Cheat Sheet Find more available commands at Kubernetes Cheat Sheet on kubernetes.io. Getting to know your cluster: \u00b6 Let's run through some simple kubectl commands to get familiar with your cluster. First we need to open a terminal window, the following commands can be run from a Windows, Linux or Mac. In this guide, we will be using the Window Subsystem for Linux (WSL) which allows us to have a Linux terminal within Windows. To start a WSL terminal session, click the Ubuntu icon in the Windows taskbar. It will open a terminal window. We will be working within this terminal through out this lab. In order to communicate with the Kubernetes cluster, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag. You will need to request the kubeconfig file from your cluster administrator and copy the file to your local $HOME/.kube/ directory. You may need to create this directory. Once you have the kubeconfig file, you can view the config file: kubectl config view Check that kubectl and the config file are properly configured by getting the cluster state. kubectl cluster-info If you see a URL response, kubectl is correctly configured to access your cluster. The output is similar to this: $ kubectl cluster-info Kubernetes control plane is running at https://192.168.1.50:6443 KubeDNS is running at https://192.168.1.50:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. Now let's look at the nodes within our cluster. kubectl get nodes You should see output similar to below. As you can see, each node has a role control-plane or as worker nodes (). $ kubectl get nodes NAME STATUS ROLES AGE VERSION kube-group1 Ready control-plane,master 2d18h v1.21.5 ... You can list pods. kubectl get pods Quiz Did you see any Pods listed when you ran kubectl get pods ? Why? If you don't see any Pods listed, it is because there are no Pods deployed within the \"default\" Namespace . Now run, kubectl get pods --all-namespaces . Does it look any different? Pay attention to the first column, NAMESPACES . In our case, we are working in the \"default\" Namespace . Depending on the type of application and your user access level, applications can be deployed within one or more Namespaces . If you don't see the object (deployment, pod, services, etc) you are looking for, double-check the Namespace it was deployed under and use the -n flag to view objects in other Namespaces . Once complete, type \"Clear\" to clear your terminal window. Lab 2: Deploy your first Pod (Stateless) \u00b6 A Pod is a collection of containers sharing a network and mount namespace and is the basic unit of deployment in Kubernetes. All containers in a Pod are scheduled on the same node. 
In our first demo we will deploy a stateless application that has no persistent storage attached. Without persistent storage, any modifications done to the application will be lost if that application is stopped or deleted. Here is a sample NGINX webserver deployment. apiVersion: apps/v1 kind: Deployment metadata: labels: run: nginx name: first-nginx-pod spec: replicas: 1 selector: matchLabels: run: nginx-first-pod template: metadata: labels: run: nginx-first-pod spec: containers: - image: nginx name: nginx Open a WSL terminal session, if you don't have one open already. At the prompt, we will start by deploying the NGINX example above, by running: kubectl create -f https://scod.hpedev.io/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml We can see the Deployment was successfully created and the NGINX Pod is running. Note The Pod names will be unique to your deployment. $ kubectl get deployments.apps NAME READY UP-TO-DATE AVAILABLE AGE first-nginx-pod 1/1 1 1 38s $ kubectl get pods NAME READY STATUS RESTARTS AGE first-nginx-pod-8d7bb985-rrdv8 1/1 Running 0 10s Important In a Deployment , a Pod name is generated using the Deployment name and then a randomized hash (i.e. first-nginx-pod-8d7bb985-kql7t ) to ensure that each Pod has a unique name. During this lab exercise, make sure to reference the correct object names that are generated in each exercise. We can inspect the Pod further using the kubectl describe command. Note You can use tab completion to help with Kubernetes commands and objects. Start typing the first few letters of the command or Kubernetes object (i.e Pod ) name and hit TAB and it should autofill the name. kubectl describe pod The output should be similar to this. Note, the Pod name will be unique to your deployment. Name: first-nginx-pod-8d7bb985-rrdv8 Namespace: default Priority: 0 Node: kube-group1/10.90.200.11 Start Time: Mon, 01 Nov 2021 13:37:59 -0500 Labels: pod-template-hash=8d7bb985 run=nginx-first-pod Annotations: cni.projectcalico.org/podIP: 192.168.162.9/32 cni.projectcalico.org/podIPs: 192.168.162.9/32 Status: Running IP: 192.168.162.9 IPs: IP: 192.168.162.9 Controlled By: ReplicaSet/first-nginx-pod-8d7bb985 Containers: nginx: Container ID: docker://3610d71c054e6b8fdfffbf436511fda048731a456b9460ae768ae7db6e831398 Image: nginx Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 Port: Host Port: State: Running Started: Mon, 01 Nov 2021 13:38:06 -0500 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7sbw (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-w7sbw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m14s default-scheduler Successfully assigned default/first-nginx-pod-8d7bb985-rrdv8 to kube-group1 Normal Pulling 5m13s kubelet Pulling image \"nginx\" Normal Pulled 5m7s kubelet Successfully pulled image \"nginx\" in 5.95086952s Normal Created 5m7s kubelet Created container nginx Normal Started 5m7s kubelet Started container nginx Looking under the \"Events\" section is a great 
place to start when checking for issues or errors during Pod creation. At this stage, the NGINX application is only accessible from within the cluster. Use kubectl port-forward to expose the Pod temporarily outside of the cluster to your workstation. kubectl port-forward 80:80 The output should be similar to this: kubectl port-forward first-nginx-pod-8d7bb985-rrdv8 80:80 Forwarding from 127.0.0.1:80 -> 8080 Forwarding from [::1]:80 -> 8080 Note If you have something already running locally on port 80, modify the port-forward to an unused port (i.e. 5000:80). Port-forward is meant for temporarily exposing an application outside of a Kubernetes cluster. For a more permanent solution, look into Ingress Controllers. Finally, open a browser and go to http://127.0.0.1 and you should see the following. You have successfully deployed your first Kubernetes pod. With the Pod running, you can log in and explore the Pod . To do this, open a second terminal, by clicking on the WSL terminal icon again. The first terminal should have kubectl port-forward still running. Run: kubectl exec -it -- /bin/bash You can explore the Pod and run various commands. Some commands might not be available within the Pod . Why would that be? root@first-nginx-pod-8d7bb985-rrdv8:/# df -h Filesystem Size Used Avail Use% Mounted on overlay 46G 8.0G 38G 18% / tmpfs 64M 0 64M 0% /dev tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup /dev/mapper/centos-root 46G 8.0G 38G 18% /etc/hosts shm 64M 0 64M 0% /dev/shm tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount tmpfs 1.9G 0 1.9G 0% /proc/acpi tmpfs 1.9G 0 1.9G 0% /proc/scsi tmpfs 1.9G 0 1.9G 0% /sys/firmware While inside the container, you can also modify the webpage. echo \"

Hello from the HPE Storage Hands on Labs

 /usr/share">
\" > /usr/share/nginx/html/index.html Now switch back over to the browser and refresh the page (http://127.0.0.1), you should see the updated changes to the webpage. Once ready, switch back over to your second terminal, type exit to log out of the NGINX container and close that terminal. Back in your original terminal, use Ctrl+C to exit the port-forwarding. Since this is a stateless application, we will now demonstrate what happens if the NGINX Pod is lost. To do this, simply delete the Pod . kubectl delete pod Now run kubectl get pods to see that a new NGINX Pod has been created. Let's use kubectl port-forward again to look at the NGINX application. kubectl port-forward 80:80 Back in your browser, refresh the page (http://127.0.0.1) and you should see that the webpage has reverted back to its default state. Back in the terminal, use Ctrl+C to exit the port-forwarding and once ready, type clear to refresh your terminal. The NGINX application has reverted back to default because we didn't store the modifications we made to a location that would persist beyond the life of the container. There are many applications where persistence isn't critical (e.g. Google uses stateless containers for your browser web searches) as they perform computations that are either stored into an external database or passed to subsequent processes. As mission-critical workloads move into Kubernetes, the need for stateful containers is increasingly important. The following exercises will go through how to provision persistent storage to applications using the HPE CSI Driver for Kubernetes backed by HPE Primera or Nimble Storage. Lab 3: Install the HPE CSI Driver for Kubernetes \u00b6 To get started with the deployment of the HPE CSI Driver for Kubernetes, the CSI driver is deployed using industry standard means, either a Helm chart or an Operator. For this tutorial, we will be using Helm to deploy the HPE CSI driver for Kubernetes. The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub . There, you will find the configuration and installation instructions for the chart. Note Helm is the package manager for Kubernetes. Software is delivered in a format called a \"chart\". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file. Installing the Helm chart \u00b6 Open a WSL terminal session, if you don't have one open already. To install the chart with the name my-hpe-csi-driver , add the HPE CSI Driver for Kubernetes Helm repo. helm repo add hpe-storage https://hpe-storage.github.io/co-deployments helm repo update Install the latest chart. kubectl create ns hpe-storage helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage Wait a few minutes as the deployment finishes. Verify that everything is up and running correctly by listing out the Pods . kubectl get pods -n hpe-storage The output is similar to this: Note The Pod names will be unique to your deployment. $ kubectl get pods -n hpe-storage NAME READY STATUS RESTARTS AGE pod/hpe-csi-controller-6f9b8c6f7b-n7zcr 9/9 Running 0 7m41s pod/hpe-csi-node-npp59 2/2 Running 0 7m41s pod/nimble-csp-5f6cc8c744-rxgfk 1/1 Running 0 7m41s pod/primera3par-csp-7f78f498d5-4vq9r 1/1 Running 0 7m41s If all of the components show in the Running state, then the HPE CSI Driver for Kubernetes and the corresponding Container Storage Providers (CSP) for HPE Alletra, Primera and Nimble Storage have been successfully deployed. 
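 /usr/share">
As an optional sanity check beyond the lab steps, the Helm release and the registered CSI driver objects can also be inspected; these commands assume the release name and Namespace used above:

```
helm status my-hpe-csi-driver -n hpe-storage
kubectl get csidriver
kubectl get events -n hpe-storage --sort-by=.metadata.creationTimestamp
```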
Important With the HPE CSI Driver deployed, the rest of this guide is designed to demonstrate the usage of the CSI driver with HPE Primera or Nimble Storage. You will need to choose which storage system (HPE Primera or Nimble Storage) to use for the rest of the exercises. While the HPE CSI Driver supports connectivity to multiple backends, configurating multiple backends is outside of the scope of this lab guide. Creating a Secret \u00b6 Once the HPE CSI Driver has been deployed, a Secret needs to be created in order for the CSI driver to communicate to the HPE Primera or Nimble Storage. This Secret , which contains the storage system IP and credentials, is used by the CSI driver sidecars within the StorageClass to authenticate to a specific backend for various CSI operations. For more information, see adding an HPE storage backend Here is an example Secret . apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: password: Download and modify, using the text editor of your choice, the Secret file with the backend IP per your environment. Nimble Storage wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/nimble-secret.yaml HPE Primera wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/primera-secret.yaml Save the file and create the Secret within the cluster. Nimble Storage kubectl create -f nimble-secret.yaml HPE Primera kubectl create -f primera-secret.yaml The Secret should now be available in the \"hpe-storage\" Namespace : kubectl -n hpe-storage get secret/custom-secret NAME TYPE DATA AGE custom-secret Opaque 5 1m If you made a mistake when creating the Secret , simply delete the object ( kubectl -n hpe-storage delete secret/custom-secret ) and repeat the steps above. Creating a StorageClass \u00b6 Now we will create a StorageClass that will be used in the following exercises. A StorageClass (SC) specifies which storage provisioner to use (in our case the HPE CSI Driver) and the volume parameters (such as Protection Templates, Performance Policies, CPG, etc.) for the volumes that we want to create which can be used to differentiate between storage levels and usages. This concept is sometimes called \u201cprofiles\u201d in other storage systems. A cluster can have multiple StorageClasses allowing users to create storage claims tailored for their specific application requirements. We will start by creating a StorageClass called hpe-standard . We will use the custom-secret created in the previous step and specify the hpe-storage namespace where the CSI driver was deployed. Here is an example StorageClasses for HPE Primera and Nimble Storage systems and some of the available volume parameters that can be defined. See the respective CSP for more elaborate examples. 
HPE Nimble Storage apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-standard annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: custom-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage performancePolicy: \"SQL Server\" description: \"Volume from HPE CSI Driver\" accessProtocol: iscsi limitIops: \"76800\" allowOverrides: description,limitIops,performancePolicy allowVolumeExpansion: true HPE Primera apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-standard annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: custom-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage cpg: SSD_r6 provisioningType: tpvv accessProtocol: iscsi allowOverrides: cpg,provisioningType allowVolumeExpansion: true Create the StorageClass within the cluster Nimble Storage kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/nimble-storageclass.yaml Primera kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/primera-storageclass.yaml We can verify the StorageClass is now available. kubectl get sc NAME PROVISIONER AGE hpe-standard (default) csi.hpe.com 2m Note You can create multiple StorageClasses to match the storage requirements of your applications. We set hpe-standard StorageClass as default using the annotation storageclass.kubernetes.io/is-default-class: \"true\" . There can only be one default StorageClass per cluster, for any additional StorageClasses set this to false . To learn more about configuring a default StorageClass , see Default StorageClass on kubernetes.io. Lab 4: Creating a Persistent Volume using HPE Storage \u00b6 With the HPE CSI Driver for Kubernetes deployed and a StorageClass available, we can now provision persistent volumes. A PersistentVolumeClaim (PVC) is a request for storage by a user. Claims can request storage of a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany). The accessMode will be dependent on the type of storage system and the application requirements. Block storage like HPE Primera and Nimble Storage, provision volumes using ReadWriteOnce access mode where the volume can only be mounted to a single node within the cluster at a time. 
Any applications running on that node can access that volume. Applications deployed across multiple nodes within a cluster that require shared access ( ReadWriteMany ) to the same PersistentVolume will need to use NFS or a distribute storage system such as MapR, Gluster or Ceph. A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes . Creating a PersistentVolumeClaim \u00b6 With a StorageClass available, we can request an amount of storage for our application using a PersistentVolumeClaim . Here is a sample PVC . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 50Gi Note We don't have a StorageClass (SC) explicitly defined within this PVC therefore it will use the default StorageClass . You can use spec.storageClassName to override the default SC with another one available to the cluster. Create the PersistentVolumeClaim . kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-pvc.yaml We can see the my-pvc PersistentVolumeClaim was created. kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-pvc Bound pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 50Gi RWO hpe-standard 72m Note The Persistent Volume name is a randomly generated name by Kubernetes. For consistent naming for your stateful applications, check out StatefulSet deployment model. These names can be used to track the volume back to the storage system. It is important to note that HPE Primera has a 30 character limit on volume names therefore the name will be truncated. For example: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 will be truncated to pvc-70d5caf8-7558-40e6-a8b7-77d on an HPE Primera system. We can inspect the PVC further for additional information including event logs for troubleshooting. kubectl describe pvc my-pvc Check the Events section to see if there were any issues during creation. The output is similar to this: $ kubectl describe pvc my-pvc Name: my-pvc Namespace: default StorageClass: hpe-standard Status: Bound Volume: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 Labels: Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com Finalizers: [kubernetes.io/pvc-protection] Capacity: 50Gi Access Modes: RWO VolumeMode: Filesystem Mounted By: Events: We can also inspect the PersistentVolume (PV) in a similar manner. Note, the volume name will be unique to your deployment. 
kubectl describe pv The output is similar to this: $ kubectl describe pv pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 Name: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 Labels: Annotations: pv.kubernetes.io/provisioned-by: csi.hpe.com Finalizers: [kubernetes.io/pv-protection] StorageClass: hpe-standard Status: Bound Claim: default/my-pvc Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 50Gi Node Affinity: Message: Source: Type: CSI (a Container Storage Interface (CSI) volume source) Driver: csi.hpe.com VolumeHandle: 063aba3d50ec99d866000000000000000000000001 ReadOnly: false VolumeAttributes: accessProtocol=iscsi allowOverrides=description,limitIops,performancePolicy description=Volume from HPE CSI Driver fsType=xfs limitIops=76800 performancePolicy=SQL Server storage.kubernetes.io/csiProvisionerIdentity=1583271972595-8081-csi.hpe.com volumeAccessMode=mount Events: With the describe command, you can see the volume parameters used to create this volume. In this case, Nimble Storage parameters performancePolicy , limitIops , etc. Important If the PVC is stuck in Pending state, double check the Secret and Namespace are correct within the StorageClass (sc) and that the volume parameters are valid. If necessary delete the object (sc or pvc) ( kubectl delete ) and repeat the steps above. Let's recap what we have learned. We created a default StorageClass for our volumes. We created a PVC that created a volume from the storageClass. We can use kubectl get to list the StorageClass , PVC and PV . We can use kubectl describe to get details on the StorageClass , PVC or PV At this point, we have validated the deployment of the HPE CSI Driver and are ready to deploy an application with persistent storage. Lab 5: Deploying a Stateful Application using HPE Storage (WordPress) \u00b6 To begin, we will create two PersistentVolumes for the WordPress application using the default hpe-standard StorageClass we created previously. If you don't have the hpe-standard StorageClass available, please refer to the StorageClass section for instructions on creating a StorageClass . Create a PersistentVolumeClaim for the MariaDB database that will used by WordPress. kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml Next let's make another volume for the WordPress application. kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-wordpress-pvc.yaml Now verify the PersistentVolumes were created successfully. The output should be similar to the following. Note, the volume names will be unique to your deployment. kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-my-wordpress-mariadb-0 Bound pvc-1abdb7d7-374e-45b3-8fa1-534131ec7ec6 50Gi RWO hpe-standard 1m my-wordpress Bound pvc-ff6dc8fd-2b14-4726-b608-be8b27485603 20Gi RWO hpe-standard 1m The above output means that the HPE CSI Driver has successfully provisioned two volumes based upon the default hpe-standard StorageClass . At this stage, the volumes are not attached (exported) to any nodes yet. They will only be attached (exported) to a node once a scheduled workload requests the PersistentVolumeClaims . We will use Helm again to deploy WordPress using the PersistentVolumeClaims we just created. When WordPress is deployed, the volumes will be attached, formatted and mounted. The first step is to add the WordPress chart to Helm. The output should be similar to below. 
helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update helm search repo bitnami/wordpress NAME CHART VERSION APP VERSION DESCRIPTION bitnami/wordpress 11.0.13 5.7.2 Web publishing platform for building blogs and ... Next, deploy WordPress by setting the deployment parameter persistence.existingClaim= to the PVC my-wordpress created in the previous step. helm install my-wordpress bitnami/wordpress --version 9.2.1 --set service.type=ClusterIP,wordpressUsername=admin,wordpressPassword=adminpassword,mariadb.mariadbRootPassword=secretpassword,persistence.existingClaim=my-wordpress,allowEmptyPassword=false Check to verify that WordPress and MariaDB were deployed and are in the Running state. This may take a few minutes. Note The Pod names will be unique to your deployment. kubectl get pods NAME READY STATUS RESTARTS AGE my-wordpress-69b7976c85-9mfjv 1/1 Running 0 2m my-wordpress-mariadb-0 1/1 Running 0 2m Finally, take a look at the WordPress site. Again, we can use kubectl port-forward to access the WordPress application and verify everything is working correctly. kubectl port-forward svc/my-wordpress 80:80 Note If you have something already running locally on port 80, modify the port-forward to an unused port (i.e. 5000:80). Open a browser on your workstation to http://127.0.0.1 and you should see, \"Hello World!\" . Access the admin console at: http://127.0.0.1/admin using the \"admin/adminpassword\" we specified when deploying the Helm Chart. Create a new blog post so you have data stored in the WordPress application. Happy Blogging! Once ready, hit \" Ctrl+C \" in your terminal to stop the port-forward . Verify the WordPress application is using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims . kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}' With the WordPress application using persistent storage for the database and the application data, in the event of a crash of the WordPress application, the PVC will be remounted to the new Pod . Delete the WordPress Pod . kubectl delete pod For example. $ kubectl delete pod my-wordpress-69b7976c85-9mfjv pod \"my-wordpress-69b7976c85-9mfjv\" deleted Now run kubectl get pods and you should see the WordPress Pod recreating itself with a new name. This may take a few minutes. Output should be similar to the following as the WordPress container is recreating. $ kubectl get pods NAME READY STATUS RESTARTS AGE my-wordpress-mariadb-0 1/1 Running 1 10m my-wordpress-7856df6756-m2nw8 0/1 ContainerCreating 0 33s Once the WordPress Pod is in the Ready state, we can verify that the WordPress application is still using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims . kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}' And finally, run kubectl port-forward again to see that the changes made to the WordPress application survived deleting the application Pod . kubectl port-forward svc/my-wordpress 80:80 Open a browser on your workstation to http://127.0.0.1 and you should see your WordPress site running. This completes the tutorial on using the HPE CSI Driver with HPE storage to create Persistent Volumes within Kubernetes. This is just the beginning of the capabilities of the HPE Storage integrations within Kubernetes. We recommend exploring SCOD further and the specific HPE Storage CSP ( Nimble , Primera, and 3PAR ) to learn more. 
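Because the hpe-standard StorageClass was created with allowVolumeExpansion: true, a natural optional follow-on (a sketch, not part of the lab steps) is to expand the WordPress claim in place; the new size below is arbitrary:

```
kubectl patch pvc my-wordpress -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'
kubectl get pvc my-wordpress   # the claim should eventually report the expanded capacity
```

Depending on the driver and Kubernetes version, the filesystem expansion may complete online or may require the consuming Pod to restart.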
Optional Lab: Advanced Configuration \u00b6 Configuring additional storage backends \u00b6 It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass and Secret API objects to represent an environment with multiple systems. To view the current Secrets in the hpe-storage Namespace (assuming default names): kubectl -n hpe-storage get secret NAME TYPE DATA AGE custom-secret Opaque 5 10m This Secret is used by the CSI sidecars in the StorageClass to authenticate to a specific backend for CSI operations. In order to add a new Secret or manage access to multiple backends, additional Secrets will need to be created per backend. In the previous steps, if you connected to Nimble Storage, create a new Secret for the Primera array or if you connected to Primera array above then create a Secret for the Nimble Storage. Secret Requirements Each Secret name must be unique. servicePort should be set to 8080 . Using your text editor of choice, create a new Secret , specify the name, Namespace , backend username, backend password and the backend IP address to be used by the CSP and save it as gold-secret.yaml . HPE Nimble Storage apiVersion: v1 kind: Secret metadata: name: gold-secret namespace: hpe-storage stringData: serviceName: nimble-csp-svc servicePort: \"8080\" backend: 192.168.1.2 username: admin password: admin HPE Primera apiVersion: v1 kind: Secret metadata: name: gold-secret namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: 3paradm password: 3pardata Create the Secret using kubectl : kubectl create -f gold-secret.yaml You should now see the Secret in the \"hpe-storage\" Namespace : kubectl -n hpe-storage get secret NAME TYPE DATA AGE gold-secret Opaque 5 1m custom-secret Opaque 5 15m Create a StorageClass with the new Secret \u00b6 To use the new gold-secret , create a new StorageClass using the Secret and the necessary StorageClass parameters. Please see the requirements section of the respective CSP . We will start by creating a StorageClass called hpe-gold . We will use the gold-secret created in the previous step and specify the hpe-storage Namespace where the CSI driver was deployed. Note Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created. 
HPE Nimble Storage apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-gold provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: gold-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: gold-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: gold-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: gold-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: gold-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage performancePolicy: \"SQL Server\" description: \"Volume from HPE CSI Driver\" accessProtocol: iscsi limitIops: \"76800\" allowOverrides: description,limitIops,performancePolicy allowVolumeExpansion: true HPE Primera apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-gold provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: gold-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: gold-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: gold-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: gold-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: gold-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage cpg: SSD_r6 provisioningType: tpvv accessProtocol: iscsi allowOverrides: cpg,provisioningType allowVolumeExpansion: true We can verify the StorageClass is now available. kubectl get sc NAME PROVISIONER AGE hpe-standard (default) csi.hpe.com 15m hpe-gold csi.hpe.com 1m Note Don't forget to call out the StorageClass explicitly when creating PVCs from non-default StorageClasses . Creating a PersistentVolumeClaim \u00b6 With a StorageClass available, we can request an amount of storage for our application using a PersistentVolumeClaim . Using your text editor of choice, create a new PVC and save it as gold-pvc.yaml . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gold-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 50Gi storageClassName: hpe-gold Create the PersistentVolumeClaim . kubectl create -f gold-pvc.yaml We can see the my-pvc PersistentVolumeClaim was created. kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-pvc Bound pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 50Gi RWO hpe-standard 72m gold-pvc Bound pvc-7a74d656-0b14-42a2-9437-e374a5d3bd68 50Gi RWO hpe-gold 1m You can see that the new PVC is using the new StorageClass which is backed by the additional storage backend allowing you to add additional flexibility to your containerized workloads and match the persistent storage requirements to the application. Cleanup (Optional) \u00b6 As others will be using this lab at a later time, we can clean up the objects that were deployed during this lab exercise. Note These steps may take a few minutes to complete. Please be patient and don't cancel out the process. Remove WordPress & NGINX deployments. helm uninstall my-wordpress && kubectl delete all --all Delete the PersistentVolumeClaims and related objects. 
kubectl delete pvc --all && kubectl delete sc --all Remove the HPE CSI Driver for Kubernetes. helm uninstall my-hpe-csi-driver -n hpe-storage It takes a couple minutes to cleanup the objects from the CSI driver. You can check the status: watch kubectl get all -n hpe-storage Once everything is removed, Ctrl+C to exit and finally you can remove the Namespace . kubectl delete ns hpe-storage","title":"Persistent Storage for Kubernetes"},{"location":"learn/persistent_storage/index.html#overview","text":"This is a free learning resource from HPE which walks you through various exercises to get you familiar with Kubernetes and provisioning Persistent storage using HPE Nimble Storage and HPE Primera storage systems. This guide is by no means a comprehensive overview of the capabilities of Kubernetes but rather a getting started guide for individuals who wants to learn how to use Kubernetes with persistent storage. Overview Kubernetes cluster Control plane Nodes Kubernetes Objects Pods Persistent Volumes Namespaces Deployments Services Lab 1: Tour your cluster Overview of kubectl Syntax Getting to know your cluster: Lab 2: Deploy your first Pod (Stateless) Lab 3: Install the HPE CSI Driver for Kubernetes Installing the Helm chart Creating a Secret Creating a StorageClass Lab 4: Creating a Persistent Volume using HPE Storage Creating a PersistentVolumeClaim Lab 5: Deploying a Stateful Application using HPE Storage (WordPress) Optional Lab: Advanced Configuration Configuring additional storage backends Create a StorageClass with the new Secret Creating a PersistentVolumeClaim Cleanup (Optional)","title":"Overview"},{"location":"learn/persistent_storage/index.html#kubernetes_cluster","text":"In Kubernetes, nodes within a cluster pool together their resources (memory and CPU) to distribute workloads. A cluster is comprised of control plane and worker nodes that allow you to run your containerized workloads.","title":"Kubernetes cluster"},{"location":"learn/persistent_storage/index.html#control_plane","text":"The Kubernetes control plane is responsible for maintaining the desired state of your cluster. When you interact with Kubernetes, such as by using the kubectl command-line interface, you\u2019re communicating with your cluster\u2019s Kubernetes API services running on the control plane. Control plane refers to a collection of processes managing the cluster state.","title":"Control plane"},{"location":"learn/persistent_storage/index.html#nodes","text":"Kubernetes runs your workload by placing containers into Pods to run on Nodes . A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods .","title":"Nodes"},{"location":"learn/persistent_storage/index.html#kubernetes_objects","text":"Programs running on Kubernetes are packaged as containers which can run on Linux or Windows. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.","title":"Kubernetes Objects"},{"location":"learn/persistent_storage/index.html#pods","text":"A Pod is the basic execution unit of a Kubernetes application\u2013the smallest and simplest unit in the Kubernetes object model that you create or deploy. 
A Pod encapsulates an application\u2019s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.","title":"Pods"},{"location":"learn/persistent_storage/index.html#persistent_volumes","text":"Because programs running on your cluster aren\u2019t guaranteed to run on a specific node, data can\u2019t be saved to any arbitrary place in the file system. If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be. To store data permanently, Kubernetes uses a PersistentVolume . Local, external storage via SAN arrays, or cloud drives can be attached to the cluster as a PersistentVolume .","title":"Persistent Volumes"},{"location":"learn/persistent_storage/index.html#namespaces","text":"Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Namespaces . Namespaces are intended for use in environments with many users spread across multiple teams, or projects. Namespaces are a way to divide cluster resources between multiple users.","title":"Namespaces"},{"location":"learn/persistent_storage/index.html#deployments","text":"A Deployment provides declarative updates for Pods . You declare a desired state for your Pods in your Deployment and Kubernetes will manage it for you automatically.","title":"Deployments"},{"location":"learn/persistent_storage/index.html#services","text":"A Kubernetes Service object defines a policy for external clients to access an application within a cluster. By default, the container runtime uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for containers to communicate across nodes, there must be allocated ports on the machine\u2019s own IP address, which are then forwarded or proxied to the containers. Coordinating port allocations is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that Pods can communicate with other Pods , regardless of which host they land on. Kubernetes gives every Pod its own cluster-private IP address, through a Kubernetes Service object, so you do not need to explicitly create links between Pods or map container ports to host ports. This means that containers within a Pod can all reach each other\u2019s ports on localhost, and all Pods in a cluster can see each other without NAT.","title":"Services"},{"location":"learn/persistent_storage/index.html#lab_1_tour_your_cluster","text":"All of this information presented here is taken from the official documentation found on kubernetes.io/docs .","title":"Lab 1: Tour your cluster"},{"location":"learn/persistent_storage/index.html#overview_of_kubectl","text":"The Kubernetes command-line tool, kubectl , allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl operations, see Overview of kubectl on kubernetes.io. 
For more information on how to install and set up kubectl on Linux, Windows or MacOS, see Install and Set Up kubectl on kubernetes.io.","title":"Overview of kubectl"},{"location":"learn/persistent_storage/index.html#syntax","text":"Use the following syntax to run kubectl commands from your terminal window: kubectl [command] [TYPE] [NAME] [flags] where command , TYPE , NAME , and flags are: command : Specifies the operation that you want to perform on one or more resources, for example create, get, describe, delete. TYPE : Specifies the resource type. Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output: NAME : Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods . Get object example command: kubectl get nodes kubectl get node Describe object example command: kubectl describe node Create object example command: kubectl create -f The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- (press Enter) < paste the YAML > (CTRL-D for Linux) or (^D for Mac users) Kubernetes Cheat Sheet Find more available commands at Kubernetes Cheat Sheet on kubernetes.io.","title":"Syntax"},{"location":"learn/persistent_storage/index.html#getting_to_know_your_cluster","text":"Let's run through some simple kubectl commands to get familiar with your cluster. First we need to open a terminal window; the following commands can be run from Windows, Linux or Mac. In this guide, we will be using the Windows Subsystem for Linux (WSL) which allows us to have a Linux terminal within Windows. To start a WSL terminal session, click the Ubuntu icon in the Windows taskbar. It will open a terminal window. We will be working within this terminal throughout this lab. In order to communicate with the Kubernetes cluster, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag. You will need to request the kubeconfig file from your cluster administrator and copy the file to your local $HOME/.kube/ directory. You may need to create this directory. Once you have the kubeconfig file, you can view the config file: kubectl config view Check that kubectl and the config file are properly configured by getting the cluster state. kubectl cluster-info If you see a URL response, kubectl is correctly configured to access your cluster. The output is similar to this: $ kubectl cluster-info Kubernetes control plane is running at https://192.168.1.50:6443 KubeDNS is running at https://192.168.1.50:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. Now let's look at the nodes within our cluster. kubectl get nodes You should see output similar to below. As you can see, each node has a role, either control-plane or worker. $ kubectl get nodes NAME STATUS ROLES AGE VERSION kube-group1 Ready control-plane,master 2d18h v1.21.5 ... You can list pods. kubectl get pods Quiz Did you see any Pods listed when you ran kubectl get pods ? Why? If you don't see any Pods listed, it is because there are no Pods deployed within the \"default\" Namespace. 
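As an aside, you can list the Pods of a specific Namespace by passing the -n flag to kubectl get pods . The kube-system Namespace, for example, always contains the cluster's system Pods , so it makes a handy sanity check (the exact Pod names will vary from cluster to cluster):
kubectl get pods -n kube-system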
Now run kubectl get pods --all-namespaces . Does it look any different? Pay attention to the first column, NAMESPACE . In our case, we are working in the \"default\" Namespace . Depending on the type of application and your user access level, applications can be deployed within one or more Namespaces . If you don't see the object (deployment, pod, service, etc.) you are looking for, double-check the Namespace it was deployed under and use the -n flag to view objects in other Namespaces . Once complete, type clear to clear your terminal window.","title":"Getting to know your cluster:"},{"location":"learn/persistent_storage/index.html#lab_2_deploy_your_first_pod_stateless","text":"A Pod is a collection of containers sharing a network and mount namespace and is the basic unit of deployment in Kubernetes. All containers in a Pod are scheduled on the same node. In our first demo we will deploy a stateless application that has no persistent storage attached. Without persistent storage, any modifications done to the application will be lost if that application is stopped or deleted. Here is a sample NGINX webserver deployment. apiVersion: apps/v1 kind: Deployment metadata: labels: run: nginx name: first-nginx-pod spec: replicas: 1 selector: matchLabels: run: nginx-first-pod template: metadata: labels: run: nginx-first-pod spec: containers: - image: nginx name: nginx Open a WSL terminal session, if you don't have one open already. At the prompt, we will start by deploying the NGINX example above, by running: kubectl create -f https://scod.hpedev.io/learn/persistent_storage/yaml/nginx-stateless-deployment.yaml We can see the Deployment was successfully created and the NGINX Pod is running. Note The Pod names will be unique to your deployment. $ kubectl get deployments.apps NAME READY UP-TO-DATE AVAILABLE AGE first-nginx-pod 1/1 1 1 38s $ kubectl get pods NAME READY STATUS RESTARTS AGE first-nginx-pod-8d7bb985-rrdv8 1/1 Running 0 10s Important In a Deployment , a Pod name is generated using the Deployment name and then a randomized hash (i.e. first-nginx-pod-8d7bb985-kql7t ) to ensure that each Pod has a unique name. During this lab exercise, make sure to reference the correct object names that are generated in each exercise. We can inspect the Pod further using the kubectl describe command. Note You can use tab completion to help with Kubernetes commands and objects. Start typing the first few letters of the command or Kubernetes object (i.e. Pod ) name and hit TAB and it should autofill the name. kubectl describe pod The output should be similar to this. Note, the Pod name will be unique to your deployment. 
Name: first-nginx-pod-8d7bb985-rrdv8 Namespace: default Priority: 0 Node: kube-group1/10.90.200.11 Start Time: Mon, 01 Nov 2021 13:37:59 -0500 Labels: pod-template-hash=8d7bb985 run=nginx-first-pod Annotations: cni.projectcalico.org/podIP: 192.168.162.9/32 cni.projectcalico.org/podIPs: 192.168.162.9/32 Status: Running IP: 192.168.162.9 IPs: IP: 192.168.162.9 Controlled By: ReplicaSet/first-nginx-pod-8d7bb985 Containers: nginx: Container ID: docker://3610d71c054e6b8fdfffbf436511fda048731a456b9460ae768ae7db6e831398 Image: nginx Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 Port: Host Port: State: Running Started: Mon, 01 Nov 2021 13:38:06 -0500 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7sbw (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-w7sbw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m14s default-scheduler Successfully assigned default/first-nginx-pod-8d7bb985-rrdv8 to kube-group1 Normal Pulling 5m13s kubelet Pulling image \"nginx\" Normal Pulled 5m7s kubelet Successfully pulled image \"nginx\" in 5.95086952s Normal Created 5m7s kubelet Created container nginx Normal Started 5m7s kubelet Started container nginx Looking under the \"Events\" section is a great place to start when checking for issues or errors during Pod creation. At this stage, the NGINX application is only accessible from within the cluster. Use kubectl port-forward to expose the Pod temporarily outside of the cluster to your workstation. kubectl port-forward 80:80 The output should be similar to this: kubectl port-forward first-nginx-pod-8d7bb985-rrdv8 80:80 Forwarding from 127.0.0.1:80 -> 8080 Forwarding from [::1]:80 -> 8080 Note If you have something already running locally on port 80, modify the port-forward to an unused port (i.e. 5000:80). Port-forward is meant for temporarily exposing an application outside of a Kubernetes cluster. For a more permanent solution, look into Ingress Controllers. Finally, open a browser and go to http://127.0.0.1 and you should see the following. You have successfully deployed your first Kubernetes pod. With the Pod running, you can log in and explore the Pod . To do this, open a second terminal, by clicking on the WSL terminal icon again. The first terminal should have kubectl port-forward still running. Run: kubectl exec -it -- /bin/bash You can explore the Pod and run various commands. Some commands might not be available within the Pod . Why would that be? root@first-nginx-pod-8d7bb985-rrdv8:/# df -h Filesystem Size Used Avail Use% Mounted on overlay 46G 8.0G 38G 18% / tmpfs 64M 0 64M 0% /dev tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup /dev/mapper/centos-root 46G 8.0G 38G 18% /etc/hosts shm 64M 0 64M 0% /dev/shm tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount tmpfs 1.9G 0 1.9G 0% /proc/acpi tmpfs 1.9G 0 1.9G 0% /proc/scsi tmpfs 1.9G 0 1.9G 0% /sys/firmware While inside the container, you can also modify the webpage. echo \"

Hello from the HPE Storage Hands on Labs

 /usr/share/nginx/html/index.html">
\" > /usr/share/nginx/html/index.html Now switch back over to the browser and refresh the page (http://127.0.0.1), you should see the updated changes to the webpage. Once ready, switch back over to your second terminal, type exit to log out of the NGINX container and close that terminal. Back in your original terminal, use Ctrl+C to exit the port-forwarding. Since this is a stateless application, we will now demonstrate what happens if the NGINX Pod is lost. To do this, simply delete the Pod . kubectl delete pod Now run kubectl get pods to see that a new NGINX Pod has been created. Let's use kubectl port-forward again to look at the NGINX application. kubectl port-forward 80:80 Back in your browser, refresh the page (http://127.0.0.1) and you should see that the webpage has reverted back to its default state. Back in the terminal, use Ctrl+C to exit the port-forwarding and once ready, type clear to refresh your terminal. The NGINX application has reverted back to default because we didn't store the modifications we made to a location that would persist beyond the life of the container. There are many applications where persistence isn't critical (i.e. Google uses stateless containers for your browser web searches) as they perform computations that are either stored into an external database or passed to subsequent processes. As mission-critical workloads move into Kubernetes, the need for stateful containers is increasingly important. The following exercises will go through how to provision persistent storage to applications using the HPE CSI Driver for Kubernetes backed by HPE Primera or Nimble Storage.","title":"Lab 2: Deploy your first Pod (Stateless)"},{"location":"learn/persistent_storage/index.html#lab_3_install_the_hpe_csi_driver_for_kubernetes","text":"To get started, the HPE CSI Driver for Kubernetes is deployed using industry standard means, either a Helm chart or an Operator. For this tutorial, we will be using Helm to deploy the HPE CSI Driver for Kubernetes. The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on Artifact Hub . There, you will find the configuration and installation instructions for the chart. Note Helm is the package manager for Kubernetes. Software is delivered in a format called a \"chart\". Helm is a standalone CLI that interacts with the Kubernetes API server using your KUBECONFIG file.","title":"Lab 3: Install the HPE CSI Driver for Kubernetes"},{"location":"learn/persistent_storage/index.html#installing_the_helm_chart","text":"Open a WSL terminal session, if you don't have one open already. To install the chart with the name my-hpe-csi-driver , add the HPE CSI Driver for Kubernetes Helm repo. helm repo add hpe-storage https://hpe-storage.github.io/co-deployments helm repo update Install the latest chart. kubectl create ns hpe-storage helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage Wait a few minutes as the deployment finishes. Verify that everything is up and running correctly by listing out the Pods . kubectl get pods -n hpe-storage The output is similar to this: Note The Pod names will be unique to your deployment. 
$ kubectl get pods -n hpe-storage NAME READY STATUS RESTARTS AGE pod/hpe-csi-controller-6f9b8c6f7b-n7zcr 9/9 Running 0 7m41s pod/hpe-csi-node-npp59 2/2 Running 0 7m41s pod/nimble-csp-5f6cc8c744-rxgfk 1/1 Running 0 7m41s pod/primera3par-csp-7f78f498d5-4vq9r 1/1 Running 0 7m41s If all of the components are in the Running state, then the HPE CSI Driver for Kubernetes and the corresponding Container Storage Providers (CSP) for HPE Alletra, Primera and Nimble Storage have been successfully deployed. Important With the HPE CSI Driver deployed, the rest of this guide is designed to demonstrate the usage of the CSI driver with HPE Primera or Nimble Storage. You will need to choose which storage system (HPE Primera or Nimble Storage) to use for the rest of the exercises. While the HPE CSI Driver supports connectivity to multiple backends, configuring multiple backends is outside of the scope of this lab guide.","title":"Installing the Helm chart"},{"location":"learn/persistent_storage/index.html#creating_a_secret","text":"Once the HPE CSI Driver has been deployed, a Secret needs to be created in order for the CSI driver to communicate with the HPE Primera or Nimble Storage. This Secret , which contains the storage system IP and credentials, is used by the CSI driver sidecars within the StorageClass to authenticate to a specific backend for various CSI operations. For more information, see adding an HPE storage backend . Here is an example Secret . apiVersion: v1 kind: Secret metadata: name: custom-secret namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: password: Download and modify, using the text editor of your choice, the Secret file with the backend IP per your environment. Nimble Storage wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/nimble-secret.yaml HPE Primera wget https://raw.githubusercontent.com/hpe-storage/scod/master/docs/learn/persistent_storage/yaml/primera-secret.yaml Save the file and create the Secret within the cluster. Nimble Storage kubectl create -f nimble-secret.yaml HPE Primera kubectl create -f primera-secret.yaml The Secret should now be available in the \"hpe-storage\" Namespace : kubectl -n hpe-storage get secret/custom-secret NAME TYPE DATA AGE custom-secret Opaque 5 1m If you made a mistake when creating the Secret , simply delete the object ( kubectl -n hpe-storage delete secret/custom-secret ) and repeat the steps above.","title":"Creating a Secret"},{"location":"learn/persistent_storage/index.html#creating_a_storageclass","text":"Now we will create a StorageClass that will be used in the following exercises. A StorageClass (SC) specifies which storage provisioner to use (in our case the HPE CSI Driver) and the volume parameters (such as Protection Templates, Performance Policies, CPG, etc.) for the volumes that we want to create, which can be used to differentiate between storage levels and usages. This concept is sometimes called \u201cprofiles\u201d in other storage systems. A cluster can have multiple StorageClasses allowing users to create storage claims tailored for their specific application requirements. We will start by creating a StorageClass called hpe-standard . We will use the custom-secret created in the previous step and specify the hpe-storage namespace where the CSI driver was deployed. Here are example StorageClasses for HPE Primera and Nimble Storage systems and some of the available volume parameters that can be defined. 
See the respective CSP for more elaborate examples. HPE Nimble Storage apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-standard annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: custom-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage performancePolicy: \"SQL Server\" description: \"Volume from HPE CSI Driver\" accessProtocol: iscsi limitIops: \"76800\" allowOverrides: description,limitIops,performancePolicy allowVolumeExpansion: true HPE Primera apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-standard annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: custom-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: custom-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: custom-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: custom-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: custom-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage cpg: SSD_r6 provisioningType: tpvv accessProtocol: iscsi allowOverrides: cpg,provisioningType allowVolumeExpansion: true Create the StorageClass within the cluster Nimble Storage kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/nimble-storageclass.yaml Primera kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/primera-storageclass.yaml We can verify the StorageClass is now available. kubectl get sc NAME PROVISIONER AGE hpe-standard (default) csi.hpe.com 2m Note You can create multiple StorageClasses to match the storage requirements of your applications. We set hpe-standard StorageClass as default using the annotation storageclass.kubernetes.io/is-default-class: \"true\" . There can only be one default StorageClass per cluster, for any additional StorageClasses set this to false . To learn more about configuring a default StorageClass , see Default StorageClass on kubernetes.io.","title":"Creating a StorageClass"},{"location":"learn/persistent_storage/index.html#lab_4_creating_a_persistent_volume_using_hpe_storage","text":"With the HPE CSI Driver for Kubernetes deployed and a StorageClass available, we can now provision persistent volumes. A PersistentVolumeClaim (PVC) is a request for storage by a user. Claims can request storage of a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany). The accessMode will be dependent on the type of storage system and the application requirements. 
Block storage, like HPE Primera and Nimble Storage, provisions volumes using the ReadWriteOnce access mode where the volume can only be mounted to a single node within the cluster at a time. Any applications running on that node can access that volume. Applications deployed across multiple nodes within a cluster that require shared access ( ReadWriteMany ) to the same PersistentVolume will need to use NFS or a distributed storage system such as MapR, Gluster or Ceph. A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes .","title":"Lab 4: Creating a Persistent Volume using HPE Storage"},{"location":"learn/persistent_storage/index.html#creating_a_persistentvolumeclaim","text":"With a StorageClass available, we can request an amount of storage for our application using a PersistentVolumeClaim . Here is a sample PVC . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 50Gi Note We don't have a StorageClass (SC) explicitly defined within this PVC , therefore it will use the default StorageClass . You can use spec.storageClassName to override the default SC with another one available to the cluster. Create the PersistentVolumeClaim . kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-pvc.yaml We can see the my-pvc PersistentVolumeClaim was created. kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-pvc Bound pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 50Gi RWO hpe-standard 72m Note The PersistentVolume name is randomly generated by Kubernetes. For consistent naming for your stateful applications, check out the StatefulSet deployment model. These names can be used to track the volume back to the storage system. It is important to note that HPE Primera has a 30 character limit on volume names, therefore the name will be truncated. For example: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 will be truncated to pvc-70d5caf8-7558-40e6-a8b7-77d on an HPE Primera system. We can inspect the PVC further for additional information including event logs for troubleshooting. kubectl describe pvc my-pvc Check the Events section to see if there were any issues during creation. The output is similar to this: $ kubectl describe pvc my-pvc Name: my-pvc Namespace: default StorageClass: hpe-standard Status: Bound Volume: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 Labels: Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com Finalizers: [kubernetes.io/pvc-protection] Capacity: 50Gi Access Modes: RWO VolumeMode: Filesystem Mounted By: Events: We can also inspect the PersistentVolume (PV) in a similar manner. Note, the volume name will be unique to your deployment. 
kubectl describe pv The output is similar to this: $ kubectl describe pv pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 Name: pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 Labels: Annotations: pv.kubernetes.io/provisioned-by: csi.hpe.com Finalizers: [kubernetes.io/pv-protection] StorageClass: hpe-standard Status: Bound Claim: default/my-pvc Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 50Gi Node Affinity: Message: Source: Type: CSI (a Container Storage Interface (CSI) volume source) Driver: csi.hpe.com VolumeHandle: 063aba3d50ec99d866000000000000000000000001 ReadOnly: false VolumeAttributes: accessProtocol=iscsi allowOverrides=description,limitIops,performancePolicy description=Volume from HPE CSI Driver fsType=xfs limitIops=76800 performancePolicy=SQL Server storage.kubernetes.io/csiProvisionerIdentity=1583271972595-8081-csi.hpe.com volumeAccessMode=mount Events: With the describe command, you can see the volume parameters used to create this volume. In this case, the Nimble Storage parameters performancePolicy , limitIops , etc. Important If the PVC is stuck in Pending state, double-check that the Secret and Namespace are correct within the StorageClass (sc) and that the volume parameters are valid. If necessary, delete the object (sc or pvc) ( kubectl delete ) and repeat the steps above. Let's recap what we have learned. We created a default StorageClass for our volumes. We created a PVC that created a volume from the StorageClass . We can use kubectl get to list the StorageClass , PVC and PV . We can use kubectl describe to get details on the StorageClass , PVC or PV . At this point, we have validated the deployment of the HPE CSI Driver and are ready to deploy an application with persistent storage.","title":"Creating a PersistentVolumeClaim"},{"location":"learn/persistent_storage/index.html#lab_5_deploying_a_stateful_application_using_hpe_storage_wordpress","text":"To begin, we will create two PersistentVolumes for the WordPress application using the default hpe-standard StorageClass we created previously. If you don't have the hpe-standard StorageClass available, please refer to the StorageClass section for instructions on creating a StorageClass . Create a PersistentVolumeClaim for the MariaDB database that will be used by WordPress. kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/wordpress-mariadb-pvc.yaml Next let's make another volume for the WordPress application. kubectl create -f http://scod.hpedev.io/learn/persistent_storage/yaml/my-wordpress-pvc.yaml Now verify the PersistentVolumes were created successfully. The output should be similar to the following. Note, the volume names will be unique to your deployment. kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-my-wordpress-mariadb-0 Bound pvc-1abdb7d7-374e-45b3-8fa1-534131ec7ec6 50Gi RWO hpe-standard 1m my-wordpress Bound pvc-ff6dc8fd-2b14-4726-b608-be8b27485603 20Gi RWO hpe-standard 1m The above output means that the HPE CSI Driver has successfully provisioned two volumes based upon the default hpe-standard StorageClass . At this stage, the volumes are not attached (exported) to any nodes yet. They will only be attached (exported) to a node once a scheduled workload requests the PersistentVolumeClaims . We will use Helm again to deploy WordPress using the PersistentVolumeClaims we just created. When WordPress is deployed, the volumes will be attached, formatted and mounted. The first step is to add the WordPress chart to Helm. The output should be similar to below. 
helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update helm search repo bitnami/wordpress NAME CHART VERSION APP VERSION DESCRIPTION bitnami/wordpress 11.0.13 5.7.2 Web publishing platform for building blogs and ... Next, deploy WordPress by setting the deployment parameter persistence.existingClaim= to the PVC my-wordpress created in the previous step. helm install my-wordpress bitnami/wordpress --version 9.2.1 --set service.type=ClusterIP,wordpressUsername=admin,wordpressPassword=adminpassword,mariadb.mariadbRootPassword=secretpassword,persistence.existingClaim=my-wordpress,allowEmptyPassword=false Check to verify that WordPress and MariaDB were deployed and are in the Running state. This may take a few minutes. Note The Pod names will be unique to your deployment. kubectl get pods NAME READY STATUS RESTARTS AGE my-wordpress-69b7976c85-9mfjv 1/1 Running 0 2m my-wordpress-mariadb-0 1/1 Running 0 2m Finally, take a look at the WordPress site. Again, we can use kubectl port-forward to access the WordPress application and verify everything is working correctly. kubectl port-forward svc/my-wordpress 80:80 Note If you have something already running locally on port 80, modify the port-forward to an unused port (i.e. 5000:80). Open a browser on your workstation to http://127.0.0.1 and you should see \"Hello World!\" . Access the admin console at: http://127.0.0.1/admin using the \"admin/adminpassword\" we specified when deploying the Helm Chart. Create a new blog post so you have data stored in the WordPress application. Happy Blogging! Once ready, hit \" Ctrl+C \" in your terminal to stop the port-forward . Verify the WordPress application is using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims . kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}' With the WordPress application using persistent storage for the database and the application data, in the event of a crash of the WordPress application, the PVC will be remounted to the new Pod . Delete the WordPress Pod . kubectl delete pod For example: $ kubectl delete pod my-wordpress-69b7976c85-9mfjv pod \"my-wordpress-69b7976c85-9mfjv\" deleted Now run kubectl get pods and you should see the WordPress Pod recreating itself with a new name. This may take a few minutes. Output should be similar to the following as the WordPress container is recreating. $ kubectl get pods NAME READY STATUS RESTARTS AGE my-wordpress-mariadb-0 1/1 Running 1 10m my-wordpress-7856df6756-m2nw8 0/1 ContainerCreating 0 33s Once the WordPress Pod is in Ready state, we can verify that the WordPress application is still using the my-wordpress and data-my-wordpress-mariadb-0 PersistentVolumeClaims . kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}' And finally, run kubectl port-forward again to see that the changes made to the WordPress application survived deleting the application Pod . kubectl port-forward svc/my-wordpress 80:80 Open a browser on your workstation to http://127.0.0.1 and you should see your WordPress site running. This completes the tutorial of using the HPE CSI Driver with HPE storage to create Persistent Volumes within Kubernetes. This is just the beginning of the capabilities of the HPE Storage integrations within Kubernetes. 
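For quick reference, the persistence check in this lab condenses down to a handful of commands (the Pod name below is an example from this guide and will be different in your deployment):
kubectl get pods -o=jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}'
kubectl delete pod my-wordpress-69b7976c85-9mfjv
kubectl get pods
kubectl port-forward svc/my-wordpress 80:80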
We recommend exploring SCOD further and the specific HPE Storage CSP ( Nimble , Primera, and 3PAR ) to learn more.","title":"Lab 5: Deploying a Stateful Application using HPE Storage (WordPress)"},{"location":"learn/persistent_storage/index.html#optional_lab_advanced_configuration","text":"","title":"Optional Lab: Advanced Configuration"},{"location":"learn/persistent_storage/index.html#configuring_additional_storage_backends","text":"It's not uncommon to have multiple HPE primary storage systems within the same environment, either the same family or different ones. This section walks through the scenario of managing multiple StorageClass and Secret API objects to represent an environment with multiple systems. To view the current Secrets in the hpe-storage Namespace (assuming default names): kubectl -n hpe-storage get secret NAME TYPE DATA AGE custom-secret Opaque 5 10m This Secret is used by the CSI sidecars in the StorageClass to authenticate to a specific backend for CSI operations. In order to add a new Secret or manage access to multiple backends, additional Secrets will need to be created per backend. In the previous steps, if you connected to Nimble Storage, create a new Secret for the Primera array or if you connected to Primera array above then create a Secret for the Nimble Storage. Secret Requirements Each Secret name must be unique. servicePort should be set to 8080 . Using your text editor of choice, create a new Secret , specify the name, Namespace , backend username, backend password and the backend IP address to be used by the CSP and save it as gold-secret.yaml . HPE Nimble Storage apiVersion: v1 kind: Secret metadata: name: gold-secret namespace: hpe-storage stringData: serviceName: nimble-csp-svc servicePort: \"8080\" backend: 192.168.1.2 username: admin password: admin HPE Primera apiVersion: v1 kind: Secret metadata: name: gold-secret namespace: hpe-storage stringData: serviceName: primera3par-csp-svc servicePort: \"8080\" backend: 10.10.0.2 username: 3paradm password: 3pardata Create the Secret using kubectl : kubectl create -f gold-secret.yaml You should now see the Secret in the \"hpe-storage\" Namespace : kubectl -n hpe-storage get secret NAME TYPE DATA AGE gold-secret Opaque 5 1m custom-secret Opaque 5 15m","title":"Configuring additional storage backends"},{"location":"learn/persistent_storage/index.html#create_a_storageclass_with_the_new_secret","text":"To use the new gold-secret , create a new StorageClass using the Secret and the necessary StorageClass parameters. Please see the requirements section of the respective CSP . We will start by creating a StorageClass called hpe-gold . We will use the gold-secret created in the previous step and specify the hpe-storage Namespace where the CSI driver was deployed. Note Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created. 
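If you need to verify which StorageClass is currently the default, kubectl get sc marks it with (default) . Should you ever need to demote a StorageClass , the annotation can be flipped with kubectl patch , shown here purely as an illustration using the hpe-standard StorageClass from this guide:
kubectl patch storageclass hpe-standard -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'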
HPE Nimble Storage apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-gold provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: gold-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: gold-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: gold-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: gold-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: gold-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage performancePolicy: \"SQL Server\" description: \"Volume from HPE CSI Driver\" accessProtocol: iscsi limitIops: \"76800\" allowOverrides: description,limitIops,performancePolicy allowVolumeExpansion: true HPE Primera apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hpe-gold provisioner: csi.hpe.com parameters: csi.storage.k8s.io/fstype: xfs csi.storage.k8s.io/provisioner-secret-name: gold-secret csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage csi.storage.k8s.io/controller-publish-secret-name: gold-secret csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage csi.storage.k8s.io/node-stage-secret-name: gold-secret csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage csi.storage.k8s.io/node-publish-secret-name: gold-secret csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage csi.storage.k8s.io/controller-expand-secret-name: gold-secret csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage cpg: SSD_r6 provisioningType: tpvv accessProtocol: iscsi allowOverrides: cpg,provisioningType allowVolumeExpansion: true We can verify the StorageClass is now available. kubectl get sc NAME PROVISIONER AGE hpe-standard (default) csi.hpe.com 15m hpe-gold csi.hpe.com 1m Note Don't forget to call out the StorageClass explicitly when creating PVCs from non-default StorageClasses .","title":"Create a StorageClass with the new Secret"},{"location":"learn/persistent_storage/index.html#creating_a_persistentvolumeclaim_1","text":"With a StorageClass available, we can request an amount of storage for our application using a PersistentVolumeClaim . Using your text editor of choice, create a new PVC and save it as gold-pvc.yaml . apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gold-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 50Gi storageClassName: hpe-gold Create the PersistentVolumeClaim . kubectl create -f gold-pvc.yaml We can see the gold-pvc PersistentVolumeClaim was created. kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-pvc Bound pvc-70d5caf8-7558-40e6-a8b7-77dfcf8ddcd8 50Gi RWO hpe-standard 72m gold-pvc Bound pvc-7a74d656-0b14-42a2-9437-e374a5d3bd68 50Gi RWO hpe-gold 1m You can see that the new PVC is using the new StorageClass which is backed by the additional storage backend, allowing you to add flexibility to your containerized workloads and match the persistent storage requirements to the application.","title":"Creating a PersistentVolumeClaim"},{"location":"learn/persistent_storage/index.html#cleanup_optional","text":"As others will be using this lab at a later time, we can clean up the objects that were deployed during this lab exercise. Note These steps may take a few minutes to complete. 
Please be patient and don't cancel out the process. Remove WordPress & NGINX deployments. helm uninstall my-wordpress && kubectl delete all --all Delete the PersistentVolumeClaims and related objects. kubectl delete pvc --all && kubectl delete sc --all Remove the HPE CSI Driver for Kubernetes. helm uninstall my-hpe-csi-driver -n hpe-storage It takes a couple minutes to cleanup the objects from the CSI driver. You can check the status: watch kubectl get all -n hpe-storage Once everything is removed, Ctrl+C to exit and finally you can remove the Namespace . kubectl delete ns hpe-storage","title":"Cleanup (Optional)"},{"location":"learn/video_gallery/index.html","text":"Overview \u00b6 Welcome to the Video Gallery. This is a collection of current YouTube assets that pertains to supported HPE primary storage container technologies. Overview CSI driver management Managing multiple HPE storage backends using the HPE CSI Driver Container Storage Providers HPE Alletra 9000 and Primera Using the HPE CSI Driver with HPE Primera Configuring HPE Primera Peer Persistence with the HPE CSI Operator for Kubernetes on Red Hat OpenShift HPE Alletra 5000/6000 and Nimble Storage Using the HPE CSI Driver with HPE Nimble Storage Manage multitenancy at scale with HPE Alletra 5000/6000 and Nimble Storage Provisioning Dynamic Provisioning of Persistent Storage on Kubernetes HPE Developer Hack Shack Workshop: Using the Container Storage Interface Using the HPE CSI Driver to create CSI snapshots and clones Synchronize Volume Snapshots for Distributed Workloads Adapt stateful workloads dynamically with the HPE CSI Driver for Kubernetes Partner Ecosystems Get started with Kasten K10 by Veeam and the HPE CSI Driver Install the HPE CSI Operator for Kubernetes on Red Hat OpenShift Using HPE Primera and HPE Nimble Storage with the VMware Tanzu and vSphere CSI Driver Monitoring, Metering and Diagnostics Get Started with the HPE Storage Array Exporter for Prometheus on Kubernetes Use Cases Lift and Transform Apps and Data with HPE Storage Watch more CSI driver management \u00b6 How to manage the components that surrounds driver deployment. Managing multiple HPE storage backends using the HPE CSI Driver \u00b6 This tutorial talks about managing multiple Secrets and StorageClasses to distinguish different backends. Watch on YouTube Container Storage Providers \u00b6 Each CSP has its own features and perks, learn about the different platforms right here. HPE Alletra 9000 and Primera \u00b6 Using the HPE CSI Driver with HPE Primera \u00b6 This tutorial showcases a few of the HPE Primera specific features with the HPE CSI Driver. Watch on YouTube Configuring HPE Primera Peer Persistence with the HPE CSI Operator for Kubernetes on Red Hat OpenShift \u00b6 Learn how to configure HPE Primera Peer Persistence using the HPE CSI Driver. Watch on YouTube HPE Alletra 5000/6000 and Nimble Storage \u00b6 Using the HPE CSI Driver with HPE Nimble Storage \u00b6 This tutorial showcases a few of the HPE Nimble Storage specific features with the HPE CSI Driver. Watch on YouTube Manage multitenancy at scale with HPE Alletra 5000/6000 and Nimble Storage \u00b6 This lightboard video discusses the advantages of using HPE Alletra 5000/6000 or Nimble Storage to handle multitenancy for storage resources between Kubernetes clusters. Watch on YouTube Provisioning \u00b6 The provisioning topic covers provisioning of storage resources on container orchestrators, such as volumes, snapshots and clones. 
Dynamic Provisioning of Persistent Storage on Kubernetes \u00b6 Learn the fundamentals of storage provisioning on Kubernetes. Watch on YouTube HPE Developer Hack Shack Workshop: Using the Container Storage Interface \u00b6 An interactive CSI workshop from HPE Discover Virtual Experience. It explains key provisioning concepts, including CSI snapshots and clones, ephemeral inline volumes, raw block volumes and how to use the NFS server provisioner. Watch on YouTube Using the HPE CSI Driver to create CSI snapshots and clones \u00b6 Learn how to use CSI snapshots and clones with the HPE CSI Driver. Watch on YouTube Synchronize Volume Snapshots for Distributed Workloads \u00b6 Explore how to take advantage of the HPE CSI Driver's exclusive features VolumeGroups and SnapshotGroups . Watch on YouTube Adapt stateful workloads dynamically with the HPE CSI Driver for Kubernetes \u00b6 Learn how to use volume mutations to adapt stateful workloads with the HPE CSI Driver. Watch on YouTube Partner Ecosystems \u00b6 Joint solutions with our revered ecosystem partners. Get started with Kasten K10 by Veeam and the HPE CSI Driver \u00b6 This tutorial explains how to deploy the necessary components for Kasten K10 and how to perform snapshots and restores using the HPE CSI Driver. Watch on YouTube Install the HPE CSI Operator for Kubernetes on Red Hat OpenShift \u00b6 This tutorial goes through the steps of installing the HPE CSI Operator on Red Hat OpenShift. Watch on YouTube Using HPE Primera and HPE Nimble Storage with the VMware Tanzu and vSphere CSI Driver \u00b6 This tutorial shows how to use HPE storage with VMware Tanzu as well as how to configure the vSphere CSI Driver for Kubernetes clusters running on VMware leveraging HPE storage. Watch on YouTube Monitoring, Metering and Diagnostics \u00b6 Tutorials and demos showcasing monitoring and troubleshooting. Get Started with the HPE Storage Array Exporter for Prometheus on Kubernetes \u00b6 Learn how to stand up a Prometheus and Grafana environment on Kubernetes and start using the HPE Storage Array Exporter for Prometheus and the HPE CSI Info Metrics Provider for Prometheus to provide Monitoring and Alerting. Watch on YouTube Use Cases \u00b6 Lift and Transform Apps and Data with HPE Storage \u00b6 This lightboard video discusses how to lift and transform applications running on traditional infrastructure over to Kubernetes using the HPE CSI Driver. Learn the details on what makes this possible in this HPE Developer blog post . Watch on YouTube Watch more \u00b6 A curated playlist of content related to HPE primary storage and containers is available on YouTube .","title":"Video Gallery"},{"location":"learn/video_gallery/index.html#overview","text":"Welcome to the Video Gallery. This is a collection of current YouTube assets that pertains to supported HPE primary storage container technologies. 
Overview CSI driver management Managing multiple HPE storage backends using the HPE CSI Driver Container Storage Providers HPE Alletra 9000 and Primera Using the HPE CSI Driver with HPE Primera Configuring HPE Primera Peer Persistence with the HPE CSI Operator for Kubernetes on Red Hat OpenShift HPE Alletra 5000/6000 and Nimble Storage Using the HPE CSI Driver with HPE Nimble Storage Manage multitenancy at scale with HPE Alletra 5000/6000 and Nimble Storage Provisioning Dynamic Provisioning of Persistent Storage on Kubernetes HPE Developer Hack Shack Workshop: Using the Container Storage Interface Using the HPE CSI Driver to create CSI snapshots and clones Synchronize Volume Snapshots for Distributed Workloads Adapt stateful workloads dynamically with the HPE CSI Driver for Kubernetes Partner Ecosystems Get started with Kasten K10 by Veeam and the HPE CSI Driver Install the HPE CSI Operator for Kubernetes on Red Hat OpenShift Using HPE Primera and HPE Nimble Storage with the VMware Tanzu and vSphere CSI Driver Monitoring, Metering and Diagnostics Get Started with the HPE Storage Array Exporter for Prometheus on Kubernetes Use Cases Lift and Transform Apps and Data with HPE Storage Watch more","title":"Overview"},{"location":"learn/video_gallery/index.html#csi_driver_management","text":"How to manage the components that surrounds driver deployment.","title":"CSI driver management"},{"location":"learn/video_gallery/index.html#managing_multiple_hpe_storage_backends_using_the_hpe_csi_driver","text":"This tutorial talks about managing multiple Secrets and StorageClasses to distinguish different backends. Watch on YouTube","title":"Managing multiple HPE storage backends using the HPE CSI Driver"},{"location":"learn/video_gallery/index.html#container_storage_providers","text":"Each CSP has its own features and perks, learn about the different platforms right here.","title":"Container Storage Providers"},{"location":"learn/video_gallery/index.html#hpe_alletra_9000_and_primera","text":"","title":"HPE Alletra 9000 and Primera"},{"location":"learn/video_gallery/index.html#using_the_hpe_csi_driver_with_hpe_primera","text":"This tutorial showcases a few of the HPE Primera specific features with the HPE CSI Driver. Watch on YouTube","title":"Using the HPE CSI Driver with HPE Primera"},{"location":"learn/video_gallery/index.html#configuring_hpe_primera_peer_persistence_with_the_hpe_csi_operator_for_kubernetes_on_red_hat_openshift","text":"Learn how to configure HPE Primera Peer Persistence using the HPE CSI Driver. Watch on YouTube","title":"Configuring HPE Primera Peer Persistence with the HPE CSI Operator for Kubernetes on Red Hat OpenShift"},{"location":"learn/video_gallery/index.html#hpe_alletra_50006000_and_nimble_storage","text":"","title":"HPE Alletra 5000/6000 and Nimble Storage"},{"location":"learn/video_gallery/index.html#using_the_hpe_csi_driver_with_hpe_nimble_storage","text":"This tutorial showcases a few of the HPE Nimble Storage specific features with the HPE CSI Driver. Watch on YouTube","title":"Using the HPE CSI Driver with HPE Nimble Storage"},{"location":"learn/video_gallery/index.html#manage_multitenancy_at_scale_with_hpe_alletra_50006000_and_nimble_storage","text":"This lightboard video discusses the advantages of using HPE Alletra 5000/6000 or Nimble Storage to handle multitenancy for storage resources between Kubernetes clusters. 
Watch on YouTube","title":"Manage multitenancy at scale with HPE Alletra 5000/6000 and Nimble Storage"},{"location":"learn/video_gallery/index.html#provisioning","text":"The provisioning topic covers provisioning of storage resources on container orchestrators, such as volumes, snapshots and clones.","title":"Provisioning"},{"location":"learn/video_gallery/index.html#dynamic_provisioning_of_persistent_storage_on_kubernetes","text":"Learn the fundamentals of storage provisioning on Kubernetes. Watch on YouTube","title":"Dynamic Provisioning of Persistent Storage on Kubernetes"},{"location":"learn/video_gallery/index.html#hpe_developer_hack_shack_workshop_using_the_container_storage_interface","text":"An interactive CSI workshop from HPE Discover Virtual Experience. It explains key provisioning concepts, including CSI snapshots and clones, ephemeral inline volumes, raw block volumes and how to use the NFS server provisioner. Watch on YouTube","title":"HPE Developer Hack Shack Workshop: Using the Container Storage Interface"},{"location":"learn/video_gallery/index.html#using_the_hpe_csi_driver_to_create_csi_snapshots_and_clones","text":"Learn how to use CSI snapshots and clones with the HPE CSI Driver. Watch on YouTube","title":"Using the HPE CSI Driver to create CSI snapshots and clones"},{"location":"learn/video_gallery/index.html#synchronize_volume_snapshots_for_distributed_workloads","text":"Explore how to take advantage of the HPE CSI Driver's exclusive features VolumeGroups and SnapshotGroups . Watch on YouTube","title":"Synchronize Volume Snapshots for Distributed Workloads"},{"location":"learn/video_gallery/index.html#adapt_stateful_workloads_dynamically_with_the_hpe_csi_driver_for_kubernetes","text":"Learn how to use volume mutations to adapt stateful workloads with the HPE CSI Driver. Watch on YouTube","title":"Adapt stateful workloads dynamically with the HPE CSI Driver for Kubernetes"},{"location":"learn/video_gallery/index.html#partner_ecosystems","text":"Joint solutions with our revered ecosystem partners.","title":"Partner Ecosystems"},{"location":"learn/video_gallery/index.html#get_started_with_kasten_k10_by_veeam_and_the_hpe_csi_driver","text":"This tutorial explains how to deploy the necessary components for Kasten K10 and how to perform snapshots and restores using the HPE CSI Driver. Watch on YouTube","title":"Get started with Kasten K10 by Veeam and the HPE CSI Driver"},{"location":"learn/video_gallery/index.html#install_the_hpe_csi_operator_for_kubernetes_on_red_hat_openshift","text":"This tutorial goes through the steps of installing the HPE CSI Operator on Red Hat OpenShift. Watch on YouTube","title":"Install the HPE CSI Operator for Kubernetes on Red Hat OpenShift"},{"location":"learn/video_gallery/index.html#using_hpe_primera_and_hpe_nimble_storage_with_the_vmware_tanzu_and_vsphere_csi_driver","text":"This tutorial shows how to use HPE storage with VMware Tanzu as well as how to configure the vSphere CSI Driver for Kubernetes clusters running on VMware leveraging HPE storage. 
Watch on YouTube","title":"Using HPE Primera and HPE Nimble Storage with the VMware Tanzu and vSphere CSI Driver"},{"location":"learn/video_gallery/index.html#monitoring_metering_and_diagnostics","text":"Tutorials and demos showcasing monitoring and troubleshooting.","title":"Monitoring, Metering and Diagnostics"},{"location":"learn/video_gallery/index.html#get_started_with_the_hpe_storage_array_exporter_for_prometheus_on_kubernetes","text":"Learn how to stand up a Prometheus and Grafana environment on Kubernetes and start using the HPE Storage Array Exporter for Prometheus and the HPE CSI Info Metrics Provider for Prometheus to provide Monitoring and Alerting. Watch on YouTube","title":"Get Started with the HPE Storage Array Exporter for Prometheus on Kubernetes"},{"location":"learn/video_gallery/index.html#use_cases","text":"","title":"Use Cases"},{"location":"learn/video_gallery/index.html#lift_and_transform_apps_and_data_with_hpe_storage","text":"This lightboard video discusses how to lift and transform applications running on traditional infrastructure over to Kubernetes using the HPE CSI Driver. Learn the details on what makes this possible in this HPE Developer blog post . Watch on YouTube","title":"Lift and Transform Apps and Data with HPE Storage"},{"location":"learn/video_gallery/index.html#watch_more","text":"A curated playlist of content related to HPE primary storage and containers is available on YouTube .","title":"Watch more"},{"location":"legacy/index.html","text":"Overview \u00b6 These integrations are either already deprecated or being phased out. Please work with your HPE representative if you think you need to run any of these plugins and drivers. Container Storage Providers \u00b6 HPE Cloud Volumes Legacy FlexVolume drivers \u00b6 Container Provider: Nimble and CV Ansible installer for 3PAR/Primera History: Dory and Doryd Docker Volume plugins \u00b6 HPE Cloud Volumes HPE Nimble Storage","title":"Docker, FlexVolume and CSPs"},{"location":"legacy/index.html#overview","text":"These integrations are either already deprecated or being phased out. Please work with your HPE representative if you think you need to run any of these plugins and drivers.","title":"Overview"},{"location":"legacy/index.html#container_storage_providers","text":"HPE Cloud Volumes","title":"Container Storage Providers"},{"location":"legacy/index.html#legacy_flexvolume_drivers","text":"Container Provider: Nimble and CV Ansible installer for 3PAR/Primera History: Dory and Doryd","title":"Legacy FlexVolume drivers"},{"location":"legacy/index.html#docker_volume_plugins","text":"HPE Cloud Volumes HPE Nimble Storage","title":"Docker Volume plugins"},{"location":"legal/contributing/index.html","text":"Introduction \u00b6 We welcome and encourage community contributions to SCOD. Where to start? \u00b6 The best way to directly collaborate with the project contributors is through GitHub: https://github.com/hpe-storage/scod If you want to contribute to our documentation by either fixing a typo or creating a page, please open a GitHub pull request . If you want to raise an issue such as a defect, an enhancement request or a general issue, please open a GitHub issue . Before you start writing, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your contribution, and help you find out if someone else is working on the same thing. 
Note that all submissions from all contributors get reviewed. After a pull request is made, other contributors will offer feedback. If the patch passes review, a maintainer will accept it with a comment. When a pull request fails review, the author is expected to update the pull request to address the issue until it passes review and the pull request merges successfully. At least one review from a maintainer is required for all patches. Developer's Certificate of Origin \u00b6 All contributions must include acceptance of the DCO: Developer Certificate of Origin Version 1.1 Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 660 York Street, Suite 102, San Francisco, CA 94110 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. Sign your work \u00b6 To accept the DCO, simply add this line to each commit message with your name and email address ( git commit -s will do this for you): Signed-off-by: Jane Example For legal reasons, no anonymous or pseudonymous contributions are accepted. Submitting Pull Requests \u00b6 We encourage and support contributions from the community. No fix is too small. We strive to process all pull requests as soon as possible and with constructive feedback. If your pull request is not accepted at first, please try again after addressing the feedback you received. To make a pull request you will need a GitHub account. For help, see GitHub's documentation on forking and pull requests.","title":"Contributing"},{"location":"legal/contributing/index.html#introduction","text":"We welcome and encourage community contributions to SCOD.","title":"Introduction"},{"location":"legal/contributing/index.html#where_to_start","text":"The best way to directly collaborate with the project contributors is through GitHub: https://github.com/hpe-storage/scod If you want to contribute to our documentation by either fixing a typo or creating a page, please open a GitHub pull request . If you want to raise an issue such as a defect, an enhancement request or a general issue, please open a GitHub issue . Before you start writing, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your contribution, and help you find out if someone else is working on the same thing. 
Note that all submissions from all contributors get reviewed. After a pull request is made, other contributors will offer feedback. If the patch passes review, a maintainer will accept it with a comment. When a pull request fails review, the author is expected to update the pull request to address the issue until it passes review and the pull request merges successfully. At least one review from a maintainer is required for all patches.","title":"Where to start?"},{"location":"legal/contributing/index.html#developers_certificate_of_origin","text":"All contributions must include acceptance of the DCO: Developer Certificate of Origin Version 1.1 Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 660 York Street, Suite 102, San Francisco, CA 94110 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.","title":"Developer's Certificate of Origin"},{"location":"legal/contributing/index.html#sign_your_work","text":"To accept the DCO, simply add this line to each commit message with your name and email address ( git commit -s will do this for you): Signed-off-by: Jane Example For legal reasons, no anonymous or pseudonymous contributions are accepted.","title":"Sign your work"},{"location":"legal/contributing/index.html#submitting_pull_requests","text":"We encourage and support contributions from the community. No fix is too small. We strive to process all pull requests as soon as possible and with constructive feedback. If your pull request is not accepted at first, please try again after addressing the feedback you received. To make a pull request you will need a GitHub account. For help, see GitHub's documentation on forking and pull requests.","title":"Submitting Pull Requests"},{"location":"legal/license/index.html","text":"Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License. \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\" \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets \"[]\" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same \"printed page\" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.","title":"License"},{"location":"legal/notices/index.html","text":"Attributions for third party components. 
HPE CSI Info Metrics Provider for Prometheus HPE CSI Info Metrics Provider for Prometheus \u00b6 HPE CSI Info Metrics Provider for Prometheus Copyright 2020-2024 Hewlett Packard Enterprise Development LP This product contains the following third party components: Google Cloud Go cloud.google.com/go Licensed under the Apache-2.0 license mtl dmitri.shuralyov.com/gpu/mtl Licensed under the BSD-3-Clause license go-autorest github.com/Azure/go-autorest Licensed under the Apache-2.0 license Tom's Obvious Minimal Language github.com/BurntSushi/toml Licensed under the MIT license X Go Binding github.com/BurntSushi/xgb Licensed under the BSD-3-Clause license Gzip Handler github.com/NYTimes/gziphandler Licensed under the Apache-2.0 license Purell github.com/PuerkitoBio/purell Licensed under the BSD-3-Clause license urlesc github.com/PuerkitoBio/urlesc Licensed under the BSD-3-Clause license text/template github.com/alecthomas/template Licensed under the BSD-3-Clause license Units github.com/alecthomas/units Licensed under the MIT license govalidator github.com/asaskevich/govalidator Licensed under the MIT license Perks for Go github.com/beorn7/perks Licensed under the MIT license OpenCensus Proto github.com/census-instrumentation/opencensus-proto Licensed under the Apache-2.0 license xxhash github.com/cespare/xxhash/v2 Licensed under the MIT license Logex github.com/chzyer/logex Licensed under the MIT license ReadLine github.com/chzyer/readline Licensed under the MIT license test github.com/chzyer/test Licensed under the MIT license misspell github.com/client9/misspell Licensed under the MIT license pty github.com/creack/pty Licensed under the MIT license go-spew github.com/davecgh/go-spew Licensed under the ISC license docopt-go github.com/docopt/docopt-go Licensed under the MIT license goproxy github.com/elazarl/goproxy Licensed under the BSD-3-Clause license go-restful github.com/emicklei/go-restful Licensed under the MIT license control-plane github.com/envoyproxy/go-control-plane Licensed under the Apache-2.0 license protoc-gen-validate (PGV) github.com/envoyproxy/protoc-gen-validate Licensed under the Apache-2.0 license JSON-Patch github.com/evanphx/json-patch Licensed under the BSD-3-Clause license jwt-go github.com/form3tech-oss/jwt-go Licensed under the MIT license File system notifications for Go github.com/fsnotify/fsnotify Licensed under the BSD-3-Clause license GLFW for Go github.com/go-gl/glfw Licensed under the BSD-3-Clause license Go kit github.com/go-kit/kit Licensed under the MIT license package log github.com/go-kit/log Licensed under the MIT license logfmt github.com/go-logfmt/logfmt Licensed under the MIT license logr, A minimal logging API for Go github.com/go-logr/logr Licensed under the Apache-2.0 license gojsonpointer github.com/go-openapi/jsonpointer Licensed under the Apache-2.0 license gojsonreference github.com/go-openapi/jsonreference Licensed under the Apache-2.0 license OAI object model github.com/go-openapi/spec Licensed under the Apache-2.0 license Swag github.com/go-openapi/swag Licensed under the Apache-2.0 license stack github.com/go-stack/stack Licensed under the MIT license Protocol Buffers for Go with Gadgets github.com/gogo/protobuf Licensed under the BSD-3-Clause license glog github.com/golang/glog Licensed under the Apache-2.0 license groupcache github.com/golang/groupcache Licensed under the Apache-2.0 license gomock github.com/golang/mock Licensed under the Apache-2.0 license Go support for Protocol Buffers github.com/golang/protobuf Licensed under the 
BSD-3-Clause license BTree implementation for Go github.com/google/btree Licensed under the Apache-2.0 license Package for equality of Go values github.com/google/go-cmp Licensed under the BSD-3-Clause license gofuzz github.com/google/gofuzz Licensed under the Apache-2.0 license Martian Proxy github.com/google/martian Licensed under the Apache-2.0 license pprof github.com/google/pprof Licensed under the Apache-2.0 license renameio github.com/google/renameio Licensed under the Apache-2.0 license uuid github.com/google/uuid Licensed under the BSD-3-Clause license Google API Extensions for Go github.com/googleapis/gax-go/v2 Licensed under the BSD-3-Clause license gnostic github.com/googleapis/gnostic Licensed under the Apache-2.0 license Gorilla WebSocket github.com/gorilla/websocket Licensed under the BSD-2-Clause license httpcache github.com/gregjones/httpcache Licensed under the MIT license golang-lru github.com/hashicorp/golang-lru Licensed under the MPL-2.0 license Go package for tail-ing files github.com/hpcloud/tail Licensed under the MIT license demangle github.com/ianlancetaylor/demangle Licensed under the BSD-3-Clause license Mergo github.com/imdario/mergo Licensed under the BSD-3-Clause license Backoff github.com/jpillora/backoff Licensed under the MIT license json-iterator github.com/json-iterator/go Licensed under the MIT license go-junit-report github.com/jstemmer/go-junit-report Licensed under the MIT license errcheck github.com/kisielk/errcheck Licensed under the MIT license gotool github.com/kisielk/gotool Licensed under the MIT license Windows Terminal Sequences github.com/konsorten/go-windows-terminal-sequences Licensed under the MIT license logfmt github.com/kr/logfmt Licensed under the MIT license pretty github.com/kr/pretty Licensed under the MIT license pty github.com/kr/pty Licensed under the MIT license text github.com/kr/text Licensed under the MIT license easyjson github.com/mailru/easyjson Licensed under the MIT license golang protobuf extensions github.com/matttproud/golang_protobuf_extensions Licensed under the Apache-2.0 license with the notice: Copyright 2012 Matt T. Proud (matt.proud@gmail.com) mapstructure github.com/mitchellh/mapstructure Licensed under the MIT license SpdyStream github.com/moby/spdystream Licensed under the Apache-2.0 license with the notice: SpdyStream Copyright 2014-2021 Docker Inc. This product includes software developed at Docker Inc. (https://www.docker.com/). 
concurrent github.com/modern-go/concurrent Licensed under the Apache-2.0 license reflect2 github.com/modern-go/reflect2 Licensed under the Apache-2.0 license goautoneg github.com/munnerz/goautoneg Licensed under the BSD-3-Clause license Go tracing and monitoring (Prometheus) for net.Conn github.com/mwitkow/go-conntrack Licensed under the Apache-2.0 license Data Flow Rate Control github.com/mxk/go-flowrate Licensed under the BSD-3-Clause license pretty github.com/niemeyer/pretty Licensed under the MIT license Ginkgo github.com/onsi/ginkgo Licensed under the MIT license Gomega github.com/onsi/gomega Licensed under the MIT license diskv github.com/peterbourgon/diskv Licensed under the MIT license errors github.com/pkg/errors Licensed under the BSD-2-Clause license go-difflib github.com/pmezard/go-difflib Licensed under the BSD-3-Clause license Prometheus Go client library github.com/prometheus/client_golang Licensed under the Apache-2.0 license with the following notice: Prometheus instrumentation library for Go applications Copyright 2012-2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). The following components are included in this product: perks - a fork of https://github.com/bmizerany/perks https://github.com/beorn7/perks Copyright 2013-2015 Blake Mizerany, Bj\u00f6rn Rabenstein See https://github.com/beorn7/perks/blob/master/README.md for license details. Go support for Protocol Buffers - Google's data interchange format http://github.com/golang/protobuf/ Copyright 2010 The Go Authors See source code for license details. Support for streaming Protocol Buffer messages for the Go language (golang). https://github.com/matttproud/golang_protobuf_extensions Copyright 2013 Matt T. Proud Licensed under the Apache License, Version 2.0 Prometheus Go client model github.com/prometheus/client_model Licensed under the Apache-2.0 license with the following notice: Data model artifacts for Prometheus. Copyright 2012-2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). Common github.com/prometheus/common Licensed under the Apache-2.0 license with the following notice: Common libraries shared by Prometheus Go components. Copyright 2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). procfs github.com/prometheus/procfs Licensed under the Apache-2.0 license with the following notice: procfs provides functions to retrieve system, kernel and process metrics from the pseudo-filesystem proc. Copyright 2014-2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). 
go-internal github.com/rogpeppe/go-internal Licensed under the BSD-3-Clause license Logrus github.com/sirupsen/logrus Licensed under the MIT license AFERO github.com/spf13/afero Licensed under the Apache-2.0 license pflag github.com/spf13/pflag Licensed under the BSD-3-Clause license Objx github.com/stretchr/objx Licensed under the MIT license Testify github.com/stretchr/testify Licensed under the MIT license goldmark github.com/yuin/goldmark Licensed under the MIT license OpenCensus Libraries for Go go.opencensus.io Licensed under the Apache-2.0 license Go Cryptography golang.org/x/crypto Licensed under the BSD-3-Clause license exp golang.org/x/exp Licensed under the BSD-3-Clause license Go Images golang.org/x/image Licensed under the BSD-3-Clause license lint golang.org/x/lint Licensed under the BSD-3-Clause license Go support for Mobile devices golang.org/x/mobile Licensed under the BSD-3-Clause license mod golang.org/x/mod Licensed under the BSD-3-Clause license Go Networking golang.org/x/net Licensed under the BSD-3-Clause license OAuth2 for Go golang.org/x/oauth2 Licensed under the BSD-3-Clause license Go Sync golang.org/x/sync Licensed under the BSD-3-Clause license sys golang.org/x/sys Licensed under the BSD-3-Clause license Go terminal/console support golang.org/x/term Licensed under the BSD-3-Clause license Go Text golang.org/x/text Licensed under the BSD-3-Clause license Go Time golang.org/x/time Licensed under the BSD-3-Clause license Go Tools golang.org/x/tools Licensed under the BSD-3-Clause license xerrors golang.org/x/xerrors Licensed under the BSD-3-Clause license Google APIs Client Library for Go google.golang.org/api Licensed under the BSD-3-Clause license Go App Engine packages google.golang.org/appengine Licensed under the Apache-2.0 license Go generated proto packages google.golang.org/genproto Licensed under the Apache-2.0 license gRPC-Go google.golang.org/grpc Licensed under the Apache-2.0 license with the following notice: Copyright 2014 gRPC authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Go support for Protocol Buffers google.golang.org/protobuf Licensed under the BSD-3-Clause license Kingpin - A Go (golang) command line and flag parser gopkg.in/alecthomas/kingpin.v2 Licensed under the MIT license check gopkg.in/check.v1 Licensed under the BSD-3-Clause license errgo gopkg.in/errgo.v2 Licensed under the BSD-3-Clause license File system notifications for Go gopkg.in/fsnotify.v1 Licensed under the BSD-3-Clause license inf gopkg.in/inf.v0 Licensed under the BSD-3-Clause license lumberjack gopkg.in/natefinch/lumberjack.v2 Licensed under the MIT license tomb gopkg.in/tomb.v1 Licensed under the BSD-3-Clause license gopkg.in/yaml.v2 Licensed under the Apache-2.0 license with the following notice: Copyright 2011-2016 Canonical Ltd. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. YAML support for the Go language gopkg.in/yaml.v3 Licensed under the Apache-2.0 license with the following notice: Copyright 2011-2016 Canonical Ltd. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. go-tools honnef.co/go/tools Licensed under the MIT license api k8s.io/api Licensed under the Apache-2.0 license apimachinery k8s.io/apimachinery Licensed under the Apache-2.0 license client-go k8s.io/client-go Licensed under the Apache-2.0 license gengo k8s.io/gengo Licensed under the Apache-2.0 license klog k8s.io/klog/v2 Licensed under the Apache-2.0 license kube-openapi k8s.io/kube-openapi Licensed under the Apache-2.0 license utils k8s.io/utils Licensed under the Apache-2.0 license binaryregexp rsc.io/binaryregexp Licensed under the BSD-3-Clause license quote rsc.io/quote/v3 Licensed under the BSD-3-Clause license sampler rsc.io/sampler Licensed under the BSD-3-Clause license Structured Merge and Diff sigs.k8s.io/structured-merge-diff/v4 Licensed under the Apache-2.0 license YAML marshaling and unmarshaling support for Go sigs.k8s.io/yaml Licensed under the MIT license Licenses: MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
\"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License. \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\" \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: You must give any other recipients of the Work or Derivative Works a copy of this License; and You must cause any modified files to carry prominent notices stating that You changed the files; and You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS BSD-3-Clause License Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. BSD-2-Clause License Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ISC License Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Mozilla Public License, version 2.0 1. Definitions 1.1. \"Contributor\" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. \"Contributor Version\" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. 1.3. \"Contribution\" means Covered Software of a particular Contributor. 1.4. \"Covered Software\" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. \"Incompatible With Secondary Licenses\" means a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or b. that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. \"Executable Form\" means any form of the work other than Source Code Form. 1.7. 
\"Larger Work\" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. \"License\" means this document. 1.9. \"Licensable\" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. \"Modifications\" means any of the following: a. any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or b. any new file in Source Code Form that contains any Covered Software. 1.11. \"Patent Claims\" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. \"Secondary License\" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. \"Source Code Form\" means the form of the work preferred for making modifications. 1.14. \"You\" (or \"Your\") means an individual or a legal entity exercising rights under this License. For legal entities, \"You\" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, \"control\" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: a. under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and b. under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: a. for any code that a Contributor has removed from Covered Software; or b. for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or c. under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. 
Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: a. such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and b. You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. 
You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. 6. Disclaimer of Warranty Covered Software is provided under this License on an \"as is\" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. 7. 
Limitation of Liability Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. 8. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. 9. Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. 
Exhibit B - \"Incompatible With Secondary Licenses\" Notice This Source Code Form is \"Incompatible With Secondary Licenses\", as defined by the Mozilla Public License, v. 2.0.","title":"Notices"},{"location":"legal/notices/index.html#hpe_csi_info_metrics_provider_for_prometheus","text":"HPE CSI Info Metrics Provider for Prometheus Copyright 2020-2024 Hewlett Packard Enterprise Development LP This product contains the following third party components: Google Cloud Go cloud.google.com/go Licensed under the Apache-2.0 license mtl dmitri.shuralyov.com/gpu/mtl Licensed under the BSD-3-Clause license go-autorest github.com/Azure/go-autorest Licensed under the Apache-2.0 license Tom's Obvious Minimal Language github.com/BurntSushi/toml Licensed under the MIT license X Go Binding github.com/BurntSushi/xgb Licensed under the BSD-3-Clause license Gzip Handler github.com/NYTimes/gziphandler Licensed under the Apache-2.0 license Purell github.com/PuerkitoBio/purell Licensed under the BSD-3-Clause license urlesc github.com/PuerkitoBio/urlesc Licensed under the BSD-3-Clause license text/template github.com/alecthomas/template Licensed under the BSD-3-Clause license Units github.com/alecthomas/units Licensed under the MIT license govalidator github.com/asaskevich/govalidator Licensed under the MIT license Perks for Go github.com/beorn7/perks Licensed under the MIT license OpenCensus Proto github.com/census-instrumentation/opencensus-proto Licensed under the Apache-2.0 license xxhash github.com/cespare/xxhash/v2 Licensed under the MIT license Logex github.com/chzyer/logex Licensed under the MIT license ReadLine github.com/chzyer/readline Licensed under the MIT license test github.com/chzyer/test Licensed under the MIT license misspell github.com/client9/misspell Licensed under the MIT license pty github.com/creack/pty Licensed under the MIT license go-spew github.com/davecgh/go-spew Licensed under the ISC license docopt-go github.com/docopt/docopt-go Licensed under the MIT license goproxy github.com/elazarl/goproxy Licensed under the BSD-3-Clause license go-restful github.com/emicklei/go-restful Licensed under the MIT license control-plane github.com/envoyproxy/go-control-plane Licensed under the Apache-2.0 license protoc-gen-validate (PGV) github.com/envoyproxy/protoc-gen-validate Licensed under the Apache-2.0 license JSON-Patch github.com/evanphx/json-patch Licensed under the BSD-3-Clause license jwt-go github.com/form3tech-oss/jwt-go Licensed under the MIT license File system notifications for Go github.com/fsnotify/fsnotify Licensed under the BSD-3-Clause license GLFW for Go github.com/go-gl/glfw Licensed under the BSD-3-Clause license Go kit github.com/go-kit/kit Licensed under the MIT license package log github.com/go-kit/log Licensed under the MIT license logfmt github.com/go-logfmt/logfmt Licensed under the MIT license logr, A minimal logging API for Go github.com/go-logr/logr Licensed under the Apache-2.0 license gojsonpointer github.com/go-openapi/jsonpointer Licensed under the Apache-2.0 license gojsonreference github.com/go-openapi/jsonreference Licensed under the Apache-2.0 license OAI object model github.com/go-openapi/spec Licensed under the Apache-2.0 license Swag github.com/go-openapi/swag Licensed under the Apache-2.0 license stack github.com/go-stack/stack Licensed under the MIT license Protocol Buffers for Go with Gadgets github.com/gogo/protobuf Licensed under the BSD-3-Clause license glog github.com/golang/glog Licensed under the Apache-2.0 license groupcache 
github.com/golang/groupcache Licensed under the Apache-2.0 license gomock github.com/golang/mock Licensed under the Apache-2.0 license Go support for Protocol Buffers github.com/golang/protobuf Licensed under the BSD-3-Clause license BTree implementation for Go github.com/google/btree Licensed under the Apache-2.0 license Package for equality of Go values github.com/google/go-cmp Licensed under the BSD-3-Clause license gofuzz github.com/google/gofuzz Licensed under the Apache-2.0 license Martian Proxy github.com/google/martian Licensed under the Apache-2.0 license pprof github.com/google/pprof Licensed under the Apache-2.0 license renameio github.com/google/renameio Licensed under the Apache-2.0 license uuid github.com/google/uuid Licensed under the BSD-3-Clause license Google API Extensions for Go github.com/googleapis/gax-go/v2 Licensed under the BSD-3-Clause license gnostic github.com/googleapis/gnostic Licensed under the Apache-2.0 license Gorilla WebSocket github.com/gorilla/websocket Licensed under the BSD-2-Clause license httpcache github.com/gregjones/httpcache Licensed under the MIT license golang-lru github.com/hashicorp/golang-lru Licensed under the MPL-2.0 license Go package for tail-ing files github.com/hpcloud/tail Licensed under the MIT license demangle github.com/ianlancetaylor/demangle Licensed under the BSD-3-Clause license Mergo github.com/imdario/mergo Licensed under the BSD-3-Clause license Backoff github.com/jpillora/backoff Licensed under the MIT license json-iterator github.com/json-iterator/go Licensed under the MIT license go-junit-report github.com/jstemmer/go-junit-report Licensed under the MIT license errcheck github.com/kisielk/errcheck Licensed under the MIT license gotool github.com/kisielk/gotool Licensed under the MIT license Windows Terminal Sequences github.com/konsorten/go-windows-terminal-sequences Licensed under the MIT license logfmt github.com/kr/logfmt Licensed under the MIT license pretty github.com/kr/pretty Licensed under the MIT license pty github.com/kr/pty Licensed under the MIT license text github.com/kr/text Licensed under the MIT license easyjson github.com/mailru/easyjson Licensed under the MIT license golang protobuf extensions github.com/matttproud/golang_protobuf_extensions Licensed under the Apache-2.0 license with the notice: Copyright 2012 Matt T. Proud (matt.proud@gmail.com) mapstructure github.com/mitchellh/mapstructure Licensed under the MIT license SpdyStream github.com/moby/spdystream Licensed under the Apache-2.0 license with the notice: SpdyStream Copyright 2014-2021 Docker Inc. This product includes software developed at Docker Inc. (https://www.docker.com/). 
concurrent github.com/modern-go/concurrent Licensed under the Apache-2.0 license reflect2 github.com/modern-go/reflect2 Licensed under the Apache-2.0 license goautoneg github.com/munnerz/goautoneg Licensed under the BSD-3-Clause license Go tracing and monitoring (Prometheus) for net.Conn github.com/mwitkow/go-conntrack Licensed under the Apache-2.0 license Data Flow Rate Control github.com/mxk/go-flowrate Licensed under the BSD-3-Clause license pretty github.com/niemeyer/pretty Licensed under the MIT license Ginkgo github.com/onsi/ginkgo Licensed under the MIT license Gomega github.com/onsi/gomega Licensed under the MIT license diskv github.com/peterbourgon/diskv Licensed under the MIT license errors github.com/pkg/errors Licensed under the BSD-2-Clause license go-difflib github.com/pmezard/go-difflib Licensed under the BSD-3-Clause license Prometheus Go client library github.com/prometheus/client_golang Licensed under the Apache-2.0 license with the following notice: Prometheus instrumentation library for Go applications Copyright 2012-2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). The following components are included in this product: perks - a fork of https://github.com/bmizerany/perks https://github.com/beorn7/perks Copyright 2013-2015 Blake Mizerany, Bj\u00f6rn Rabenstein See https://github.com/beorn7/perks/blob/master/README.md for license details. Go support for Protocol Buffers - Google's data interchange format http://github.com/golang/protobuf/ Copyright 2010 The Go Authors See source code for license details. Support for streaming Protocol Buffer messages for the Go language (golang). https://github.com/matttproud/golang_protobuf_extensions Copyright 2013 Matt T. Proud Licensed under the Apache License, Version 2.0 Prometheus Go client model github.com/prometheus/client_model Licensed under the Apache-2.0 license with the following notice: Data model artifacts for Prometheus. Copyright 2012-2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). Common github.com/prometheus/common Licensed under the Apache-2.0 license with the following notice: Common libraries shared by Prometheus Go components. Copyright 2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). procfs github.com/prometheus/procfs Licensed under the Apache-2.0 license with the following notice: procfs provides functions to retrieve system, kernel and process metrics from the pseudo-filesystem proc. Copyright 2014-2015 The Prometheus Authors This product includes software developed at SoundCloud Ltd. (http://soundcloud.com/). 
go-internal github.com/rogpeppe/go-internal Licensed under the BSD-3-Clause license Logrus github.com/sirupsen/logrus Licensed under the MIT license AFERO github.com/spf13/afero Licensed under the Apache-2.0 license pflag github.com/spf13/pflag Licensed under the BSD-3-Clause license Objx github.com/stretchr/objx Licensed under the MIT license Testify github.com/stretchr/testify Licensed under the MIT license goldmark github.com/yuin/goldmark Licensed under the MIT license OpenCensus Libraries for Go go.opencensus.io Licensed under the Apache-2.0 license Go Cryptography golang.org/x/crypto Licensed under the BSD-3-Clause license exp golang.org/x/exp Licensed under the BSD-3-Clause license Go Images golang.org/x/image Licensed under the BSD-3-Clause license lint golang.org/x/lint Licensed under the BSD-3-Clause license Go support for Mobile devices golang.org/x/mobile Licensed under the BSD-3-Clause license mod golang.org/x/mod Licensed under the BSD-3-Clause license Go Networking golang.org/x/net Licensed under the BSD-3-Clause license OAuth2 for Go golang.org/x/oauth2 Licensed under the BSD-3-Clause license Go Sync golang.org/x/sync Licensed under the BSD-3-Clause license sys golang.org/x/sys Licensed under the BSD-3-Clause license Go terminal/console support golang.org/x/term Licensed under the BSD-3-Clause license Go Text golang.org/x/text Licensed under the BSD-3-Clause license Go Time golang.org/x/time Licensed under the BSD-3-Clause license Go Tools golang.org/x/tools Licensed under the BSD-3-Clause license xerrors golang.org/x/xerrors Licensed under the BSD-3-Clause license Google APIs Client Library for Go google.golang.org/api Licensed under the BSD-3-Clause license Go App Engine packages google.golang.org/appengine Licensed under the Apache-2.0 license Go generated proto packages google.golang.org/genproto Licensed under the Apache-2.0 license gRPC-Go google.golang.org/grpc Licensed under the Apache-2.0 license with the following notice: Copyright 2014 gRPC authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Go support for Protocol Buffers google.golang.org/protobuf Licensed under the BSD-3-Clause license Kingpin - A Go (golang) command line and flag parser gopkg.in/alecthomas/kingpin.v2 Licensed under the MIT license check gopkg.in/check.v1 Licensed under the BSD-3-Clause license errgo gopkg.in/errgo.v2 Licensed under the BSD-3-Clause license File system notifications for Go gopkg.in/fsnotify.v1 Licensed under the BSD-3-Clause license inf gopkg.in/inf.v0 Licensed under the BSD-3-Clause license lumberjack gopkg.in/natefinch/lumberjack.v2 Licensed under the MIT license tomb gopkg.in/tomb.v1 Licensed under the BSD-3-Clause license gopkg.in/yaml.v2 Licensed under the Apache-2.0 license with the following notice: Copyright 2011-2016 Canonical Ltd. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. YAML support for the Go language gopkg.in/yaml.v3 Licensed under the Apache-2.0 license with the following notice: Copyright 2011-2016 Canonical Ltd. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. go-tools honnef.co/go/tools Licensed under the MIT license api k8s.io/api Licensed under the Apache-2.0 license apimachinery k8s.io/apimachinery Licensed under the Apache-2.0 license client-go k8s.io/client-go Licensed under the Apache-2.0 license gengo k8s.io/gengo Licensed under the Apache-2.0 license klog k8s.io/klog/v2 Licensed under the Apache-2.0 license kube-openapi k8s.io/kube-openapi Licensed under the Apache-2.0 license utils k8s.io/utils Licensed under the Apache-2.0 license binaryregexp rsc.io/binaryregexp Licensed under the BSD-3-Clause license quote rsc.io/quote/v3 Licensed under the BSD-3-Clause license sampler rsc.io/sampler Licensed under the BSD-3-Clause license Structured Merge and Diff sigs.k8s.io/structured-merge-diff/v4 Licensed under the Apache-2.0 license YAML marshaling and unmarshaling support for Go sigs.k8s.io/yaml Licensed under the MIT license Licenses: MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
\"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License. \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\" \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: You must give any other recipients of the Work or Derivative Works a copy of this License; and You must cause any modified files to carry prominent notices stating that You changed the files; and You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS BSD-3-Clause License Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. BSD-2-Clause License Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ISC License Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Mozilla Public License, version 2.0 1. Definitions 1.1. \"Contributor\" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. \"Contributor Version\" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. 1.3. \"Contribution\" means Covered Software of a particular Contributor. 1.4. \"Covered Software\" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. \"Incompatible With Secondary Licenses\" means a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or b. that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. \"Executable Form\" means any form of the work other than Source Code Form. 1.7. 
\"Larger Work\" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. \"License\" means this document. 1.9. \"Licensable\" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. \"Modifications\" means any of the following: a. any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or b. any new file in Source Code Form that contains any Covered Software. 1.11. \"Patent Claims\" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. \"Secondary License\" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. \"Source Code Form\" means the form of the work preferred for making modifications. 1.14. \"You\" (or \"Your\") means an individual or a legal entity exercising rights under this License. For legal entities, \"You\" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, \"control\" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: a. under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and b. under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: a. for any code that a Contributor has removed from Covered Software; or b. for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or c. under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. 
Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: a. such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and b. You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. 
You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. 6. Disclaimer of Warranty Covered Software is provided under this License on an \"as is\" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. 7. 
Limitation of Liability Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. 8. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. 9. Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. 
Exhibit B - \"Incompatible With Secondary Licenses\" Notice This Source Code Form is \"Incompatible With Secondary Licenses\", as defined by the Mozilla Public License, v. 2.0.","title":"HPE CSI Info Metrics Provider for Prometheus"},{"location":"legal/support/index.html","text":"Statement \u00b6 Software components documented on SCOD are generally covered by a valid support contract on the HPE product being used. Terms and conditions may be found in the support contract. Please reach out to your official HPE representative or HPE partner for any uncertainties. CSI Info Metrics Provider support \u00b6 The HPE CSI Info Metrics Provider for Prometheus is supported by HPE when used with HPE storage arrays on valid support contracts. Send email to support@nimblestorage.com to get started with any issue that requires assistance. Engage your HPE representative for other means to contact HPE Storage support directly. Container Storage Providers \u00b6 Each Container Storage Provider (CSP) uses its own official support routes to resolve any issues with the HPE CSI Driver for Kubernetes and the respective CSP. HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider support \u00b6 This software is supported by HPE when used with HPE Nimble Storage arrays on valid support contracts. Please send an email to support@nimblestorage.com to get started with any issue you might need assistance with. Engage your HPE representative for other ways to get in touch with Nimble support directly. The HPE Alletra 5000/6000 and Nimble Storage organization has made a commitment to our customers to exert reasonable effort in supporting any industry-standard configuration. We do not limit our customers to only what is explicitly listed on SPOCK or the Validated Configuration Matrix (VCM), which lists tested or verified configurations (what the HPE Alletra 5000/6000 and Nimble Storage organization commonly refers to as \"Qualified\" Configurations). Essentially, this means that we will exert reasonable effort to support any industry-standard configuration up to the point where we find, or become aware of, an issue that requires some other course of action * . Example cases where support may not be possible include: Configurations explicitly called out by SPOCK or the VCM as known not to work properly An OS (legacy or otherwise) that does not support or contain functionality needed by the customer A vendor that does not or will not support the requested functionality (either through a violation of their Best Practices or because the product is End-of-Life/Support with that vendor) * = In the event where other vendors need to be consulted, the HPE Nimble Support team will not disengage from the Support Action. HPE Nimble Support will continue to partner with the customer and other vendors to search for the correct answers to the issue. HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Container Storage Provider support \u00b6 Limited to the HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage Container Storage Provider (CSP) only. Best effort support is available for the CSP for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage with All-inclusive Single or Multi-System software and an active HPE Pointnext support agreement. Since HPE Pointnext support for the CSP is best effort only, any other support levels such as Warranty, Foundation Care, Proactive Care, Proactive Care Advanced and Datacenter Care do not apply. 
Best effort response times are based on local standard business days and working hours. If your location is outside the customary service zone, response time may be longer. HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Hardware Contract Type Phone Number Warranty and Foundation Care 800-633-3600 Proactive Care (PC) 866-211-5211 Datacenter Care (DC) 888-751-2149","title":"Support"},{"location":"legal/support/index.html#statement","text":"Software components documented on SCOD are generally covered by a valid support contract on the HPE product being used. Terms and conditions may be found in the support contract. Please reach out to your official HPE representative or HPE partner for any uncertainties.","title":"Statement"},{"location":"legal/support/index.html#csi_info_metrics_provider_support","text":"The HPE CSI Info Metrics Provider for Prometheus is supported by HPE when used with HPE storage arrays on valid support contracts. Send email to support@nimblestorage.com to get started with any issue that requires assistance. Engage your HPE representative for other means to contact HPE Storage support directly.","title":"CSI Info Metrics Provider support"},{"location":"legal/support/index.html#container_storage_providers","text":"Each Container Storage Provider (CSP) uses its own official support routes to resolve any issues with the HPE CSI Driver for Kubernetes and the respective CSP.","title":"Container Storage Providers"},{"location":"legal/support/index.html#hpe_alletra_50006000_and_nimble_storage_container_storage_provider_support","text":"This software is supported by HPE when used with HPE Nimble Storage arrays on valid support contracts. Please send an email to support@nimblestorage.com to get started with any issue you might need assistance with. Engage your HPE representative for other ways to get in touch with Nimble support directly. The HPE Alletra 5000/6000 and Nimble Storage organization has made a commitment to our customers to exert reasonable effort in supporting any industry-standard configuration. We do not limit our customers to only what is explicitly listed on SPOCK or the Validated Configuration Matrix (VCM), which lists tested or verified configurations (what the HPE Alletra 5000/6000 and Nimble Storage organization commonly refers to as \"Qualified\" Configurations). Essentially, this means that we will exert reasonable effort to support any industry-standard configuration up to the point where we find, or become aware of, an issue that requires some other course of action * . Example cases where support may not be possible include: Configurations explicitly called out by SPOCK or the VCM as known not to work properly An OS (legacy or otherwise) that does not support or contain functionality needed by the customer A vendor that does not or will not support the requested functionality (either through a violation of their Best Practices or because the product is End-of-Life/Support with that vendor) * = In the event where other vendors need to be consulted, the HPE Nimble Support team will not disengage from the Support Action. 
HPE Nimble Support will continue to partner with the customer and other vendors to search for the correct answers to the issue.","title":"HPE Alletra 5000/6000 and Nimble Storage Container Storage Provider support"},{"location":"legal/support/index.html#hpe_alletra_storage_mp_alletra_9000_and_primera_and_3par_container_storage_provider_support","text":"Limited to the HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Storage Container Storage Provider (CSP) only. Best effort support is available for the CSP for HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Storage with All-inclusive Single or Multi-System software and an active HPE Pointnext support agreement. Since HPE Pointnext support for the CSP is best effort only, any other support levels such as Warranty, Foundation Care, Proactive Care, Proactive Care Advanced and Datacenter Care do not apply. Best effort response times are based on local standard business days and working hours. If your location is outside the customary service zone, response time may be longer. HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Hardware Contract Type Phone Number Warranty and Foundation Care 800-633-3600 Proactive Care (PC) 866-211-5211 Datacenter Care (DC) 888-751-2149","title":"HPE Alletra Storage MP, Alletra 9000 and Primera and 3PAR Container Storage Provider support"},{"location":"partners/index.html","text":"Partner Ecosystems \u00b6 HPE Ezmeral Runtime Enterprise Amazon EKS Anywhere Canonical Cohesity Commvault Kasten by Veeam Mirantis Red Hat OpenShift SUSE Harvester SUSE Rancher Tanzu Kubernetes Grid Integrated VMware Tip The HPE CSI Driver for Kubernetes will work on any CNCF certified Kubernetes distribution. Verify compute node OS and Kubernetes version in the Compatibility and Support table .","title":"Partner Ecosystems"},{"location":"partners/index.html#partner_ecosystems","text":"HPE Ezmeral Runtime Enterprise Amazon EKS Anywhere Canonical Cohesity Commvault Kasten by Veeam Mirantis Red Hat OpenShift SUSE Harvester SUSE Rancher Tanzu Kubernetes Grid Integrated VMware Tip The HPE CSI Driver for Kubernetes will work on any CNCF certified Kubernetes distribution. Verify compute node OS and Kubernetes version in the Compatibility and Support table .","title":"Partner Ecosystems"},{"location":"partners/amazon_eks_anywhere/index.html","text":"Overview \u00b6 Amazon Elastic Kubernetes Service (EKS) Anywhere allows customers to deploy Amazon EKS-D (Amazon Elastic Kubernetes Service Distro) on their private or non-AWS clouds. AWS users familiar with the ecosystem gain the ability to cross clouds and manage their Kubernetes estate in a single pane of glass. This documentation outlines the limitations and considerations for using the HPE CSI Driver for Kubernetes when deployed on EKS-D. Overview Limitations Bottlerocket OS EKS Anywhere on vSphere Installation Considerations Limitations \u00b6 These limitations may be expanded or reduced in future releases of either Amazon EKS Anywhere or the HPE CSI Driver. Bottlerocket OS \u00b6 The default Linux distribution AWS favors is Bottlerocket OS, which is a container-optimized distribution. Due to the slim host library and binary surface, Bottlerocket OS does not include the necessary utilities to support SAN storage. This limitation can be tracked in this GitHub issue . Note Any other OS that is supported by EKS-A and listed in the Compatibility and Support table is supported by the HPE CSI Driver. 
EKS Anywhere on vSphere \u00b6 Only iSCSI is supported, as the HPE CSI Driver does not support NPIV, which is required for virtual Fibre Channel host bus adapters (HBA). More information on this limitation is available in the VMware section on SCOD. Because VSphereMachineConfig VM templates only allow a single vNIC, no multipath redundancy is available to the host. Ensure network fault tolerance is available to the VM according to VMware best practices. Also keep in mind that the backend storage system needs to have a data interface in the same subnet, as the HPE CSI Driver will not try to discover targets over routed networks. Tip The vSphere CSI Driver and HPE CSI Driver may co-exist in the same cluster, but make sure there's only one default StorageClass configured before creating PersistentVolumeClaims . Please see the official Kubernetes documentation on how to change the default StorageClass . Installation Considerations \u00b6 EKS-D is a CNCF-compliant Kubernetes distribution and no special steps are required to deploy the HPE CSI Driver for Kubernetes. It's crucial to ensure that the compute nodes run a supported OS and that the version of Kubernetes is supported by the HPE CSI Driver. Check the Compatibility and Support table for more information. Proceed to installation documentation: HPE CSI Driver for Kubernetes Helm chart on Artifact Hub (recommended) HPE CSI Operator for Kubernetes on OperatorHub.io Advanced Install using YAML manifests","title":"Amazon EKS Anywhere"},{"location":"partners/amazon_eks_anywhere/index.html#overview","text":"Amazon Elastic Kubernetes Service (EKS) Anywhere allows customers to deploy Amazon EKS-D (Amazon Elastic Kubernetes Service Distro) on their private or non-AWS clouds. AWS users familiar with the ecosystem gain the ability to cross clouds and manage their Kubernetes estate in a single pane of glass. This documentation outlines the limitations and considerations for using the HPE CSI Driver for Kubernetes when deployed on EKS-D. Overview Limitations Bottlerocket OS EKS Anywhere on vSphere Installation Considerations","title":"Overview"},{"location":"partners/amazon_eks_anywhere/index.html#limitations","text":"These limitations may be expanded or reduced in future releases of either Amazon EKS Anywhere or the HPE CSI Driver.","title":"Limitations"},{"location":"partners/amazon_eks_anywhere/index.html#bottlerocket_os","text":"The default Linux distribution AWS favors is Bottlerocket OS, which is a container-optimized distribution. Due to the slim host library and binary surface, Bottlerocket OS does not include the necessary utilities to support SAN storage. This limitation can be tracked in this GitHub issue . Note Any other OS that is supported by EKS-A and listed in the Compatibility and Support table is supported by the HPE CSI Driver.","title":"Bottlerocket OS"},
{"location":"partners/amazon_eks_anywhere/index.html#eks_anywhere_on_vsphere","text":"Only iSCSI is supported, as the HPE CSI Driver does not support NPIV, which is required for virtual Fibre Channel host bus adapters (HBA). More information on this limitation is available in the VMware section on SCOD. Because VSphereMachineConfig VM templates only allow a single vNIC, no multipath redundancy is available to the host. Ensure network fault tolerance is available to the VM according to VMware best practices. Also keep in mind that the backend storage system needs to have a data interface in the same subnet, as the HPE CSI Driver will not try to discover targets over routed networks. Tip The vSphere CSI Driver and HPE CSI Driver may co-exist in the same cluster, but make sure there's only one default StorageClass configured before creating PersistentVolumeClaims . Please see the official Kubernetes documentation on how to change the default StorageClass .","title":"EKS Anywhere on vSphere"},{"location":"partners/amazon_eks_anywhere/index.html#installation_considerations","text":"EKS-D is a CNCF-compliant Kubernetes distribution and no special steps are required to deploy the HPE CSI Driver for Kubernetes. It's crucial to ensure that the compute nodes run a supported OS and that the version of Kubernetes is supported by the HPE CSI Driver. Check the Compatibility and Support table for more information. Proceed to installation documentation: HPE CSI Driver for Kubernetes Helm chart on Artifact Hub (recommended) HPE CSI Operator for Kubernetes on OperatorHub.io Advanced Install using YAML manifests","title":"Installation Considerations"},{"location":"partners/canonical/index.html","text":"Overview \u00b6 \"Canonical Kubernetes is pure upstream and works on any cloud, from bare metal to public and edge. Deploy single node and multi-node clusters with Charmed Kubernetes and MicroK8s to support container orchestration, from testing to production. Both distributions bring the latest innovations from the Kubernetes community within a week of upstream release, allowing for time to learn, experiment and upskill.\" 1 1 = quote from Canonical Kubernetes . HPE supports Ubuntu LTS releases along with recent upstream versions of Kubernetes for the HPE CSI Driver. As long as the CSI driver is installed on a supported host OS with a CNCF certified Kubernetes distribution, the solution is supported. Both Charmed Kubernetes on private cloud and MicroK8s for edge have been field tested with the HPE CSI Driver for Kubernetes by HPE. Overview Charmed Kubernetes Notes on VMware vSphere Installing the HPE CSI Driver on Charmed Kubernetes MicroK8s Notes on using MicroK8s with the HPE CSI Driver Installing the HPE CSI Driver on MicroK8s Integration Guides Charmed Kubernetes \u00b6 Charmed Kubernetes is deployed with the Juju orchestration engine. Juju is capable of deploying and managing the full life-cycle of CNCF certified Kubernetes on various infrastructure providers, both private and public. Charmed Kubernetes utilizes Ubuntu LTS for the node OS. It's most relevant for HPE CSI Driver users when deployed on Canonical MAAS and VMware vSphere. Notes on VMware vSphere \u00b6 It's important to keep in mind that only iSCSI is supported with the HPE CSI Driver on vSphere. If Fibre Channel is being used, consider deploying the vSphere CSI Driver instead. When deploying Charmed Kubernetes with Juju, machines may only be deployed with a \"primary-network\" and an \"external-network\" option. The \"primary-network\" is used primarily for the Juju controller but may be dual-purposed. In this situation, the machines will end up sharing the subnet with iSCSI data traffic (using a single network path) and application or control-plane traffic, which is sub-optimal from a performance and data availability (only one network path for iSCSI) perspective. Note Canonical MAAS has not been formally tested at this time, so no specific guidance can be provided, but the solution is supported by HPE. Installing the HPE CSI Driver on Charmed Kubernetes \u00b6 No special considerations need to be taken when installing the HPE CSI Driver on Charmed Kubernetes. It's recommended to use the Helm chart. HPE CSI Driver for Kubernetes Helm chart on ArtifactHub. When the chart is installed, Add an HPE Storage Backend . MicroK8s \u00b6 MicroK8s is an opinionated, lightweight, fully certified CNCF Kubernetes distribution. It's easy to install and manage. Notes on using MicroK8s with the HPE CSI Driver \u00b6 MicroK8s is only supported by the HPE CSI Driver on Ubuntu LTS releases at this time. It will most likely work on other Linux distributions. Important Older versions of MicroK8s did not allow the CSI driver to run privileged Pods, and some tweaking may be needed in the MicroK8s controller-manager. Please use a recent version of MicroK8s and Ubuntu LTS to avoid problems. Installing the HPE CSI Driver on MicroK8s \u00b6 As MicroK8s is installed with confinement using snap , the \"kubeletRootDir\" needs to be configured when installing the Helm chart or Operator. Advanced install with YAML is strongly discouraged. Install the Helm chart: microk8s helm install --create-namespace \\ --set kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet \\ -n hpe-storage my-hpe-csi-driver hpe-storage/hpe-csi-driver Go ahead and Add an HPE Storage Backend . Hint When installing the chart on Linux distributions other than Ubuntu LTS, the \"kubeletRootDir\" will most likely differ. Integration Guides \u00b6 HPE and Canonical have partnered to create integration guides with Charmed Kubernetes for the different storage backends. Charmed Kubernetes integration with HPE Alletra Storage 9000 by Canonical and HPE Charmed Kubernetes integration with HPE Alletra Storage 6000 by Canonical and HPE These integration guides are also available on ubuntu.com/engage .","title":"Canonical"},
{"location":"partners/canonical/index.html#overview","text":"\"Canonical Kubernetes is pure upstream and works on any cloud, from bare metal to public and edge. Deploy single node and multi-node clusters with Charmed Kubernetes and MicroK8s to support container orchestration, from testing to production. Both distributions bring the latest innovations from the Kubernetes community within a week of upstream release, allowing for time to learn, experiment and upskill.\" 1 1 = quote from Canonical Kubernetes . HPE supports Ubuntu LTS releases along with recent upstream versions of Kubernetes for the HPE CSI Driver. As long as the CSI driver is installed on a supported host OS with a CNCF certified Kubernetes distribution, the solution is supported. Both Charmed Kubernetes on private cloud and MicroK8s for edge have been field tested with the HPE CSI Driver for Kubernetes by HPE. Overview Charmed Kubernetes Notes on VMware vSphere Installing the HPE CSI Driver on Charmed Kubernetes MicroK8s Notes on using MicroK8s with the HPE CSI Driver Installing the HPE CSI Driver on MicroK8s Integration Guides","title":"Overview"},{"location":"partners/canonical/index.html#charmed_kubernetes","text":"Charmed Kubernetes is deployed with the Juju orchestration engine. Juju is capable of deploying and managing the full life-cycle of CNCF certified Kubernetes on various infrastructure providers, both private and public. Charmed Kubernetes utilizes Ubuntu LTS for the node OS. It's most relevant for HPE CSI Driver users when deployed on Canonical MAAS and VMware vSphere.","title":"Charmed Kubernetes"},{"location":"partners/canonical/index.html#notes_on_vmware_vsphere","text":"It's important to keep in mind that only iSCSI is supported with the HPE CSI Driver on vSphere. If Fibre Channel is being used, consider deploying the vSphere CSI Driver instead. When deploying Charmed Kubernetes with Juju, machines may only be deployed with a \"primary-network\" and an \"external-network\" option. The \"primary-network\" is used primarily for the Juju controller but may be dual-purposed. In this situation, the machines will end up sharing the subnet with iSCSI data traffic (using a single network path) and application or control-plane traffic, which is sub-optimal from a performance and data availability (only one network path for iSCSI) perspective. Note Canonical MAAS has not been formally tested at this time, so no specific guidance can be provided, but the solution is supported by HPE.","title":"Notes on VMware vSphere"},{"location":"partners/canonical/index.html#installing_the_hpe_csi_driver_on_charmed_kubernetes","text":"No special considerations need to be taken when installing the HPE CSI Driver on Charmed Kubernetes. It's recommended to use the Helm chart. HPE CSI Driver for Kubernetes Helm chart on ArtifactHub. When the chart is installed, Add an HPE Storage Backend .","title":"Installing the HPE CSI Driver on Charmed Kubernetes"},
{"location":"partners/canonical/index.html#microk8s","text":"MicroK8s is an opinionated, lightweight, fully certified CNCF Kubernetes distribution. It's easy to install and manage.","title":"MicroK8s"},{"location":"partners/canonical/index.html#notes_on_using_microk8s_with_the_hpe_csi_driver","text":"MicroK8s is only supported by the HPE CSI Driver on Ubuntu LTS releases at this time. It will most likely work on other Linux distributions. Important Older versions of MicroK8s did not allow the CSI driver to run privileged Pods, and some tweaking may be needed in the MicroK8s controller-manager. Please use a recent version of MicroK8s and Ubuntu LTS to avoid problems.","title":"Notes on using MicroK8s with the HPE CSI Driver"},{"location":"partners/canonical/index.html#installing_the_hpe_csi_driver_on_microk8s","text":"As MicroK8s is installed with confinement using snap , the \"kubeletRootDir\" needs to be configured when installing the Helm chart or Operator. Advanced install with YAML is strongly discouraged. Install the Helm chart: microk8s helm install --create-namespace \\ --set kubeletRootDir=/var/snap/microk8s/common/var/lib/kubelet \\ -n hpe-storage my-hpe-csi-driver hpe-storage/hpe-csi-driver Go ahead and Add an HPE Storage Backend . Hint When installing the chart on Linux distributions other than Ubuntu LTS, the \"kubeletRootDir\" will most likely differ.","title":"Installing the HPE CSI Driver on MicroK8s"},{"location":"partners/canonical/index.html#integration_guides","text":"HPE and Canonical have partnered to create integration guides with Charmed Kubernetes for the different storage backends. Charmed Kubernetes integration with HPE Alletra Storage 9000 by Canonical and HPE Charmed Kubernetes integration with HPE Alletra Storage 6000 by Canonical and HPE These integration guides are also available on ubuntu.com/engage .","title":"Integration Guides"},{"location":"partners/cohesity/index.html","text":"Cohesity \u00b6 Hewlett Packard Enterprise and Cohesity offer an integrated approach to solve customer problems commonly found with containerized workloads. HPE Alletra\u2014leveraging the HPE CSI Driver for Kubernetes\u2014together with Cohesity's comprehensive data protection capabilities, empowers organizations to overcome challenges associated with containerized environments. 
This guide will demonstrate the steps to integrate Cohesity into a Kubernetes cluster and how to configure a protection policy to back up an application Namespace , a Kubernetes resource type. It proceeds to show that a backup can be restored to a new Namespace , useful for providing a test/development environment without affecting the original application Namespace . External HPE Resources: Data Protection for Kubernetes using Cohesity with HPE Alletra ( PDF ) Protect your containerized applications with HPE and Cohesity ( Blog ) Cohesity solutions are available through HPE Complete . Cohesity Solution Overview Diagram Environment and Preparations Integrate Cohesity into Kubernetes Configure Namespace-level Application Backup Demo: Clone a Test/Development Environment by Restoring a Backup Solution Overview Diagram \u00b6 Environment and Preparations \u00b6 The HPE CSI Driver has been validated on Cohesity DataProtect v7.0u1. Check that the HPE CSI Driver and Cohesity software versions are compatible with the Kubernetes version being used. This environment assumes the HPE CSI Driver for Kubernetes is deployed in the Kubernetes cluster, an Alletra storage backend has been configured, and a default StorageClass has been defined. Review Cohesity's \" Plan and Prepare \" documentation to accomplish the following: Firewall considerations. Kubernetes ServiceAccount with cluster-admin permissions. Extract Bearer token ID from above ServiceAccount Obtain Cohesity Datamover (download) and push to a local repository or public registry. Note Cohesity only supports the backup of user-created application Namespaces and does not support the backup of infrastructure Namespaces such as kube-system , etc. Integrate Cohesity into Kubernetes \u00b6 Review Cohesity's \" Register and Manage Kubernetes Cluster \" documentation to integrate Cohesity into your Kubernetes cluster. Below is an example screenshot of the Register Kubernetes Source dialog: After the integration wizard is submitted, see the Post-Registration task documentation to verify Velero and datamover pod availability. Note The latest versions of Kubernetes, although present in the Cohesity support matrix , may still require an override from Cohesity support. Configure Namespace-level Application Backup \u00b6 A Namespace containing a WordPress application will be protected in this example. It contains a variety of Kubernetes resources and objects including: Configuration and Storage: PersistentVolumeClaim , ConfigMap , and Secret Service and ServiceAccount Workloads: Deployment , ReplicaSet and StatefulSet Review the Protect Kubernetes Namespaces documentation from Cohesity. Create a new protection policy or use an available default policy. Additionally, see the Manage the Kubernetes Backup Configuration documentation to add/remove Namespaces to a protection group, adjust Auto Protect settings, modify the Protection Policy, and trigger an on-demand run. See the screenshot below for an example backup Run details view. Demo: Clone a Test/Development Environment by Restoring a Backup \u00b6 Review the Cohesity documentation for Recover Kubernetes Cluster . Cohesity notes, at time of writing, that granular-level recovery of Namespace resource types is not supported. Consider the following when defining a recovery operation: Select a protection group or individual Namespace . If a protection group is chosen, multiple Namespace resources could be affected on recovery. 
If any previously backed up objects exist in the destination, a restore operation will not overwrite them. For applications deployed by Helm chart, recovery operations applied to new clusters or Namespaces will not be managed with Helm. If an alternate Kubernetes cluster is chosen ( New Location in the UI), be sure that the cluster has access to the same Kubernetes StorageClass as the backup\u2019s source cluster. Note Protection groups and individual Namespace resources appear in the same list. Available Namespaces are denoted with the Kubernetes ship wheel icon. For this example, a WordPress Namespace backup will be restored to the source Kubernetes cluster but under a new Namespace with a \"debug-\" prefix (see below). This application can run alongside and separately from the parent application. After the recovery process is complete we can review and compare the associated objects between the two Namespaces . In particular, names are similar but discrete PersistentVolumes , IPs and Services exist for each Namespace . $ diff <(kubectl get all,pvc -n wordpress-orig) <(kubectl get all,pvc -n debug-wordpress-orig) 2,3c2,3 - pod/wordpress-577cc47468-mbg2n 1/1 Running 0 171m - pod/wordpress-mariadb-0 1/1 Running 0 171m --- + pod/wordpress-577cc47468-mbg2n 1/1 Running 0 57m + pod/wordpress-mariadb-0 1/1 Running 0 57m 6,7c6,7 - service/wordpress LoadBalancer 10.98.47.101 80:30657/TCP,443:30290/TCP 171m - service/wordpress-mariadb ClusterIP 10.104.190.60 3306/TCP 171m --- + service/wordpress LoadBalancer 10.109.247.83 80:31425/TCP,443:31002/TCP 57m + service/wordpress-mariadb ClusterIP 10.101.77.139 3306/TCP 57m 10c10 - deployment.apps/wordpress 1/1 1 1 171m --- + deployment.apps/wordpress 1/1 1 1 57m 13c13 - replicaset.apps/wordpress-577cc47468 1 1 1 171m --- + replicaset.apps/wordpress-577cc47468 1 1 1 57m 16c16 - statefulset.apps/wordpress-mariadb 1/1 171m --- + statefulset.apps/wordpress-mariadb 1/1 57m 19,20c19,20 - persistentvolumeclaim/data-wordpress-mariadb-0 Bound pvc-4b3222c3-f71f-427f-847b-d6d0c5e019a4 8Gi RWO a9060-std 171m - persistentvolumeclaim/wordpress Bound pvc-72158104-06ae-4547-9f80-d551abd7cda5 10Gi RWO a9060-std 171m --- + persistentvolumeclaim/data-wordpress-mariadb-0 Bound pvc-306164a8-3334-48ac-bdee-273ac9a97403 8Gi RWO a9060-std 59m + persistentvolumeclaim/wordpress Bound pvc-17a55296-d0fb-44c2-968b-09c6ffc4abc9 10Gi RWO a9060-std 59m Note Above links are external to docs.cohesity.com and require a MyCohesity account.","title":"Cohesity"},{"location":"partners/cohesity/index.html#cohesity","text":"Hewlett Packard Enterprise and Cohesity offer an integrated approach to solve customer problems commonly found with containerized workloads. HPE Alletra\u2014leveraging the HPE CSI Driver for Kubernetes\u2014together with Cohesity's comprehensive data protection capabilities, empower organizations to overcome challenges associated with containerized environments. This guide will demonstrate the steps to integrate Cohesity into a Kubernetes cluster and how to configure a protection policy to back up an application Namespace , a Kubernetes resource type. It proceeds to show that a backup can be restored to a new Namespace , useful for providing a test/development environment without affecting the original application Namespace . External HPE Resources: Data Protection for Kubernetes using Cohesity with HPE Alletra ( PDF ) Protect your containerized applications with HPE and Cohesity ( Blog ) Cohesity solutions are available through HPE Complete . 
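The \"Plan and Prepare\" steps above call for a Kubernetes ServiceAccount with cluster-admin permissions and its bearer token. A minimal sketch, assuming an arbitrary account name in the kube-system Namespace (on Kubernetes v1.24 and newer a token Secret has to be created explicitly):
kubectl create serviceaccount cohesity-backup -n kube-system
kubectl create clusterrolebinding cohesity-backup-admin --clusterrole=cluster-admin --serviceaccount=kube-system:cohesity-backup
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: cohesity-backup-token
  annotations:
    kubernetes.io/service-account.name: cohesity-backup
type: kubernetes.io/service-account-token
EOF
kubectl get secret cohesity-backup-token -n kube-system -o jsonpath='{.data.token}' | base64 -d
The decoded token is what gets supplied during Cohesity source registration.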
Cohesity Solution Overview Diagram Environment and Preparations Integrate Cohesity into Kubernetes Configure Namespace-level Application Backup Demo: Clone a Test/Development Environment by Restoring a Backup","title":"Cohesity"},{"location":"partners/cohesity/index.html#solution_overview_diagram","text":"","title":"Solution Overview Diagram"},{"location":"partners/cohesity/index.html#environment_and_preparations","text":"The HPE CSI Driver has been validated on Cohesity DataProtect v7.0u1. Check that the HPE CSI Driver and Cohesity software versions are compatible with the Kubernetes version being used. This environment assumes the HPE CSI Driver for Kubernetes is deployed in the Kubernetes cluster, an Alletra storage backend has been configured, and a default StorageClass has been defined. Review Cohesity's \" Plan and Prepare \" documentation to accomplish the following: Firewall considerations. Kubernetes ServiceAccount with cluster-admin permissions. Extract Bearer token ID from above ServiceAccount Obtain Cohesity Datamover (download) and push to a local repository or public registry. Note Cohesity only supports the backup of user-created application Namespaces and does not support the backup of infrastructure Namespaces such as kube-system , etc.","title":"Environment and Preparations"},{"location":"partners/cohesity/index.html#integrate_cohesity_into_kubernetes","text":"Review Cohesity's \" Register and Manage Kubernetes Cluster \" documentation to integrate Cohesity into your Kubernetes cluster. Below is an example screenshot of the Register Kubernetes Source dialog: After the integration wizard is submitted, see the Post-Registration task documentation to verify Velero and datamover pod availability. Note The latest versions of Kubernetes, although present in the Cohesity support matrix , may still require an override from Cohesity support.","title":"Integrate Cohesity into Kubernetes"},{"location":"partners/cohesity/index.html#configure_namespace-level_application_backup","text":"A Namespace containing a WordPress application will be protected in this example. It contains a variety of Kubernetes resources and objects including: Configuration and Storage: PersistentVolumeClaim , ConfigMap , and Secret Service and ServiceAccount Workloads: Deployment , ReplicaSet and StatefulSet Review the Protect Kubernetes Namespaces documentation from Cohesity. Create a new protection policy or use an available default policy. Additionally, see the Manage the Kubernetes Backup Configuration documentation to add/remove Namespaces to a protection group, adjust Auto Protect settings, modify the Protection Policy, and trigger an on-demand run. See the screenshot below for an example backup Run details view.","title":"Configure Namespace-level Application Backup"},{"location":"partners/cohesity/index.html#demo_clone_a_testdevelopment_environment_by_restoring_a_backup","text":"Review the Cohesity documentation for Recover Kubernetes Cluster . Cohesity notes, at time of writing, that granular-level recovery of Namespace resource types is not supported. Consider the following when defining a recovery operation: Select a protection group or individual Namespace . If a protection group is chosen, multiple Namespace resources could be affected on recovery. If any previously backed up objects exist in the destination, a restore operation will not overwrite them. For applications deployed by Helm chart, recovery operations applied to new clusters or Namespaces will not be managed with Helm. 
If an alternate Kubernetes cluster is chosen ( New Location in the UI), be sure that the cluster has access to the same Kubernetes StorageClass as the backup\u2019s source cluster. Note Protection groups and individual Namespace resources appear in the same list. Available Namespaces are denoted with the Kubernetes ship wheel icon. For this example, a WordPress Namespace backup will be restored to the source Kubernetes cluster but under a new Namespace with a \"debug-\" prefix (see below). This application can run alongside and separately from the parent application. After the recovery process is complete we can review and compare the associated objects between the two Namespaces . In particular, names are similar but discrete PersistentVolumes , IPs and Services exist for each Namespace . $ diff <(kubectl get all,pvc -n wordpress-orig) <(kubectl get all,pvc -n debug-wordpress-orig) 2,3c2,3 - pod/wordpress-577cc47468-mbg2n 1/1 Running 0 171m - pod/wordpress-mariadb-0 1/1 Running 0 171m --- + pod/wordpress-577cc47468-mbg2n 1/1 Running 0 57m + pod/wordpress-mariadb-0 1/1 Running 0 57m 6,7c6,7 - service/wordpress LoadBalancer 10.98.47.101 80:30657/TCP,443:30290/TCP 171m - service/wordpress-mariadb ClusterIP 10.104.190.60 3306/TCP 171m --- + service/wordpress LoadBalancer 10.109.247.83 80:31425/TCP,443:31002/TCP 57m + service/wordpress-mariadb ClusterIP 10.101.77.139 3306/TCP 57m 10c10 - deployment.apps/wordpress 1/1 1 1 171m --- + deployment.apps/wordpress 1/1 1 1 57m 13c13 - replicaset.apps/wordpress-577cc47468 1 1 1 171m --- + replicaset.apps/wordpress-577cc47468 1 1 1 57m 16c16 - statefulset.apps/wordpress-mariadb 1/1 171m --- + statefulset.apps/wordpress-mariadb 1/1 57m 19,20c19,20 - persistentvolumeclaim/data-wordpress-mariadb-0 Bound pvc-4b3222c3-f71f-427f-847b-d6d0c5e019a4 8Gi RWO a9060-std 171m - persistentvolumeclaim/wordpress Bound pvc-72158104-06ae-4547-9f80-d551abd7cda5 10Gi RWO a9060-std 171m --- + persistentvolumeclaim/data-wordpress-mariadb-0 Bound pvc-306164a8-3334-48ac-bdee-273ac9a97403 8Gi RWO a9060-std 59m + persistentvolumeclaim/wordpress Bound pvc-17a55296-d0fb-44c2-968b-09c6ffc4abc9 10Gi RWO a9060-std 59m Note Above links are external to docs.cohesity.com and require a MyCohesity account.","title":"Demo: Clone a Test/Development Environment by Restoring a Backup"},{"location":"partners/commvault/index.html","text":"Overview \u00b6 The Commvault intelligent data management platform provides Kubernetes-native protection, application mobility, and disaster recovery for containerized applications. Combined with Commvault Command Center\u2122, Commvault provides enterprise IT operations and DevOps teams an easy-to-use, self-service dashboard for managing the protection of Kubernetes. HPE and Commvault collaborate continuously to deliver assets relevant to our joint customers. Data protection for Kubernetes using Commvault Backup & Recovery, HPE Apollo Servers, and HPE CSI Driver for Kubernetes ( PDF ) Data Protection for Kubernetes using Commvault Backup & Recovery with HPE Alletra ( YouTube ) Learn more about HPE and Commvault's partnership here: https://www.commvault.com/supported-technologies/hpe . Overview Pre-requisites Permissions Cluster requirements Configure Kubernetes protection Backup and Restores Pre-requisites \u00b6 The HPE CSI Driver has been validated on Commvault Complete Backup and Recovery 2022E. Check that the HPE CSI Driver and Commvault software versions are compatible with the Kubernetes version being used. 
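One way to check the deployed CSI driver version against the compatibility tables is to read the controller image tags; the object names below assume a default installation in the \"hpe-storage\" Namespace:
kubectl get deploy hpe-csi-controller -n hpe-storage -o jsonpath='{.spec.template.spec.containers[*].image}'
kubectl version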
Permissions \u00b6 This guide assumes you have administrative access to Commvault Command Center and administrator access to a Kubernetes cluster with kubectl . Refer to the Creating a Service Account for Kubernetes Authentication documentation to define a serviceaccount and clusterrolebinding with cluster-admin permissions. Cluster requirements \u00b6 The cluster needs to be running Kubernetes 1.22 or later and have the CSI snapshot CustomResourceDefinitions (CRDs) and the CSI external snapshotter deployed. Follow the guides available on SCOD to: Enable CSI snapshots Using CSI snapshots Note The rest of this guide assumes the default VolumeSnapshotClass and VolumeSnapshots are functional within the cluster with a compatible Kubernetes snapshot API level between the CSI driver and Commvault. Configure Kubernetes protection \u00b6 To configure data protection for Kubernetes, follow the official Commvault documentation and ensure the version matches the software version in your environment. As a summary, complete the following: Core Setup Wizard to complete Commvault deployment Review System Requirements for Kubernetes Complete the Kubernetes Guided Setup Backup and Restores \u00b6 To perform snapshot and restore operations through Commvault using the HPE CSI Driver for Kubernetes, please refer to the Commvault documentation. Backup Restores Note Above links are external to documentation.commvault.com .","title":"Commvault"},{"location":"partners/commvault/index.html#overview","text":"The Commvault intelligent data management platform provides Kubernetes-native protection, application mobility, and disaster recovery for containerized applications. Combined with Commvault Command Center\u2122, Commvault provides enterprise IT operations and DevOps teams an easy-to-use, self-service dashboard for managing the protection of Kubernetes. HPE and Commvault collaborate continuously to deliver assets relevant to our joint customers. Data protection for Kubernetes using Commvault Backup & Recovery, HPE Apollo Servers, and HPE CSI Driver for Kubernetes ( PDF ) Data Protection for Kubernetes using Commvault Backup & Recovery with HPE Alletra ( YouTube ) Learn more about HPE and Commvault's partnership here: https://www.commvault.com/supported-technologies/hpe . Overview Pre-requisites Permissions Cluster requirements Configure Kubernetes protection Backup and Restores","title":"Overview"},{"location":"partners/commvault/index.html#pre-requisites","text":"The HPE CSI Driver has been validated on Commvault Complete Backup and Recovery 2022E. Check that the HPE CSI Driver and Commvault software versions are compatible with the Kubernetes version being used.","title":"Pre-requisites"},{"location":"partners/commvault/index.html#permissions","text":"This guide assumes you have administrative access to Commvault Command Center and administrator access to a Kubernetes cluster with kubectl . Refer to the Creating a Service Account for Kubernetes Authentication documentation to define a serviceaccount and clusterrolebinding with cluster-admin permissions.","title":"Permissions"},{"location":"partners/commvault/index.html#cluster_requirements","text":"The cluster needs to be running Kubernetes 1.22 or later and have the CSI snapshot CustomResourceDefinitions (CRDs) and the CSI external snapshotter deployed. 
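A quick way to confirm that the snapshot CRDs and the common snapshot-controller are in place (the controller's Namespace varies by distribution):
kubectl get crd | grep snapshot.storage.k8s.io
kubectl get pods --all-namespaces | grep snapshot-controller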
Follow the guides available on SCOD to: Enable CSI snapshots Using CSI snapshots Note The rest of this guide assumes the default VolumeSnapshotClass and VolumeSnapshots are functional within the cluster with a compatible Kubernetes snapshot API level between the CSI driver and Commvault.","title":"Cluster requirements"},{"location":"partners/commvault/index.html#configure_kubernetes_protection","text":"To configure data protection for Kubernetes, follow the official Commvault documentation and ensure the version matches the software version in your environment. As a summary, complete the following: Core Setup Wizard to complete Commvault deployment Review System Requirements for Kubernetes Complete the Kubernetes Guided Setup","title":"Configure Kubernetes protection"},{"location":"partners/commvault/index.html#backup_and_restores","text":"To perform snapshot and restore operations through Commvault using the HPE CSI Driver for Kubernetes, please refer to the Commvault documentation. Backup Restores Note Above links are external to documentation.commvault.com .","title":"Backup and Restores"},{"location":"partners/kasten/index.html","text":"Overview \u00b6 Kasten K10 by Veeam is a data management platform designed to run natively on Kubernetes to protect applications. K10 integrates seamlessly with the HPE CSI Driver for Kubernetes thanks to the native support for CSI VolumeSnapshots and VolumeSnapshotClasses . HPE and Veeam have a long-standing alliance. Read about the extended partnership with Kasten in this blog post . Tip All the steps below are captured in a tutorial available on YouTube and in the SCOD Video Gallery . Overview Prerequisites Annotate the VolumeSnapshotClass Installing Kasten K10 Snapshots and restores Prerequisites \u00b6 The cluster needs to be running Kubernetes 1.17 or later and have the CSI snapshot CustomResourceDefinitions (CRDs) and the CSI snapshot-controller deployed. Follow the guides available on SCOD to: Enable CSI snapshots Using CSI snapshots Note The rest of this guide assumes a default VolumeSnapshotClass and VolumeSnapshots are functional on the cluster. Annotate the VolumeSnapshotClass \u00b6 In order to allow K10 to perform snapshots and restores using the VolumeSnapshotClass , it needs an annotation. Assuming we have a default VolumeSnapshotClass named \"hpe-snapshot\": kubectl annotate volumesnapshotclass hpe-snapshot k10.kasten.io/is-snapshot-class=true Installing Kasten K10 \u00b6 Kasten K10 installs in its own namespace using a Helm chart. It also assumes there's a performant default StorageClass on the cluster to serve the various PersistentVolumeClaims needed for the controllers. Pre-flight checks and prerequisites Install K10 on Kubernetes Note Above links are external to docs.kasten.io . Snapshots and restores \u00b6 Kasten K10 provides the user with a graphical interface and dashboard to schedule and perform data management operations. There's also an API that can be manipulated with kubectl using CRDs. To perform snapshot and restore operations through Kasten K10 using the HPE CSI Driver for Kubernetes, please refer to the Kasten K10 documentation. Accessing K10 Using K10 Note Above links are external to docs.kasten.io .","title":"Kasten by Veeam"},{"location":"partners/kasten/index.html#overview","text":"Kasten K10 by Veeam is a data management platform designed to run natively on Kubernetes to protect applications. 
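Rather than annotating an existing class, the annotation can also be baked into the VolumeSnapshotClass manifest itself. An illustrative example, assuming the HPE CSI Driver's csi.hpe.com driver name and a backend Secret named \"hpe-backend\" in the \"hpe-storage\" Namespace:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: \"true\"
    k10.kasten.io/is-snapshot-class: \"true\"
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage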
K10 integrates seamlessly with the HPE CSI Driver for Kubernetes thanks to the native support for CSI VolumeSnapshots and VolumeSnapshotClasses . HPE and Veeam have a long-standing alliance. Read about the extended partnership with Kasten in this blog post . Tip All the steps below are captured in a tutorial available on YouTube and in the SCOD Video Gallery . Overview Prerequisites Annotate the VolumeSnapshotClass Installing Kasten K10 Snapshots and restores","title":"Overview"},{"location":"partners/kasten/index.html#prerequisites","text":"The cluster needs to be running Kubernetes 1.17 or later and have the CSI snapshot CustomResourceDefinitions (CRDs) and the CSI snapshot-controller deployed. Follow the guides available on SCOD to: Enable CSI snapshots Using CSI snapshots Note The rest of this guide assumes a default VolumeSnapshotClass and VolumeSnapshots are functional on the cluster.","title":"Prerequisites"},{"location":"partners/kasten/index.html#annotate_the_volumesnapshotclass","text":"In order to allow K10 to perform snapshots and restores using the VolumeSnapshotClass , it needs an annotation. Assuming we have a default VolumeSnapshotClass named \"hpe-snapshot\": kubectl annotate volumesnapshotclass hpe-snapshot k10.kasten.io/is-snapshot-class=true","title":"Annotate the VolumeSnapshotClass"},{"location":"partners/kasten/index.html#installing_kasten_k10","text":"Kasten K10 installs in its own namespace using a Helm chart. It also assumes there's a performant default StorageClass on the cluster to serve the various PersistentVolumeClaims needed for the controllers. Pre-flight checks and prerequisites Install K10 on Kubernetes Note Above links are external to docs.kasten.io .","title":"Installing Kasten K10"},{"location":"partners/kasten/index.html#snapshots_and_restores","text":"Kasten K10 provides the user with a graphical interface and dashboard to schedule and perform data management operations. There's also an API that can be manipulated with kubectl using CRDs. To perform snapshot and restore operations through Kasten K10 using the HPE CSI Driver for Kubernetes, please refer to the Kasten K10 documentation. Accessing K10 Using K10 Note Above links are external to docs.kasten.io .","title":"Snapshots and restores"},{"location":"partners/mirantis/index.html","text":"Introduction \u00b6 Mirantis Kubernetes Engine (MKE) is the successor of the Universal Control Plane part of Docker Enterprise Edition (Docker EE). The HPE CSI Driver for Kubernetes allows users to provision persistent storage for Kubernetes workloads running on MKE. See the note below on Docker Swarm for workloads deployed outside of Kubernetes. Introduction Compatability Chart Helm Chart Install Mirantis Kubernetes Engine 3.3 Prerequisites Steps to install Docker Swarm Limitations Compatability Chart \u00b6 Mirantis and HPE perform testing and qualification as needed for either release of MKE or the HPE CSI Driver. If there are any deviations in the installation procedures, those will be documented here. MKE Version HPE CSI Driver Status Installation Notes 3.7 2.4.0 Supported Helm chart notes 3.6 2.2.0 Supported Helm chart notes 3.4, 3.5 - Untested - 3.3 2.0.0 Deprecated Advanced Install notes for MKE 3.3 Seealso Ensure to be understood with the limitations and the lack of Docker Swarm support. Helm Chart Install \u00b6 With MKE 3.6 and onwards, it's recommend to use the HPE CSI Driver for Kubernetes Helm chart. There are no known caveats or workarounds at this time. 
HPE CSI Driver for Kubernetes Helm chart on ArtifactHub. Important Always ensure that the Kubernetes version underlying the MKE release and the worker node host OS conform to the latest compatibility and support table. Mirantis Kubernetes Engine 3.3 \u00b6 At the time of release of MKE 3.3, neither the HPE CSI Driver Helm chart nor the operator will install correctly. Prerequisites \u00b6 The MKE managers and workers need to run a supported host OS as outlined in the particular version of the HPE CSI Driver found in the release tables . Also verify that the HPE CSI Driver supports the version of Kubernetes used by MKE (see below). Steps to install \u00b6 MKE admins need to familiarize themselves with the advanced install method of the CSI driver. Before the installation begins, make sure an account with administrative privileges is being used to deploy the driver. Also determine the actual Kubernetes version MKE is using. kubectl version --short=true Client Version: v1.19.4 Server Version: v1.18.10-mirantis-1 In this particular example, Kubernetes 1.18 is being used. Follow the steps for 1.18 highlighted within the advanced install section of the deployment documentation. Step 1 \u2192 Install the Linux node IO settings ConfigMap . Step 2 \u2192 Determine which backend is being used (Nimble or Primera/3PAR) and deploy the corresponding CSP manifest. Step 3 \u2192 Deploy the HPE CSI Driver manifests for the Kubernetes version being used. Next, add a supported HPE backend and create a StorageClass . Learn more about using the CSI objects in the comprehensive overview . Also make sure to familiarize yourself with the particular features and capabilities of the backend being used. Container Storage Providers Docker Swarm \u00b6 Provisioning Docker Volumes for Docker Swarm workloads from an HPE primary storage backend is deprecated. Limitations \u00b6 HPE CSI Driver does not support Windows workers. HPE CSI Driver NFS Server Provisioner is not supported on MKE.","title":"Mirantis"},{"location":"partners/mirantis/index.html#introduction","text":"Mirantis Kubernetes Engine (MKE) is the successor of the Universal Control Plane part of Docker Enterprise Edition (Docker EE). The HPE CSI Driver for Kubernetes allows users to provision persistent storage for Kubernetes workloads running on MKE. See the note below on Docker Swarm for workloads deployed outside of Kubernetes. Introduction Compatability Chart Helm Chart Install Mirantis Kubernetes Engine 3.3 Prerequisites Steps to install Docker Swarm Limitations","title":"Introduction"},{"location":"partners/mirantis/index.html#compatability_chart","text":"Mirantis and HPE perform testing and qualification as needed for either release of MKE or the HPE CSI Driver. If there are any deviations in the installation procedures, those will be documented here. MKE Version HPE CSI Driver Status Installation Notes 3.7 2.4.0 Supported Helm chart notes 3.6 2.2.0 Supported Helm chart notes 3.4, 3.5 - Untested - 3.3 2.0.0 Deprecated Advanced Install notes for MKE 3.3 Seealso Make sure to understand the limitations and the lack of Docker Swarm support.","title":"Compatability Chart"},{"location":"partners/mirantis/index.html#helm_chart_install","text":"With MKE 3.6 and onwards, it's recommended to use the HPE CSI Driver for Kubernetes Helm chart. There are no known caveats or workarounds at this time. HPE CSI Driver for Kubernetes Helm chart on ArtifactHub.
Important Always ensure that the Kubernetes version underlying the MKE release and the worker node host OS conform to the latest compatibility and support table.","title":"Helm Chart Install"},{"location":"partners/mirantis/index.html#mirantis_kubernetes_engine_33","text":"At the time of release of MKE 3.3, neither the HPE CSI Driver Helm chart nor the operator will install correctly.","title":"Mirantis Kubernetes Engine 3.3"},{"location":"partners/mirantis/index.html#prerequisites","text":"The MKE managers and workers need to run a supported host OS as outlined in the particular version of the HPE CSI Driver found in the release tables . Also verify that the HPE CSI Driver supports the version of Kubernetes used by MKE (see below).","title":"Prerequisites"},{"location":"partners/mirantis/index.html#steps_to_install","text":"MKE admins need to familiarize themselves with the advanced install method of the CSI driver. Before the installation begins, make sure an account with administrative privileges is being used to deploy the driver. Also determine the actual Kubernetes version MKE is using. kubectl version --short=true Client Version: v1.19.4 Server Version: v1.18.10-mirantis-1 In this particular example, Kubernetes 1.18 is being used. Follow the steps for 1.18 highlighted within the advanced install section of the deployment documentation. Step 1 \u2192 Install the Linux node IO settings ConfigMap . Step 2 \u2192 Determine which backend is being used (Nimble or Primera/3PAR) and deploy the corresponding CSP manifest. Step 3 \u2192 Deploy the HPE CSI Driver manifests for the Kubernetes version being used. Next, add a supported HPE backend and create a StorageClass . Learn more about using the CSI objects in the comprehensive overview . Also make sure to familiarize yourself with the particular features and capabilities of the backend being used. Container Storage Providers","title":"Steps to install"},{"location":"partners/mirantis/index.html#docker_swarm","text":"Provisioning Docker Volumes for Docker Swarm workloads from an HPE primary storage backend is deprecated.","title":"Docker Swarm"},{"location":"partners/mirantis/index.html#limitations","text":"HPE CSI Driver does not support Windows workers. HPE CSI Driver NFS Server Provisioner is not supported on MKE.","title":"Limitations"},{"location":"partners/redhat_openshift/index.html","text":"Overview \u00b6 HPE and Red Hat have a long-standing partnership to provide jointly supported software, platforms and services with the absolute best customer experience in the industry. Red Hat OpenShift uses open source Kubernetes and various other components to deliver a PaaS experience that benefits both developers and operations. This packaged experience differs slightly from how you would otherwise deploy and use the HPE volume drivers, and this page serves as the authoritative source for all things HPE primary storage and Red Hat OpenShift. Overview OpenShift 4 Certified combinations Security model Limitations Deployment Upgrading Prerequisites OpenShift web console OpenShift CLI Additional information Uninstall the HPE CSI Operator NFS Server Provisioner Considerations Non-standard hpe-nfs Namespace Operators Requesting NFS Persistent Volume Claims Use the ext4 filesystem for NFS servers StorageProfile for OpenShift Virtualization Source PVCs Live VM migrations for Alletra Storage MP Unsupported Version of the Operator Install Unsupported Helm Chart Install Steps to install. OpenShift 4 \u00b6 Software deployed on OpenShift 4 follows the Operator pattern .
CSI drivers are no exception. Certified combinations \u00b6 Software delivered through the HPE and Red Hat partnership follows a rigorous certification process and only qualify what's listed as \"Certified\" in the below table. Status Red Hat OpenShift HPE CSI Operator Container Storage Providers Certified 4.16 EUS 2 2.5.1 All Certified 4.15 2.4.1, 2.4.2, 2.5.1 All Certified 4.14 EUS 2 2.4.0, 2.4.1, 2.4.2, 2.5.1 All Certified 4.13 2.4.0, 2.4.1, 2.4.2 All Certified 4.12 EUS 2 2.3.0, 2.4.0, 2.4.1, 2.4.2 All EOL 1 4.11 2.3.0 All EOL 1 4.10 EUS 2 2.2.1, 2.3.0 All 1 = End of life support per Red Hat OpenShift Life Cycle Policy . 2 = Red Hat OpenShift Extended Update Support . Check the table above periodically for future releases. Pointers Other combinations may work but will not be supported. Both Red Hat Enterprise Linux and Red Hat CoreOS worker nodes are supported. Instructions on this page only reflect the current stable version of the HPE CSI Operator and OpenShift. OpenShift Virtualization OS images are only supported on PVCs using \"RWX\" with volumeMode: Block . See below for more details. Security model \u00b6 By default, OpenShift prevents containers from running as root. Containers are run using an arbitrarily assigned user ID. Due to these security restrictions, containers that run on Docker and Kubernetes might not run successfully on Red Hat OpenShift without modification. Users deploying applications that require persistent storage (i.e. through the HPE CSI Driver) will need the appropriate permissions and Security Context Constraints (SCC) to be able to request and manage storage through OpenShift. Modifying container security to work with OpenShift is outside the scope of this document. For more information on OpenShift security, see Managing security context constraints . Note If you run into issues writing to persistent volumes provisioned by the HPE CSI Driver under a restricted SCC, add the fsMode: \"0770\" parameter to the StorageClass with RWO claims or fsMode: \"0777\" for RWX claims. Limitations \u00b6 Since the CSI Operator only provides \"Basic Install\" capabilities. The following limitations apply: The ConfigMap \"hpe-linux-config\" that controls host configuration is immutable The NFS Server Provisioner can not be used with Operators deploying PersistentVolumeClaims as part of the installation. See #295 on GitHub. Deploying the NFS Server Provisioner to a Namespace other than \"hpe-nfs\" requires a separate SCC applied to the Namespace . See #nfs_server_provisioner_considerations . Deployment \u00b6 The HPE CSI Operator for Kubernetes needs to be installed through the interfaces provided by Red Hat. Do not follow the instructions found on OperatorHub.io. Tip There's a tutorial available on YouTube accessible through the Video Gallery on how to install and use the HPE CSI Operator on Red Hat OpenShift. Upgrading \u00b6 In situations where the operator needs to be upgraded, follow the prerequisite steps in the Helm chart on Artifact Hub. Upgrading the chart Automatic Updates Do not under any circumstance enable \"Automatic Updates\" for the HPE CSI Operator for Kubernetes Once the steps have been followed for the particular version transition: Uninstall the HPECSIDriver instance Delete the \"hpecsidrivers.storage.hpe.com\" CRD : oc delete crd/hpecsidrivers.storage.hpe.com Uninstall the HPE CSI Operator for Kubernetes Proceed to installation through the OpenShift Web Console or OpenShift CLI Reapply the SCC to ensure there hasn't been any changes. 
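Per the security model note above, the fsMode workaround is just another StorageClass parameter. An abbreviated, illustrative example (the usual backend Secret parameters are omitted for brevity):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-scc-friendly
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  accessProtocol: iscsi
  fsMode: \"0770\"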
Good to know Deleting the HPECSIDriver instance and uninstalling the CSI Operator does not affect any running workloads, PersistentVolumeClaims , StorageClasses or other API resources created by the CSI Operator. In-flight operations and new requests will be retried once the new HPECSIDriver has been instantiated. Prerequisites \u00b6 The HPE CSI Driver needs to run in privileged mode and needs access to host ports, host network and should be able to mount hostPath volumes. Hence, before deploying HPE CSI Operator on OpenShift, please create the following SecurityContextConstraints (SCC) to allow the CSI driver to be running with these privileges. oc new-project hpe-storage --display-name=\"HPE CSI Driver for Kubernetes\" Important The rest of this implementation guide assumes the default \"hpe-storage\" Namespace . If a different Namespace is desired. Update the ServiceAccount Namespace in the SCC below. Deploy or download the SCC: oc apply -f https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml securitycontextconstraints.security.openshift.io/hpe-csi-controller-scc created securitycontextconstraints.security.openshift.io/hpe-csi-node-scc created securitycontextconstraints.security.openshift.io/hpe-csi-csp-scc created securitycontextconstraints.security.openshift.io/hpe-csi-nfs-scc created OpenShift web console \u00b6 Once the SCC has been applied to the project, login to the OpenShift web console as kube:admin and navigate to Operators -> OperatorHub . Search for 'HPE CSI' in the search field and select the non-marketplace version. Click 'Install'. Note Latest supported HPE CSI Operator on OpenShift 4.14 is 2.4.2 Select the Namespace where the SCC was applied, select 'Manual' Update Approval, click 'Install'. Click 'Approve' to finalize installation of the Operator The HPE CSI Operator is now installed, select 'View Operator'. Click 'Create Instance'. Normally, no customizations are needed, scroll all the way down and click 'Create'. By navigating to the Developer view, it should now be possible to inspect the CSI driver and Operator topology. The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass . See Caveats below for information on creating StorageClasses in Red Hat OpenShift. OpenShift CLI \u00b6 This provides an example Operator deployment using oc . If you want to use the web console, proceed to the previous section . It's assumed the SCC has been applied to the project and have kube:admin privileges. As an example, we'll deploy to the hpe-storage project as described in previous steps. First, an OperatorGroup needs to be created. apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: hpe-csi-driver-for-kubernetes namespace: hpe-storage spec: targetNamespaces: - hpe-storage Next, create a Subscription to the Operator. apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hpe-csi-operator namespace: hpe-storage spec: channel: stable installPlanApproval: Manual name: hpe-csi-operator source: certified-operators sourceNamespace: openshift-marketplace Next, approve the installation. oc -n hpe-storage patch $(oc get installplans -n hpe-storage -o name) -p '{\"spec\":{\"approved\":true}}' --type merge The Operator will now be installed on the OpenShift cluster. Before instantiating a CSI driver, watch the roll-out of the Operator. 
oc rollout status deploy/hpe-csi-driver-operator -n hpe-storage Waiting for deployment \"hpe-csi-driver-operator\" rollout to finish: 0 of 1 updated replicas are available... deployment \"hpe-csi-driver-operator\" successfully rolled out The next step is to create a HPECSIDriver object. HPE CSI Operator v2.5.1 # oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableHostDeletion: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false disableNodeMonitor: false imagePullPolicy: IfNotPresent images: csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1 csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7 csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1 csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6 csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6 csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6 nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5 nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0 primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0 iscsi: chapSecretName: \"\" kubeletRootDir: /var/lib/kubelet logLevel: info node: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] v2.4.2 # oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} tolerations: [] registry: quay.io v2.4.1 # oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: 
false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} tolerations: [] registry: quay.io The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass . Additional information \u00b6 At this point the CSI driver is managed like any other Operator on Kubernetes and the life-cycle management capabilities may be explored further in the official Red Hat OpenShift documentation . Uninstall the HPE CSI Operator \u00b6 When uninstalling an operator managed by OLM, a Cluster Admin must decide whether or not to remove the CustomResourceDefinitions (CRD), APIServices , and resources related to these types owned by the operator. By design, when OLM uninstalls an operator it does not remove any of the operator\u2019s owned CRDs , APIServices , or CRs in order to prevent data loss. Important Do not modify or remove these CRDs or APIServices if you are upgrading or reinstalling the HPE CSI driver in order to prevent data loss. The following are CRDs installed by the HPE CSI driver. hpecsidrivers.storage.hpe.com hpenodeinfos.storage.hpe.com hpereplicationdeviceinfos.storage.hpe.com hpesnapshotgroupinfos.storage.hpe.com hpevolumegroupinfos.storage.hpe.com hpevolumeinfos.storage.hpe.com snapshotgroupclasses.storage.hpe.com snapshotgroupcontents.storage.hpe.com snapshotgroups.storage.hpe.com volumegroupclasses.storage.hpe.com volumegroupcontents.storage.hpe.com volumegroups.storage.hpe.com The following are APIServices installed by the HPE CSI driver. v1.storage.hpe.com v2.storage.hpe.com Please refer to the OLM Lifecycle Manager documentation on how to safely Uninstall your operator . NFS Server Provisioner Considerations \u00b6 When deploying NFS servers on OpenShift there's currently two things to keep in mind for a successful deployment. Also, be understood with the Limitations and Considerations for the NFS Server Provisioner in general. Non-standard hpe-nfs Namespace \u00b6 If NFS servers are deployed in a different Namespace than the default \"hpe-nfs\" by using the \"nfsNamespace\" StorageClass parameter, the \"hpe-csi-nfs-scc\" SCC needs to be updated to include the Namespace ServiceAccount . This example adds \"my-namespace\" NFS server ServiceAccount to the SCC: oc patch scc hpe-csi-nfs-scc --type=json -p='[{\"op\": \"add\", \"path\": \"/users/-\", \"value\": \"system:serviceaccount:my-namespace:hpe-csi-nfs-sa\" }]' Operators Requesting NFS Persistent Volume Claims \u00b6 Object references in OpenShift are not compatible with the NFS Server Provisioner. If a user deploys an Operator of any kind that creates a NFS server backed PVC , the operation will fail. Instead, pre-provision the PVC manually for the Operator instance to use. Use the ext4 filesystem for NFS servers \u00b6 On certain versions of OpenShift the NFS clients may experience stale NFS file handles like the one below when the NFS server is being restarted. Error: failed to resolve symlink \"/var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount\": lstat /var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount: stale NFS file handle If this problem occurs, use the ext4 filesystem on the backing volumes. 
The fsType is set in the StorageClass . Example: ... parameters: csi.storage.k8s.io/fstype: ext4 ... StorageProfile for OpenShift Virtualization Source PVCs \u00b6 If OpenShift Virtualization is being used and Live Migration is desired for virtual machines PVCs cloned from the \"openshift-virtualization-os-images\" Namespace , the StorageProfile needs to be updated to \"ReadWriteMany\". Info These steps are not necessary on recent OpenShift EUS (v4.12.11 onwards) releases as the default StorageProfile for \"csi.hpe.com\" has been corrected upstream. If the default StorageClass is named \"hpe-standard\", issue the following command: oc edit -n openshift-cnv storageprofile hpe-standard Replace the spec: {} with the following: spec: claimPropertySets: - accessModes: - ReadWriteMany volumeMode: Block Ensure there are no errors. Recreate the OS images: oc delete pvc -n openshift-virtualization-os-images --all Inspect the PVCs and ensure they are re-created with \"RWX\": oc get pvc -n openshift-virtualization-os-images -w Hint The \"accessMode\" transformation for block volumes from RWO PVC to RWX clone has been resolved in HPE CSI Driver v2.5.0. Regardless, using source RWX PVs will simplify the workflows for users. Live VM migrations for Alletra Storage MP \u00b6 With HPE CSI Operator for Kubernetes v2.4.2 and older there's an issue that prevents live migrations of VMs that has PVCs attached that has been clones from an OS image residing on Alletra Storage MP backends including 3PAR, Primera and Alletra 9000. Identify the PVC that that has been cloned from an OS image. The VM name is \"centos7-silver-bedbug-14\" in this case. oc get vm/centos7-silver-bedbug-14 -o jsonpath='{.spec.template.spec.volumes}' | jq In this instance, the dataVolume is the same name as the VM. Grab the PV name from the PVC name. MY_PV_NAME=$(oc get pvc/centos7-silver-bedbug-14 -o jsonpath='{.spec.volumeName}') Next, patch the hpevolumeinfo CRD . oc patch hpevolumeinfo/${MY_PV_NAME} --type=merge --patch '{\"spec\": {\"record\": {\"MultiInitiator\": \"true\"}}}' The VM is now ready to be migrated. Hint If there are multiple dataVolumes , each one needs to be patched. Unsupported Version of the Operator Install \u00b6 In the event on older version of the Operator needs to be installed, the bundle can be installed directly by installing the Operator SDK . Make sure a recent version of the operator-sdk binary is available and that no HPE CSI Driver is currently installed on the cluster. Install a specific version prior and including v2.4.2: operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle:v2.4.2 Install a specific version after and including v2.5.0: operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle-ocp:v2.5.0 Important Once the Operator is installed, a HPECSIDriver instance needs to be created. Follow the steps using the web console or the CLI to create an instance. When the unsupported install isn't needed any longer, run: operator-sdk cleanup -n hpe-storage hpe-csi-operator Unsupported Helm Chart Install \u00b6 In the event Red Hat releases a new version of OpenShift between HPE CSI Driver releases or if interest arises to run the HPE CSI Driver on an uncertified version of OpenShift, it's possible to install the CSI driver using the Helm chart instead. It's not recommended to install the Helm chart unless it's listed as \"Field Tested\" in the support matrix above. 
Tip Helm chart install is also only current method to use beta releases of the HPE CSI Driver. Steps to install. \u00b6 Follow the steps in the prerequisites to apply the SCC in the Namespace (Project) you wish to install the driver. Install the Helm chart with the steps provided on ArtifactHub . Pay attention to which version combination has been field tested. Unsupported Understand that this method is not supported by Red Hat and not recommended for production workloads or clusters.","title":"Red Hat OpenShift"},{"location":"partners/redhat_openshift/index.html#overview","text":"HPE and Red Hat have a long standing partnership to provide jointly supported software, platform and services with the absolute best customer experience in the industry. Red Hat OpenShift uses open source Kubernetes and various other components to deliver a PaaS experience that benefits both developers and operations. This packaged experience differs slightly on how you would deploy and use the HPE volume drivers and this page serves as the authoritative source for all things HPE primary storage and Red Hat OpenShift. Overview OpenShift 4 Certified combinations Security model Limitations Deployment Upgrading Prerequisites OpenShift web console OpenShift CLI Additional information Uninstall the HPE CSI Operator NFS Server Provisioner Considerations Non-standard hpe-nfs Namespace Operators Requesting NFS Persistent Volume Claims Use the ext4 filesystem for NFS servers StorageProfile for OpenShift Virtualization Source PVCs Live VM migrations for Alletra Storage MP Unsupported Version of the Operator Install Unsupported Helm Chart Install Steps to install.","title":"Overview"},{"location":"partners/redhat_openshift/index.html#openshift_4","text":"Software deployed on OpenShift 4 follows the Operator pattern . CSI drivers are no exception.","title":"OpenShift 4"},{"location":"partners/redhat_openshift/index.html#certified_combinations","text":"Software delivered through the HPE and Red Hat partnership follows a rigorous certification process and only qualify what's listed as \"Certified\" in the below table. Status Red Hat OpenShift HPE CSI Operator Container Storage Providers Certified 4.16 EUS 2 2.5.1 All Certified 4.15 2.4.1, 2.4.2, 2.5.1 All Certified 4.14 EUS 2 2.4.0, 2.4.1, 2.4.2, 2.5.1 All Certified 4.13 2.4.0, 2.4.1, 2.4.2 All Certified 4.12 EUS 2 2.3.0, 2.4.0, 2.4.1, 2.4.2 All EOL 1 4.11 2.3.0 All EOL 1 4.10 EUS 2 2.2.1, 2.3.0 All 1 = End of life support per Red Hat OpenShift Life Cycle Policy . 2 = Red Hat OpenShift Extended Update Support . Check the table above periodically for future releases. Pointers Other combinations may work but will not be supported. Both Red Hat Enterprise Linux and Red Hat CoreOS worker nodes are supported. Instructions on this page only reflect the current stable version of the HPE CSI Operator and OpenShift. OpenShift Virtualization OS images are only supported on PVCs using \"RWX\" with volumeMode: Block . See below for more details.","title":"Certified combinations"},{"location":"partners/redhat_openshift/index.html#security_model","text":"By default, OpenShift prevents containers from running as root. Containers are run using an arbitrarily assigned user ID. Due to these security restrictions, containers that run on Docker and Kubernetes might not run successfully on Red Hat OpenShift without modification. Users deploying applications that require persistent storage (i.e. 
through the HPE CSI Driver) will need the appropriate permissions and Security Context Constraints (SCC) to be able to request and manage storage through OpenShift. Modifying container security to work with OpenShift is outside the scope of this document. For more information on OpenShift security, see Managing security context constraints . Note If you run into issues writing to persistent volumes provisioned by the HPE CSI Driver under a restricted SCC, add the fsMode: \"0770\" parameter to the StorageClass with RWO claims or fsMode: \"0777\" for RWX claims.","title":"Security model"},{"location":"partners/redhat_openshift/index.html#limitations","text":"Since the CSI Operator only provides \"Basic Install\" capabilities. The following limitations apply: The ConfigMap \"hpe-linux-config\" that controls host configuration is immutable The NFS Server Provisioner can not be used with Operators deploying PersistentVolumeClaims as part of the installation. See #295 on GitHub. Deploying the NFS Server Provisioner to a Namespace other than \"hpe-nfs\" requires a separate SCC applied to the Namespace . See #nfs_server_provisioner_considerations .","title":"Limitations"},{"location":"partners/redhat_openshift/index.html#deployment","text":"The HPE CSI Operator for Kubernetes needs to be installed through the interfaces provided by Red Hat. Do not follow the instructions found on OperatorHub.io. Tip There's a tutorial available on YouTube accessible through the Video Gallery on how to install and use the HPE CSI Operator on Red Hat OpenShift.","title":"Deployment"},{"location":"partners/redhat_openshift/index.html#upgrading","text":"In situations where the operator needs to be upgraded, follow the prerequisite steps in the Helm chart on Artifact Hub. Upgrading the chart Automatic Updates Do not under any circumstance enable \"Automatic Updates\" for the HPE CSI Operator for Kubernetes Once the steps have been followed for the particular version transition: Uninstall the HPECSIDriver instance Delete the \"hpecsidrivers.storage.hpe.com\" CRD : oc delete crd/hpecsidrivers.storage.hpe.com Uninstall the HPE CSI Operator for Kubernetes Proceed to installation through the OpenShift Web Console or OpenShift CLI Reapply the SCC to ensure there hasn't been any changes. Good to know Deleting the HPECSIDriver instance and uninstalling the CSI Operator does not affect any running workloads, PersistentVolumeClaims , StorageClasses or other API resources created by the CSI Operator. In-flight operations and new requests will be retried once the new HPECSIDriver has been instantiated.","title":"Upgrading"},{"location":"partners/redhat_openshift/index.html#prerequisites","text":"The HPE CSI Driver needs to run in privileged mode and needs access to host ports, host network and should be able to mount hostPath volumes. Hence, before deploying HPE CSI Operator on OpenShift, please create the following SecurityContextConstraints (SCC) to allow the CSI driver to be running with these privileges. oc new-project hpe-storage --display-name=\"HPE CSI Driver for Kubernetes\" Important The rest of this implementation guide assumes the default \"hpe-storage\" Namespace . If a different Namespace is desired. Update the ServiceAccount Namespace in the SCC below. 
Deploy or download the SCC: oc apply -f https://scod.hpedev.io/partners/redhat_openshift/examples/scc/hpe-csi-scc.yaml securitycontextconstraints.security.openshift.io/hpe-csi-controller-scc created securitycontextconstraints.security.openshift.io/hpe-csi-node-scc created securitycontextconstraints.security.openshift.io/hpe-csi-csp-scc created securitycontextconstraints.security.openshift.io/hpe-csi-nfs-scc created","title":"Prerequisites"},{"location":"partners/redhat_openshift/index.html#openshift_web_console","text":"Once the SCC has been applied to the project, login to the OpenShift web console as kube:admin and navigate to Operators -> OperatorHub . Search for 'HPE CSI' in the search field and select the non-marketplace version. Click 'Install'. Note Latest supported HPE CSI Operator on OpenShift 4.14 is 2.4.2 Select the Namespace where the SCC was applied, select 'Manual' Update Approval, click 'Install'. Click 'Approve' to finalize installation of the Operator The HPE CSI Operator is now installed, select 'View Operator'. Click 'Create Instance'. Normally, no customizations are needed, scroll all the way down and click 'Create'. By navigating to the Developer view, it should now be possible to inspect the CSI driver and Operator topology. The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass . See Caveats below for information on creating StorageClasses in Red Hat OpenShift.","title":"OpenShift web console"},{"location":"partners/redhat_openshift/index.html#openshift_cli","text":"This provides an example Operator deployment using oc . If you want to use the web console, proceed to the previous section . It's assumed the SCC has been applied to the project and have kube:admin privileges. As an example, we'll deploy to the hpe-storage project as described in previous steps. First, an OperatorGroup needs to be created. apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: hpe-csi-driver-for-kubernetes namespace: hpe-storage spec: targetNamespaces: - hpe-storage Next, create a Subscription to the Operator. apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hpe-csi-operator namespace: hpe-storage spec: channel: stable installPlanApproval: Manual name: hpe-csi-operator source: certified-operators sourceNamespace: openshift-marketplace Next, approve the installation. oc -n hpe-storage patch $(oc get installplans -n hpe-storage -o name) -p '{\"spec\":{\"approved\":true}}' --type merge The Operator will now be installed on the OpenShift cluster. Before instantiating a CSI driver, watch the roll-out of the Operator. oc rollout status deploy/hpe-csi-driver-operator -n hpe-storage Waiting for deployment \"hpe-csi-driver-operator\" rollout to finish: 0 of 1 updated replicas are available... deployment \"hpe-csi-driver-operator\" successfully rolled out The next step is to create a HPECSIDriver object. 
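Optionally, and purely as an illustrative sanity check that is not part of the documented procedure, confirm the Operator's ClusterServiceVersion reports the Succeeded phase before creating the instance:
oc get csv -n hpe-storage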
HPE CSI Operator v2.5.1 # oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.5.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableHostDeletion: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false disableNodeMonitor: false imagePullPolicy: IfNotPresent images: csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1 csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7 csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0 csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1 csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6 csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6 csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6 nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5 nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0 primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0 iscsi: chapSecretName: \"\" kubeletRootDir: /var/lib/kubelet logLevel: info node: affinity: {} labels: {} nodeSelector: {} resources: limits: cpu: 2000m memory: 1Gi requests: cpu: 100m memory: 128Mi tolerations: [] v2.4.2 # oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.2-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} tolerations: [] registry: quay.io v2.4.1 # oc apply -f https://scod.hpedev.io/csi_driver/examples/deployment/hpecsidriver-v2.4.1-sample.yaml apiVersion: storage.hpe.com/v1 kind: HPECSIDriver metadata: name: hpecsidriver-sample spec: # Default values copied from /helm-charts/hpe-csi-driver/values.yaml controller: affinity: {} labels: {} nodeSelector: {} tolerations: [] csp: affinity: {} labels: {} nodeSelector: {} tolerations: [] disable: alletra6000: false alletra9000: false alletraStorageMP: false nimble: false primera: false disableNodeConfiguration: false disableNodeConformance: false disableNodeGetVolumeStats: false imagePullPolicy: IfNotPresent iscsi: chapPassword: \"\" chapUser: \"\" kubeletRootDir: /var/lib/kubelet/ logLevel: info node: affinity: {} labels: {} nodeSelector: {} 
tolerations: [] registry: quay.io The CSI driver is now ready for use. Next, an HPE storage backend needs to be added along with a StorageClass .","title":"OpenShift CLI"},{"location":"partners/redhat_openshift/index.html#additional_information","text":"At this point the CSI driver is managed like any other Operator on Kubernetes and the life-cycle management capabilities may be explored further in the official Red Hat OpenShift documentation .","title":"Additional information"},{"location":"partners/redhat_openshift/index.html#uninstall_the_hpe_csi_operator","text":"When uninstalling an operator managed by OLM, a Cluster Admin must decide whether or not to remove the CustomResourceDefinitions (CRD), APIServices , and resources related to these types owned by the operator. By design, when OLM uninstalls an operator it does not remove any of the operator\u2019s owned CRDs , APIServices , or CRs in order to prevent data loss. Important Do not modify or remove these CRDs or APIServices if you are upgrading or reinstalling the HPE CSI driver in order to prevent data loss. The following are CRDs installed by the HPE CSI driver. hpecsidrivers.storage.hpe.com hpenodeinfos.storage.hpe.com hpereplicationdeviceinfos.storage.hpe.com hpesnapshotgroupinfos.storage.hpe.com hpevolumegroupinfos.storage.hpe.com hpevolumeinfos.storage.hpe.com snapshotgroupclasses.storage.hpe.com snapshotgroupcontents.storage.hpe.com snapshotgroups.storage.hpe.com volumegroupclasses.storage.hpe.com volumegroupcontents.storage.hpe.com volumegroups.storage.hpe.com The following are APIServices installed by the HPE CSI driver. v1.storage.hpe.com v2.storage.hpe.com Please refer to the OLM Lifecycle Manager documentation on how to safely Uninstall your operator .","title":"Uninstall the HPE CSI Operator"},{"location":"partners/redhat_openshift/index.html#nfs_server_provisioner_considerations","text":"When deploying NFS servers on OpenShift there are currently two things to keep in mind for a successful deployment. Also, be familiar with the Limitations and Considerations for the NFS Server Provisioner in general.","title":"NFS Server Provisioner Considerations"},{"location":"partners/redhat_openshift/index.html#non-standard_hpe-nfs_namespace","text":"If NFS servers are deployed in a different Namespace than the default \"hpe-nfs\" by using the \"nfsNamespace\" StorageClass parameter, the \"hpe-csi-nfs-scc\" SCC needs to be updated to include the Namespace ServiceAccount . This example adds \"my-namespace\" NFS server ServiceAccount to the SCC: oc patch scc hpe-csi-nfs-scc --type=json -p='[{\"op\": \"add\", \"path\": \"/users/-\", \"value\": \"system:serviceaccount:my-namespace:hpe-csi-nfs-sa\" }]'","title":"Non-standard hpe-nfs Namespace"},{"location":"partners/redhat_openshift/index.html#operators_requesting_nfs_persistent_volume_claims","text":"Object references in OpenShift are not compatible with the NFS Server Provisioner. If a user deploys an Operator of any kind that creates an NFS server backed PVC , the operation will fail. Instead, pre-provision the PVC manually for the Operator instance to use.","title":"Operators Requesting NFS Persistent Volume Claims"},{"location":"partners/redhat_openshift/index.html#use_the_ext4_filesystem_for_nfs_servers","text":"On certain versions of OpenShift the NFS clients may experience stale NFS file handles like the one below when the NFS server is being restarted.
Error: failed to resolve symlink \"/var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount\": lstat /var/lib/kubelet/pods/290ff9e1-cc1e-4d05-b884-0ddcc05a9631/volumes/kubernetes.io~csi/pvc-321cf523-c063-4ce4-97e8-bc1365b8a05b/mount: stale NFS file handle If this problem occurs, use the ext4 filesystem on the backing volumes. The fsType is set in the StorageClass . Example: ... parameters: csi.storage.k8s.io/fstype: ext4 ...","title":"Use the ext4 filesystem for NFS servers"},{"location":"partners/redhat_openshift/index.html#storageprofile_for_openshift_virtualization_source_pvcs","text":"If OpenShift Virtualization is being used and Live Migration is desired for virtual machine PVCs cloned from the \"openshift-virtualization-os-images\" Namespace , the StorageProfile needs to be updated to \"ReadWriteMany\". Info These steps are not necessary on recent OpenShift EUS (v4.12.11 onwards) releases as the default StorageProfile for \"csi.hpe.com\" has been corrected upstream. If the default StorageClass is named \"hpe-standard\", issue the following command: oc edit -n openshift-cnv storageprofile hpe-standard Replace the spec: {} with the following: spec: claimPropertySets: - accessModes: - ReadWriteMany volumeMode: Block Ensure there are no errors. Recreate the OS images: oc delete pvc -n openshift-virtualization-os-images --all Inspect the PVCs and ensure they are re-created with \"RWX\": oc get pvc -n openshift-virtualization-os-images -w Hint The \"accessMode\" transformation for block volumes from RWO PVC to RWX clone has been resolved in HPE CSI Driver v2.5.0. Regardless, using source RWX PVs will simplify the workflows for users.","title":"StorageProfile for OpenShift Virtualization Source PVCs"},{"location":"partners/redhat_openshift/index.html#live_vm_migrations_for_alletra_storage_mp","text":"With HPE CSI Operator for Kubernetes v2.4.2 and older there's an issue that prevents live migrations of VMs that have PVCs attached that have been cloned from an OS image residing on Alletra Storage MP backends including 3PAR, Primera and Alletra 9000. Identify the PVC that has been cloned from an OS image. The VM name is \"centos7-silver-bedbug-14\" in this case. oc get vm/centos7-silver-bedbug-14 -o jsonpath='{.spec.template.spec.volumes}' | jq In this instance, the dataVolume has the same name as the VM. Grab the PV name from the PVC name. MY_PV_NAME=$(oc get pvc/centos7-silver-bedbug-14 -o jsonpath='{.spec.volumeName}') Next, patch the hpevolumeinfo CRD . oc patch hpevolumeinfo/${MY_PV_NAME} --type=merge --patch '{\"spec\": {\"record\": {\"MultiInitiator\": \"true\"}}}' The VM is now ready to be migrated. Hint If there are multiple dataVolumes , each one needs to be patched.","title":"Live VM migrations for Alletra Storage MP"},{"location":"partners/redhat_openshift/index.html#unsupported_version_of_the_operator_install","text":"In the event an older version of the Operator needs to be installed, the bundle can be installed directly by installing the Operator SDK . Make sure a recent version of the operator-sdk binary is available and that no HPE CSI Driver is currently installed on the cluster.
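As a quick pre-check (illustrative commands, not part of the official procedure, and assuming the default \"hpe-storage\" Namespace):
operator-sdk version
oc get pods -n hpe-storage
The first command confirms the binary responds; the second should return no HPE CSI driver Pods before the bundle is installed.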
Install a specific version prior to and including v2.4.2: operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle:v2.4.2 Install a specific version from v2.5.0 onwards: operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle-ocp:v2.5.0 Important Once the Operator is installed, a HPECSIDriver instance needs to be created. Follow the steps using the web console or the CLI to create an instance. When the unsupported install isn't needed any longer, run: operator-sdk cleanup -n hpe-storage hpe-csi-operator","title":"Unsupported Version of the Operator Install"},{"location":"partners/redhat_openshift/index.html#unsupported_helm_chart_install","text":"In the event Red Hat releases a new version of OpenShift between HPE CSI Driver releases or if interest arises to run the HPE CSI Driver on an uncertified version of OpenShift, it's possible to install the CSI driver using the Helm chart instead. It's not recommended to install the Helm chart unless it's listed as \"Field Tested\" in the support matrix above. Tip Helm chart install is also currently the only method to use beta releases of the HPE CSI Driver.","title":"Unsupported Helm Chart Install"},{"location":"partners/redhat_openshift/index.html#steps_to_install","text":"Follow the steps in the prerequisites to apply the SCC in the Namespace (Project) where you wish to install the driver. Install the Helm chart with the steps provided on ArtifactHub . Pay attention to which version combination has been field tested. Unsupported Understand that this method is not supported by Red Hat and not recommended for production workloads or clusters.","title":"Steps to install."},{"location":"partners/suse_harvester/index.html","text":"Overview \u00b6 \"Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Linux, KVM, Kubernetes, KubeVirt, and Longhorn. Designed for users looking for a flexible and affordable solution to run cloud-native and virtual machine (VM) workloads in your datacenter and at the edge, Harvester provides a single pane of glass for virtualization and cloud-native workload management.\" 1 1 = quote from HarvesterHCI.io . HPE supports the underlying host OS, SLE Micro, using the HPE CSI Driver for Kubernetes and the Rancher Kubernetes Engine 2 (RKE2) which is a CNCF certified Kubernetes distribution. Harvester embeds KubeVirt and uses standard CSI storage constructs to manage storage resources for virtual machines. Overview Deployment Considerations Limitations Boot from Longhorn iSCSI Networking Example iSCSI Configuration Installing HPE CSI Driver for Kubernetes Deployment Considerations \u00b6 Many of the features provided by Harvester stem from the capabilities of KubeVirt. The HPE CSI Driver for Kubernetes provides \"ReadWriteMany\" block storage which allows seamless migration of VMs between hosts with disks attached. The NFS Server Provisioner may be used by disparate VMs that need \"ReadWriteMany\" to share data. Limitations \u00b6 These limitations are framed around the integration of the HPE CSI Driver for Kubernetes and Harvester. Other limitations may apply. Boot from Longhorn \u00b6 Since Harvester is a hyper-converged infrastructure platform in its own right, the storage components are already embedded in the platform using Longhorn.
Longhorn is designed to run from local server storage and today it's not practical to replace Longhorn with CSI capable storage from HPE. The Harvester servers may use boot from SAN and other means in terms of external storage to provide capacity to Longhorn but Longhorn would still be used to create VM images and machines. Storage provided by platforms supported by the HPE CSI Driver for Kubernetes is complementary and non-boot disks may be easily provisioned and attached to VM workloads. Info The VM boot limitation is solely implemented by Harvester in front of KubeVirt. Any other KubeVirt platform would allow booting from storage resources provided by HPE CSI Driver for Kubernetes. iSCSI Networking \u00b6 As per best practice HPE recommends using dedicated iSCSI networks for data traffic between the Harvester nodes and the storage platform. Ancillary network configuration of Harvester nodes is managed as a post-install step. Creating network configuration files for Harvester nodes is beyond the scope of this document. Follow the guides provided by Harvester. Update Harvester Configuration After Installation Example iSCSI Configuration \u00b6 In a typical setup the IP addresses are assigned by DHCP on the NIC directly without any bridges, VLANs or bonds. The updates that needs to be done to /oem/90_custom.yaml on each compute node to reflect this configuration are described below. Insert the block after the management interface configuration and replace the interface names ens224 and ens256 with the actual interface names on your compute nodes. List the available interfaces on the compute node prompt with ip link . ... - path: /etc/sysconfig/network/ifcfg-ens224 permissions: 384 owner: 0 group: 0 content: | STARTMODE='onboot' BOOTPROTO='dhcp' DHCLIENT_SET_DEFAULT_ROUTE='no' encoding: \"\" ownerstring: \"\" - path: /etc/sysconfig/network/ifcfg-ens256 permissions: 384 owner: 0 group: 0 content: | STARTMODE='onboot' BOOTPROTO='dhcp' DHCLIENT_SET_DEFAULT_ROUTE='no' encoding: \"\" ownerstring: \"\" ... Reboot the node and verify that IP addresses have been assigned to the NICs by running ip addr show dev on the compute node prompt. Installing HPE CSI Driver for Kubernetes \u00b6 The HPE CSI Driver for Kubernetes is installed on Harvester by using the standard procedures for installing the CSI driver with Helm . Helm require access to the Harvester cluster through the Kubernetes API. You can download the Harvester cluster KubeConfig file by visiting the dashboard on your cluster and click \"support\" in the lower left corner of the UI. Note It does not matter if Harvester is managed by Rancher or running standalone. If the cluster is managed by Rancher, then go to the Virtualization Management dashboard and select \"Download KubeConfig\" in the dotted context menu of the cluster.","title":"SUSE Harvester"},{"location":"partners/suse_harvester/index.html#overview","text":"\"Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Linux, KVM, Kubernetes, KubeVirt, and Longhorn. Designed for users looking for a flexible and affordable solution to run cloud-native and virtual machine (VM) workloads in your datacenter and at the edge, Harvester provides a single pane of glass for virtualization and cloud-native workload management.\" 1 1 = quote from HarvesterHCI.io . 
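A minimal sketch of the Helm install workflow for Harvester described above follows; the KubeConfig file name ./harvester-cluster.yaml is an assumption and chart values should be adjusted per the Helm chart documentation:
export KUBECONFIG=./harvester-cluster.yaml
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
kubectl create ns hpe-storage
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage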
HPE supports the underlying host OS, SLE Micro, using the HPE CSI Driver for Kubernetes and the Rancher Kubernetes Engine 2 (RKE2) which is a CNCF certified Kubernetes distribution. Harvester embeds KubeVirt and uses standard CSI storage constructs to manage storage resources for virtual machines. Overview Deployment Considerations Limitations Boot from Longhorn iSCSI Networking Example iSCSI Configuration Installing HPE CSI Driver for Kubernetes","title":"Overview"},{"location":"partners/suse_harvester/index.html#deployment_considerations","text":"Many of the features provided by Harvester stem from the capabilities of KubeVirt. The HPE CSI Driver for Kubernetes provides \"ReadWriteMany\" block storage which allows seamless migration of VMs between hosts with disks attached. The NFS Server Provisioner may be used by disparate VMs that need \"ReadWriteMany\" to share data.","title":"Deployment Considerations"},{"location":"partners/suse_harvester/index.html#limitations","text":"These limitations are framed around the integration of the HPE CSI Driver for Kubernetes and Harvester. Other limitations may apply.","title":"Limitations"},{"location":"partners/suse_harvester/index.html#boot_from_longhorn","text":"Since Harvester is a hyper-converged infrastructure platform in its own right, the storage components are already embedded in the platform using Longhorn. Longhorn is designed to run from local server storage and today it's not practical to replace Longhorn with CSI capable storage from HPE. The Harvester servers may use boot from SAN and other means in terms of external storage to provide capacity to Longhorn but Longhorn would still be used to create VM images and machines. Storage provided by platforms supported by the HPE CSI Driver for Kubernetes is complementary and non-boot disks may be easily provisioned and attached to VM workloads. Info The VM boot limitation is solely implemented by Harvester in front of KubeVirt. Any other KubeVirt platform would allow booting from storage resources provided by HPE CSI Driver for Kubernetes.","title":"Boot from Longhorn"},{"location":"partners/suse_harvester/index.html#iscsi_networking","text":"As per best practice HPE recommends using dedicated iSCSI networks for data traffic between the Harvester nodes and the storage platform. Ancillary network configuration of Harvester nodes is managed as a post-install step. Creating network configuration files for Harvester nodes is beyond the scope of this document. Follow the guides provided by Harvester. Update Harvester Configuration After Installation","title":"iSCSI Networking"},{"location":"partners/suse_harvester/index.html#example_iscsi_configuration","text":"In a typical setup the IP addresses are assigned by DHCP on the NIC directly without any bridges, VLANs or bonds. The updates that need to be done to /oem/90_custom.yaml on each compute node to reflect this configuration are described below. Insert the block after the management interface configuration and replace the interface names ens224 and ens256 with the actual interface names on your compute nodes. List the available interfaces on the compute node prompt with ip link . ...
- path: /etc/sysconfig/network/ifcfg-ens224 permissions: 384 owner: 0 group: 0 content: | STARTMODE='onboot' BOOTPROTO='dhcp' DHCLIENT_SET_DEFAULT_ROUTE='no' encoding: \"\" ownerstring: \"\" - path: /etc/sysconfig/network/ifcfg-ens256 permissions: 384 owner: 0 group: 0 content: | STARTMODE='onboot' BOOTPROTO='dhcp' DHCLIENT_SET_DEFAULT_ROUTE='no' encoding: \"\" ownerstring: \"\" ... Reboot the node and verify that IP addresses have been assigned to the NICs by running ip addr show dev on the compute node prompt.","title":"Example iSCSI Configuration"},{"location":"partners/suse_harvester/index.html#installing_hpe_csi_driver_for_kubernetes","text":"The HPE CSI Driver for Kubernetes is installed on Harvester by using the standard procedures for installing the CSI driver with Helm . Helm requires access to the Harvester cluster through the Kubernetes API. You can download the Harvester cluster KubeConfig file by visiting the dashboard on your cluster and clicking \"support\" in the lower left corner of the UI. Note It does not matter if Harvester is managed by Rancher or running standalone. If the cluster is managed by Rancher, then go to the Virtualization Management dashboard and select \"Download KubeConfig\" in the dotted context menu of the cluster.","title":"Installing HPE CSI Driver for Kubernetes"},{"location":"partners/suse_rancher/index.html","text":"Overview \u00b6 SUSE Rancher provides a platform to deploy Kubernetes-as-a-service everywhere. HPE partners with SUSE Rancher to provide effortless management of the CSI driver on managed Kubernetes clusters. This allows our joint customers and channel partners to enable hybrid cloud stateful workloads on Kubernetes. Overview Deployment considerations Supported versions HPE CSI Driver for Kubernetes Rancher Cluster Manager (2.6 and newer) Post install steps Ancillary HPE Storage Apps Deployment considerations \u00b6 Rancher is capable of managing Kubernetes across a broad spectrum of managed and BYO clusters. It's important to understand that the HPE CSI Driver for Kubernetes does not support the same number of combinations as Rancher does. Consult the support matrix on the CSI driver overview page for the supported combinations of the HPE CSI Driver, Kubernetes and supported node operating systems. Supported versions \u00b6 Rancher uses Helm to deploy and manage partner software. The concept of a Helm repository in Rancher is organized under \"Apps\" in the Rancher UI. The HPE CSI Driver for Kubernetes is a partner solution present in the official Partner repository. Rancher release Install methods Recommended CSI driver 2.7 Cluster Manager App Chart latest 2.8 Cluster Manager App Chart latest Tip Learn more about Helm Charts and Apps in the Rancher documentation HPE CSI Driver for Kubernetes \u00b6 The HPE CSI Driver is part of the official Partner repository in Rancher. The CSI driver is deployed on managed Kubernetes clusters like any ordinary \"App\" in Rancher. Note In Rancher 2.5 an \"Apps & Marketplace\" component was introduced in the new \"Cluster Explorer\" interface. This is the new interface moving forward. Upcoming releases of the HPE CSI Driver for Kubernetes will only support installation via \"Apps & Marketplace\". Rancher Cluster Manager (2.6 and newer) \u00b6 Navigate to \"Apps\" and select \"Charts\", search for \"HPE\". Rancher Cluster Explorer Post install steps \u00b6 For Rancher workloads to make use of persistent storage from HPE, a supported backend needs to be configured with a Secret along with a StorageClass .
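For orientation only, a backend Secret and StorageClass typically look similar to the sketch below; every value shown (names, address, credentials and parameters) is a placeholder and the authoritative parameters are documented in the add-a-backend procedure and with each Container Storage Provider:
apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: nimble-csp-svc
  servicePort: "8080"
  backend: 192.168.1.10
  username: admin
  password: admin
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
Apply the manifests with kubectl create -f and reference the StorageClass from PersistentVolumeClaims as usual.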
These procedures are generic regardless of Kubernetes distribution and install method being used. Go ahead and add an HPE storage backend Ancillary HPE Storage Apps \u00b6 Introduced in Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 is the ability to deploy the HPE Storage Array Exporter for Prometheus and HPE CSI Info Metrics Provider for Prometheus directly from the same Rancher Apps interface. These Helm charts have been enhanced to include support for Rancher Monitoring. Tip Make sure to tick \"Enable ServiceMonitor\" in the \"ServiceMonitor settings\" when configuring the ancillary Prometheus apps to work with Rancher Monitoring.","title":"SUSE Rancher"},{"location":"partners/suse_rancher/index.html#overview","text":"SUSE Rancher provides a platform to deploy Kubernetes-as-a-service everywhere. HPE partners with SUSE Rancher to provide effortless management of the CSI driver on managed Kubernetes clusters. This allows our joint customers and channel partners to enable hybrid cloud stateful workloads on Kubernetes. Overview Deployment considerations Supported versions HPE CSI Driver for Kubernetes Rancher Cluster Manager (2.6 and newer) Post install steps Ancillary HPE Storage Apps","title":"Overview"},{"location":"partners/suse_rancher/index.html#deployment_considerations","text":"Rancher is capable of managing Kubernetes across a broad spectrum of managed and BYO clusters. It's important to understand that the HPE CSI Driver for Kubernetes does not support the same amount of combinations Rancher does. Consult the support matrix on the CSI driver overview page for the supported combinations of the HPE CSI Driver, Kubernetes and supported node operating systems.","title":"Deployment considerations"},{"location":"partners/suse_rancher/index.html#supported_versions","text":"Rancher uses Helm to deploy and manage partner software. The concept of a Helm repository in Rancher is organized under \"Apps\" in the Rancher UI. The HPE CSI Driver for Kubernetes is a partner solution present in the official Partner repository. Rancher release Install methods Recommended CSI driver 2.7 Cluster Manager App Chart latest 2.8 Cluster Manager App Chart latest Tip Learn more about Helm Charts and Apps in the Rancher documentation","title":"Supported versions"},{"location":"partners/suse_rancher/index.html#hpe_csi_driver_for_kubernetes","text":"The HPE CSI Driver is part of the official Partner repository in Rancher. The CSI driver is deployed on managed Kubernetes clusters like any ordinary \"App\" in Rancher. Note In Rancher 2.5 an \"Apps & Marketplace\" component was introduced in the new \"Cluster Explorer\" interface. This is the new interface moving forward. Upcoming releases of the HPE CSI Driver for Kubernetes will only support installation via \"Apps & Marketplace\".","title":"HPE CSI Driver for Kubernetes"},{"location":"partners/suse_rancher/index.html#rancher_cluster_manager_26_and_newer","text":"Navigate to \"Apps\" and select \"Charts\", search for \"HPE\". Rancher Cluster Explorer","title":"Rancher Cluster Manager (2.6 and newer)"},{"location":"partners/suse_rancher/index.html#post_install_steps","text":"For Rancher workloads to make use of persistent storage from HPE, a supported backend needs to be configured with a Secret along with a StorageClass . These procedures are generic regardless of Kubernetes distribution and install method being used. 
Go ahead and add an HPE storage backend","title":"Post install steps"},{"location":"partners/suse_rancher/index.html#ancillary_hpe_storage_apps","text":"Introduced in Rancher v2.7 and HPE CSI Driver for Kubernetes v2.3.0 is the ability to deploy the HPE Storage Array Exporter for Prometheus and HPE CSI Info Metrics Provider for Prometheus directly from the same Rancher Apps interface. These Helm charts have been enhanced to include support for Rancher Monitoring. Tip Make sure to tick \"Enable ServiceMonitor\" in the \"ServiceMonitor settings\" when configuring the ancillary Prometheus apps to work with Rancher Monitoring.","title":"Ancillary HPE Storage Apps"},{"location":"partners/tkgi/index.html","text":"Overview \u00b6 VMware Tanzu Kubernetes Grid Integrated Engine (TKGI) is supported by the HPE CSI Driver for Kubernetes. Partnership \u00b6 VMware and HPE have a long standing partnership across each of the product portfolios. Allowing TKGI users to access persistent storage with the HPE CSI Driver accelerates stateful workload performance, scalability and efficiency. Learn more about the partnership and enablement on the VMware Marketplace . Prerequisites \u00b6 It's important to verify that the host OS and Kubernetes version is supported by the HPE CSI Driver. Only iSCSI is supported ( learn why ) Ensure \"Enable Privileged Containers\" is ticked in the TKGI cluster deployment plan Verify versions in the Compatibility and Support table Installation \u00b6 It's highly recommended to use the Helm chart to install the CSI driver as it's required to apply different \"kubeletRootDir\" than the default for the driver to start and work properly. Example workflow. helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/ kubectl create ns hpe-storage helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage \\ --set kubeletRootDir=/var/vcap/data/kubelet Seealso Learn more about the supported parameters of the Helm chart on ArtifactHub . Post Install Steps \u00b6 For TKGI workloads to make use of persistent storage from HPE, a supported backend needs to be configured along with a StorageClass . These procedures are generic regardless of Kubernetes distribution being used. Go ahead and add an HPE storage backend","title":"Tanzu Kubernetes Grid Integrated"},{"location":"partners/tkgi/index.html#overview","text":"VMware Tanzu Kubernetes Grid Integrated Engine (TKGI) is supported by the HPE CSI Driver for Kubernetes.","title":"Overview"},{"location":"partners/tkgi/index.html#partnership","text":"VMware and HPE have a long standing partnership across each of the product portfolios. Allowing TKGI users to access persistent storage with the HPE CSI Driver accelerates stateful workload performance, scalability and efficiency. Learn more about the partnership and enablement on the VMware Marketplace .","title":"Partnership"},{"location":"partners/tkgi/index.html#prerequisites","text":"It's important to verify that the host OS and Kubernetes version is supported by the HPE CSI Driver. Only iSCSI is supported ( learn why ) Ensure \"Enable Privileged Containers\" is ticked in the TKGI cluster deployment plan Verify versions in the Compatibility and Support table","title":"Prerequisites"},{"location":"partners/tkgi/index.html#installation","text":"It's highly recommended to use the Helm chart to install the CSI driver as it's required to apply different \"kubeletRootDir\" than the default for the driver to start and work properly. Example workflow. 
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/ kubectl create ns hpe-storage helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n hpe-storage \\ --set kubeletRootDir=/var/vcap/data/kubelet Seealso Learn more about the supported parameters of the Helm chart on ArtifactHub .","title":"Installation"},{"location":"partners/tkgi/index.html#post_install_steps","text":"For TKGI workloads to make use of persistent storage from HPE, a supported backend needs to be configured along with a StorageClass . These procedures are generic regardless of Kubernetes distribution being used. Go ahead and add an HPE storage backend","title":"Post Install Steps"},{"location":"partners/vmware/index.html","text":"VMware vSphere Container Storage Plug-in \u00b6 VMware vSphere Container Storage Plug-in also known as the upstream vSphere CSI Driver exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. The term Cloud Native Storage (CNS) is the vCenter abstraction point and is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the CNS UI within vCenter. CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE GreenLake for Block Storage, Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use. VMware vSphere Container Storage Plug-in Feature Comparison Important Considerations Deployment Support Feature Comparison \u00b6 Volume parameters available to the vSphere Container Storage Plug-in will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide (includes HPE Alletra Storage MP, Alletra 9000 and 3PAR) or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide (includes HPE Alletra 5000/6000 and dHCI) for list of available features. For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP . 
Feature HPE CSI Driver vSphere Container Storage Plug-in vCenter Cloud Native Storage (CNS) UI Support No GA Dynamic Block PV Provisioning (ReadWriteOnce access mode) GA GA (vVOL) Dynamic File Provisioning (ReadWriteMany access mode) GA GA (vSan Only) Volume Snapshots (CSI) GA GA (vSphere 7.0u3) Volume Cloning from VolumeSnapshot (CSI) GA GA Volume Cloning from PVC (CSI) GA GA Volume Expansion (CSI) GA GA (vSphere 7.0u2) RWO Raw Block Volume (CSI) GA GA RWX/ROX Raw Block Volume (CSI) GA No Generic Ephemeral Volumes (CSI) GA GA Inline Ephemeral Volumes (CSI) GA No Topology (CSI) No GA Volume Health (CSI) No GA (vSan only) CSI Controller multiple replica support No GA Windows support No GA Volume Encryption GA GA (via VMcrypt) Volume Mutator 1 GA No Volume Groups 1 GA No Snapshot Groups 1 GA No Peer Persistence Replication 3 GA No 4 1 = Feature comparison based upon HPE CSI Driver for Kubernetes 2.4.0 and the vSphere Container Storage Plug-in 3.1.2 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers. 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR storage systems. 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up to the vSphere Container Storage Plug-in. Peer Persistence works with the vSphere Container Storage Plug-in when using VMFS datastores. Please refer to Compatibility Matrices for vSphere Container Storage Plug-in for the most up-to-date information. Important Considerations \u00b6 The HPE CSI Driver for Kubernetes is only supported on specific versions of worker node operating systems and Kubernetes versions, these requirements applies to any worker VM running on vSphere. Some Kubernetes distributions, when running on vSphere may only support the vSphere Container Storage Plug-in, such an example is VMware Tanzu. Ensure the Kubernetes distribution being used support 3rd party CSI drivers (such as the HPE CSI Driver) and fulfill the requirements in Features and Capabilities before deciding which CSI driver to use. HPE does not test or qualify the vSphere Container Storage Plug-in for any particular storage backend besides point solutions 1 . As long as the storage platform is supported by vSphere, VMware will support the vSphere Container Storage Plug-in. VMware vSphere with Tanzu and HPE Alletra dHCI 1 HPE provides a turnkey solution for Kubernetes using VMware Tanzu and HPE Alletra dHCI. Learn more . Deployment \u00b6 When considering to use block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help best determine which CSI driver can be deployed within your Kubernetes clusters. Important Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not an available, HPE recommends the use of the VMware vSphere Container Storage Plug-in to deliver block-based persistent storage from HPE GreenLake for Block Storage, Alletra, Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol. The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV). 
Protocol HPE CSI Driver for Kubernetes vSphere Container Storage Plug-in FC Not supported Supported * NVMe-oF Not supported Supported * iSCSI Supported Supported * * = Limited to the SPBM implementation of the underlying storage array. Learn how to deploy the vSphere Container Storage Plug-in: Version 2.x 1 (Kubernetes 1.23-1.25) Version 3.x (Kubernetes 1.24 onwards) 1 = The HPE authored deployment guide for vSphere Container Storage Plug-in 2.4 has been preserved here . Tip Most non-vanilla Kubernetes distributions when deployed on vSphere manage and support the vSphere Container Storage Plug-in directly. That includes Red Hat OpenShift, SUSE Rancher, Charmed Kubernetes (Canonical), Google Anthos and Amazon EKS Anywhere. Support \u00b6 VMware provides enterprise grade support for the vSphere Container Storage Plug-in. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team. For support information on the HPE CSI Driver for Kubernetes, visit Support . For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center .","title":"VMware"},{"location":"partners/vmware/index.html#vmware_vsphere_container_storage_plug-in","text":"VMware vSphere Container Storage Plug-in also known as the upstream vSphere CSI Driver exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. The term Cloud Native Storage (CNS) is the vCenter abstraction point and is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the CNS UI within vCenter. CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE GreenLake for Block Storage, Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use. VMware vSphere Container Storage Plug-in Feature Comparison Important Considerations Deployment Support","title":"VMware vSphere Container Storage Plug-in"},{"location":"partners/vmware/index.html#feature_comparison","text":"Volume parameters available to the vSphere Container Storage Plug-in will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide (includes HPE Alletra Storage MP, Alletra 9000 and 3PAR) or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide (includes HPE Alletra 5000/6000 and dHCI) for list of available features. For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP . 
Feature HPE CSI Driver vSphere Container Storage Plug-in vCenter Cloud Native Storage (CNS) UI Support No GA Dynamic Block PV Provisioning (ReadWriteOnce access mode) GA GA (vVOL) Dynamic File Provisioning (ReadWriteMany access mode) GA GA (vSan Only) Volume Snapshots (CSI) GA GA (vSphere 7.0u3) Volume Cloning from VolumeSnapshot (CSI) GA GA Volume Cloning from PVC (CSI) GA GA Volume Expansion (CSI) GA GA (vSphere 7.0u2) RWO Raw Block Volume (CSI) GA GA RWX/ROX Raw Block Volume (CSI) GA No Generic Ephemeral Volumes (CSI) GA GA Inline Ephemeral Volumes (CSI) GA No Topology (CSI) No GA Volume Health (CSI) No GA (vSan only) CSI Controller multiple replica support No GA Windows support No GA Volume Encryption GA GA (via VMcrypt) Volume Mutator 1 GA No Volume Groups 1 GA No Snapshot Groups 1 GA No Peer Persistence Replication 3 GA No 4 1 = Feature comparison based upon HPE CSI Driver for Kubernetes 2.4.0 and the vSphere Container Storage Plug-in 3.1.2 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers. 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR storage systems. 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up to the vSphere Container Storage Plug-in. Peer Persistence works with the vSphere Container Storage Plug-in when using VMFS datastores. Please refer to Compatibility Matrices for vSphere Container Storage Plug-in for the most up-to-date information.","title":"Feature Comparison"},{"location":"partners/vmware/index.html#important_considerations","text":"The HPE CSI Driver for Kubernetes is only supported on specific versions of worker node operating systems and Kubernetes versions, these requirements applies to any worker VM running on vSphere. Some Kubernetes distributions, when running on vSphere may only support the vSphere Container Storage Plug-in, such an example is VMware Tanzu. Ensure the Kubernetes distribution being used support 3rd party CSI drivers (such as the HPE CSI Driver) and fulfill the requirements in Features and Capabilities before deciding which CSI driver to use. HPE does not test or qualify the vSphere Container Storage Plug-in for any particular storage backend besides point solutions 1 . As long as the storage platform is supported by vSphere, VMware will support the vSphere Container Storage Plug-in. VMware vSphere with Tanzu and HPE Alletra dHCI 1 HPE provides a turnkey solution for Kubernetes using VMware Tanzu and HPE Alletra dHCI. Learn more .","title":"Important Considerations"},{"location":"partners/vmware/index.html#deployment","text":"When considering to use block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help best determine which CSI driver can be deployed within your Kubernetes clusters. Important Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not an available, HPE recommends the use of the VMware vSphere Container Storage Plug-in to deliver block-based persistent storage from HPE GreenLake for Block Storage, Alletra, Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol. 
The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV). Protocol HPE CSI Driver for Kubernetes vSphere Container Storage Plug-in FC Not supported Supported * NVMe-oF Not supported Supported * iSCSI Supported Supported * * = Limited to the SPBM implementation of the underlying storage array. Learn how to deploy the vSphere Container Storage Plug-in: Version 2.x 1 (Kubernetes 1.23-1.25) Version 3.x (Kubernetes 1.24 onwards) 1 = The HPE authored deployment guide for vSphere Container Storage Plug-in 2.4 has been preserved here . Tip Most non-vanilla Kubernetes distributions when deployed on vSphere manage and support the vSphere Container Storage Plug-in directly. That includes Red Hat OpenShift, SUSE Rancher, Charmed Kubernetes (Canonical), Google Anthos and Amazon EKS Anywhere.","title":"Deployment"},{"location":"partners/vmware/index.html#support","text":"VMware provides enterprise grade support for the vSphere Container Storage Plug-in. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team. For support information on the HPE CSI Driver for Kubernetes, visit Support . For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center .","title":"Support"},{"location":"partners/vmware/legacy.html","text":"Deprecated \u00b6 This deployment guide is deprecated. Learn more here . Cloud Native Storage for vSphere \u00b6 Cloud Native Storage (CNS) for vSphere exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. CNS is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the new CNS UI within vCenter. CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use. Tip Check out the tutorial available on YouTube in the Video Gallery on how to configure and use HPE storage with Cloud Native Storage for vSphere. Watch the video in its entirety or skip to configuring Tanzu with HPE storage or configuring the vSphere CSI Driver with HPE storage . 
Deprecated Cloud Native Storage for vSphere Feature Comparison Deployment Prerequisites Configuring the VASA provider Configuring a VM Storage Policy Install the vSphere Cloud Provider Interface (CPI) Check for ProviderID Create a CPI ConfigMap Create a CPI Secret Check that all nodes are tainted Deploy the CPI manifests Verify that the CPI has been successfully deployed Install the vSphere Container Storage Interface (CSI) driver Create a configuration file with vSphere credentials Create a Kubernetes Secret for vSphere credentials Create RBAC, vSphere CSI Controller Deployment and vSphere CSI node DaemonSet Verify the vSphere CSI driver deployment Create a StorageClass Validate Create and Deploy a MongoDB Helm chart Verify Cloud Native Storage in vSphere Support Feature Comparison \u00b6 Volume parameters available to the vSphere CSI Driver will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide for list of available features. For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP . Feature HPE CSI Driver vSphere CSI Driver vCenter Cloud Native Storage (CNS) UI Support No GA Dynamic Block PV Provisioning (ReadWriteOnce access mode) GA GA (vVOL) Dynamic File Provisioning (ReadWriteMany access mode) GA GA (vSan Only) Volume Snapshots (CSI) GA Alpha (2.4.0) Volume Cloning from VolumeSnapshot (CSI) GA No Volume Cloning from PVC (CSI) GA No Volume Expansion (CSI) GA GA (offline only) Raw Block Volume (CSI) GA Alpha Generic Ephemeral Volumes (CSI) GA GA Inline Ephemeral Volumes (CSI) GA No Topology (CSI) No GA Volume Health (CSI) No GA (vSan only) CSI Controller multiple replica support No GA Volume Encryption GA GA (via VMcrypt) Volume Mutator 1 GA No Volume Groups 1 GA No Snapshot Groups 1 GA No Peer Persistence Replication 3 GA No 4 1 = Feature comparison based upon HPE CSI Driver for Kubernetes v2.1.1 and the vSphere CSI Driver v2.4.1 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers. 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra 9000 and Primera storage systems. 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up the vSphere CSI Driver. Peer Persistence works with the vSphere CSI Driver when using VMFS datastores. Please refer to vSphere CSI Driver - Supported Features Matrix for the most up-to-date information. Deployment \u00b6 When considering to use block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help best determine which CSI driver can be deployed within your Kubernetes clusters. Important Due to limitations when exposing physical hardware (i.e. Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not an available, HPE recommends the use of the VMware vSphere CSI driver to deliver block-based persistent storage from HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol. The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV). 
Protocol HPE CSI Driver for Kubernetes vSphere CSI driver FC Not supported Supported* iSCSI Supported Supported* * = Limited to the SPBM implementation of the underlying storage array Prerequisites \u00b6 This guide will cover the configuration and deployment of the vSphere CSI driver. Cloud Native Storage for vSphere uses the VASA provider and Storage Policy Based Management (SPBM) to create First Class Disks on supported arrays. CNS supports VMware vSphere 6.7 U3 and higher. Configuring the VASA provider \u00b6 Refer to the following guides to configure the VASA provider and create a vVol Datastore. Storage Array Guide HPE Alletra 9000 HPE Alletra 9000: VMware ESXi Implementation Guide HPE Primera VMware vVols with HPE Primera Storage HPE Nimble Storage Working with VMware Virtual Volumes HPE Nimble Storage dHCI & HPE Alletra 5000/6000 HPE Nimble Storage dHCI and VMware vSphere New Servers Deployment Guide HPE 3PAR Implementing VMware Virtual Volumes on HPE 3PAR StoreServ Configuring a VM Storage Policy \u00b6 Once the vVol Datastore is created, create a VM Storage Policy. From the vSphere Web Client, click Menu and select Policies and Profiles . Click on VM Storage Policies , and then click Create . Next provide a name for the policy. Click NEXT . Under Datastore specific rules , select either: Enable rules for \"NimbleStorage\" storage Enable rules for \"HPE Primera\" storage Click NEXT . Next click ADD RULE . Choose from the various options available to your array. Below is an example of a VM Storage Policy for Primera. This may vary depending on your requirements and options available within your array. Once complete, click NEXT . Under Storage compatibility, verify the correct vVol datastore is shown as compatible to the options chosen in the previous screen. Click NEXT . Verify everything looks correct and click FINISH . Repeat this process for any additional Storage Policies you may need. Now that we have configured a Storage Policy, we can proceed with the deployment of the vSphere CSI driver. Install the vSphere Cloud Provider Interface (CPI) \u00b6 This is adapted from the following tutorial, please read over to understand all of the vSphere, firewall and guest OS requirements. Deploying a Kubernetes Cluster on vSphere with CSI and CPI Note The following is a simplified single-site configuration to demonstrate how to deploy the vSphere CPI and CSI drivers. Make sure to adapt the configuration to match your environment and needs. Check for ProviderID \u00b6 Check if ProviderID is already configured on your cluster. kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{\"\\n\"}{end}' If this command returns empty, then proceed with configuring the vSphere Cloud Provider. If the ProviderID is set, then you can proceed directly to installing the vSphere CSI Driver . $ kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{\"\\n\"}{end}' vsphere://4238c1a1-e72f-74bf-db48-0d9f4da3e9c9 vsphere://4238ede5-50e1-29b6-1337-be8746a5016c vsphere://4238c6dc-3806-ce36-fd14-5eefe830b227 Create a CPI ConfigMap \u00b6 Create a vsphere.conf file. Note The vsphere.conf is a hardcoded filename used by the vSphere Cloud Provider. Do not change it otherwise the Cloud Provider will not deploy correctly. Set the vCenter server FQDN or IP and vSphere datacenter object name to match your environment. Copy and paste the following. # Global properties in this section will be used for all specified vCenters unless overridden in vCenter section. 
global: port: 443 # Set insecureFlag to true if the vCenter uses a self-signed cert insecureFlag: true # Where to find the Secret used for authentication to vCenter secretName: cpi-global-secret secretNamespace: kube-system # vcenter section vcenter: tenant-k8s: server: datacenters: - Create the ConfigMap from the vsphere.conf file. kubectl create configmap cloud-config --from-file=vsphere.conf -n kube-system Create a CPI Secret \u00b6 The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) Next create the CPI Secret . apiVersion: v1 kind: Secret metadata: name: cpi-global-secret namespace: kube-system stringData: .username: \"Administrator@vsphere.local\" .password: \"VMware1!\" Note The username and password within the Secret are case-sensitive. Inspect the Secret to verify it was created successfully. kubectl describe secret cpi-global-secret -n kube-system The output is similar to this: Name: cpi-global-secret Namespace: kube-system Labels: Annotations: Type: Opaque Data ==== vcenter.example.com.password: 8 bytes vcenter.example.com.username: 27 bytes Check that all nodes are tainted \u00b6 Before installing vSphere Cloud Controller Manager, make sure all nodes are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule . When the kubelet is started with \u201cexternal\u201d cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud provider initializes this node, the kubelet removes this taint. To find your node names, run the following command. kubectl get nodes NAME STATUS ROLES AGE VERSION cp1 Ready control-plane,master 46m v1.20.1 node1 Ready 44m v1.20.1 node2 Ready 44m v1.20.1 To create the taint, run the following command for each node in your cluster. kubectl taint node node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Verify the taint has been applied to each node. kubectl describe nodes | egrep \"Taints:|Name:\" The output is similar to this: Name: cp1 Taints: node-role.kubernetes.io/master:NoSchedule Name: node1 Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Name: node2 Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Deploy the CPI manifests \u00b6 There are 3 manifests that must be deployed to install the vSphere Cloud Provider Interface (CPI). The following example applies the RBAC roles and the RBAC bindings to your Kubernetes cluster. It also deploys the Cloud Controller Manager in a DaemonSet. kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml Verify that the CPI has been successfully deployed \u00b6 Verify vsphere-cloud-controller-manager is running. 
kubectl rollout status ds/vsphere-cloud-controller-manager -n kube-system daemon set \"vsphere-cloud-controller-manager\" successfully rolled out Note If you happen to make an error with the vsphere.conf , simply delete the CPI components and the ConfigMap , make any necessary edits to the vsphere.conf file, and reapply the steps above. Now that the CPI is installed, we can proceed with deploying the vSphere CSI driver. Install the vSphere Container Storage Interface (CSI) driver \u00b6 The following has been adapted from the vSphere CSI driver installation guide. Refer to the official documentation for additional information on how to deploy the vSphere CSI driver. vSphere CSI driver - Installation Create a configuration file with vSphere credentials \u00b6 Since we are connecting to block storage provided from an HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR array, we will create a configuration file for block volumes. Create a csi-vsphere.conf file. Copy and paste the following: [Global] cluster-id = \"csi-vsphere-cluster\" [VirtualCenter \"\"] insecure-flag = \"true\" user = \"Administrator@vsphere.local\" password = \"VMware1!\" port = \"443\" datacenters = \"\" Create a Kubernetes Secret for vSphere credentials \u00b6 Create a Kubernetes Secret that will contain the configuration details to connect to your vSphere environment. kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf -n kube-system Verify that the Secret was created successfully. kubectl get secret vsphere-config-secret -n kube-system NAME TYPE DATA AGE vsphere-config-secret Opaque 1 43s For security purposes, it is advised to remove the csi-vsphere.conf file. Create RBAC, vSphere CSI Controller Deployment and vSphere CSI node DaemonSet \u00b6 Check the official vSphere CSI Driver Github repo for the latest version. vSphere 6.7 U3 kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-controller-deployment.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-node-ds.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/rbac/vsphere-csi-controller-rbac.yaml vSphere 7.0 kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-controller-deployment.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-node-ds.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/rbac/vsphere-csi-controller-rbac.yaml vSphere 7.0U1 kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-controller-deployment.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-node-ds.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/rbac/vsphere-csi-controller-rbac.yaml Verify the vSphere CSI driver deployment \u00b6 Verify that the vSphere CSI driver has been successfully deployed using kubectl rollout status . 
kubectl rollout status deployment/vsphere-csi-controller -n kube-system deployment \"vsphere-csi-controller\" successfully rolled out kubectl rollout status ds/vsphere-csi-node -n kube-system daemon set \"vsphere-csi-node\" successfully rolled out Verify that the vSphere CSI driver CustomResourceDefinition has been registered with Kubernetes. kubectl describe csidriver/csi.vsphere.vmware.com Name: csi.vsphere.vmware.com Namespace: Labels: Annotations: API Version: storage.k8s.io/v1 Kind: CSIDriver Metadata: Creation Timestamp: 2020-11-21T06:27:23Z Managed Fields: API Version: storage.k8s.io/v1beta1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:kubectl.kubernetes.io/last-applied-configuration: f:spec: f:attachRequired: f:podInfoOnMount: f:volumeLifecycleModes: Manager: kubectl-client-side-apply Operation: Update Time: 2020-11-21T06:27:23Z Resource Version: 217131 Self Link: /apis/storage.k8s.io/v1/csidrivers/csi.vsphere.vmware.com UID: bcda2b5c-3c38-4256-9b91-5ed248395113 Spec: Attach Required: true Pod Info On Mount: false Volume Lifecycle Modes: Persistent Events: Also verify that the vSphere CSINodes CustomResourceDefinition has been created. kubectl get csinodes -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.spec.drivers[].name}{\"\\n\"}{end}' cp1 csi.vsphere.vmware.com node1 csi.vsphere.vmware.com node2 csi.vsphere.vmware.com If there are no errors, the vSphere CSI driver has been successfully deployed. Create a StorageClass \u00b6 With the vSphere CSI driver deployed, lets create a StorageClass that can be used by the CSI driver. Important The following steps will be using the example VM Storage Policy created at the beginning of this guide. If you do not have a Storage Policy available, refer to Configuring a VM Storage Policy before proceeding to the next steps. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: primera-default-sc annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: csi.vsphere.vmware.com parameters: storagepolicyname: \"primera-default-profile\" Validate \u00b6 With the vSphere CSI driver deployed and a StorageClass available, lets run through some tests to verify it is working correctly. In this example, we will be deploying a stateful MongoDB application with 3 replicas. The persistent volumes deployed by the vSphere CSI driver will be created using the VM Storage Policy and placed on a compatible vVol datastore. Create and Deploy a MongoDB Helm chart \u00b6 This is an example MongoDB chart using a StatefulSet. The default volume size is 8Gi , if you want to change that use --set persistence.size=50Gi . helm install mongodb \\ --set architecture=replicaset \\ --set replicaSetName=mongod \\ --set replicaCount=3 \\ --set auth.rootPassword=secretpassword \\ --set auth.username=my-user \\ --set auth.password=my-password \\ --set auth.database=my-database \\ bitnami/mongodb Verify that the MongoDB application has been deployed. Wait for pods to start running and PVCs to be created for each replica. kubectl rollout status sts/mongodb Inspect the Pods and PersistentVolumeClaims . 
kubectl get pods,pvc NAME READY STATUS RESTARTS AGE mongod-0 1/1 Running 0 90s mongod-1 1/1 Running 0 71s mongod-2 1/1 Running 0 44s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE datadir-mongodb-0 Bound pvc-fd3994fb-a5fb-460b-ab17-608a71cdc337 50Gi RWO primera-default-sc 13m datadir-mongodb-1 Bound pvc-a3755dbe-210d-4c7b-8ac1-bb0607a2c537 50Gi RWO primera-default-sc 13m datadir-mongodb-2 Bound pvc-22bab0f4-8240-48c1-91b1-3495d038533e 50Gi RWO primera-default-sc 13m To interact with the Mongo replica set, you can connect to the StatefulSet. kubectl exec -it sts/mongod bash root@mongod-0:/# df -h /bitnami/mongodb Filesystem Size Used Avail Use% Mounted on /dev/sdb 49G 374M 47G 1% /bitnami/mongodb We can see that the vSphere CSI driver has successfully provisioned and mounted the persistent volume to /bitnami/mongodb . Verify Cloud Native Storage in vSphere \u00b6 Verify that the volumes are now visible within the Cloud Native Storage interface by logging into the vSphere Web Client. Click on Datacenter , then the Monitor tab. Expand Cloud Native Storage and highlight Container Volumes . From here, we can see the persistent volumes that were created as part of our MongoDB deployment. These should match the kubectl get pvc output from earlier. You can also monitor their storage policy compliance status. This concludes the validations and verifies that all components of vSphere CNS (vSphere CPI and vSphere CSI drivers) are deployed and working correctly. Support \u00b6 VMware provides enterprise grade support for the vSphere CSI driver. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team. For support information on the HPE CSI Driver for Kubernetes, visit Support . For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center .","title":"Deprecated"},{"location":"partners/vmware/legacy.html#deprecated","text":"This deployment guide is deprecated. Learn more here .","title":"Deprecated"},{"location":"partners/vmware/legacy.html#cloud_native_storage_for_vsphere","text":"Cloud Native Storage (CNS) for vSphere exposes vSphere storage and features to Kubernetes users and was introduced in vSphere 6.7 U3. CNS is made up of two parts, a Container Storage Interface (CSI) driver for Kubernetes used to provision storage on vSphere and the CNS Control Plane within vCenter allowing visibility to persistent volumes through the new CNS UI within vCenter. CNS fully supports Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities, with the help of VM Storage Profiles. This storage profile can have multiple array capabilities and data services, depending on the underlying storage you use. HPE primary storage (HPE Primera, Nimble Storage, Nimble Storage dHCI, and 3PAR) has the largest user base of vVols in the market, due to its simplicity to deploy and ease of use. Tip Check out the tutorial available on YouTube in the Video Gallery on how to configure and use HPE storage with Cloud Native Storage for vSphere. Watch the video in its entirety or skip to configuring Tanzu with HPE storage or configuring the vSphere CSI Driver with HPE storage . 
Deprecated Cloud Native Storage for vSphere Feature Comparison Deployment Prerequisites Configuring the VASA provider Configuring a VM Storage Policy Install the vSphere Cloud Provider Interface (CPI) Check for ProviderID Create a CPI ConfigMap Create a CPI Secret Check that all nodes are tainted Deploy the CPI manifests Verify that the CPI has been successfully deployed Install the vSphere Container Storage Interface (CSI) driver Create a configuration file with vSphere credentials Create a Kubernetes Secret for vSphere credentials Create RBAC, vSphere CSI Controller Deployment and vSphere CSI node DaemonSet Verify the vSphere CSI driver deployment Create a StorageClass Validate Create and Deploy a MongoDB Helm chart Verify Cloud Native Storage in vSphere Support","title":"Cloud Native Storage for vSphere"},{"location":"partners/vmware/legacy.html#feature_comparison","text":"Volume parameters available to the vSphere CSI Driver will be dependent upon options exposed through the vSphere SPBM and may not include all volume features available. Please refer to the HPE Primera: VMware ESXi Implementation Guide or VMware vSphere Virtual Volumes on HPE Nimble Storage Implementation Guide for list of available features. For a list of available volume parameters in the HPE CSI Driver for Kubernetes, refer to the respective CSP . Feature HPE CSI Driver vSphere CSI Driver vCenter Cloud Native Storage (CNS) UI Support No GA Dynamic Block PV Provisioning (ReadWriteOnce access mode) GA GA (vVOL) Dynamic File Provisioning (ReadWriteMany access mode) GA GA (vSan Only) Volume Snapshots (CSI) GA Alpha (2.4.0) Volume Cloning from VolumeSnapshot (CSI) GA No Volume Cloning from PVC (CSI) GA No Volume Expansion (CSI) GA GA (offline only) Raw Block Volume (CSI) GA Alpha Generic Ephemeral Volumes (CSI) GA GA Inline Ephemeral Volumes (CSI) GA No Topology (CSI) No GA Volume Health (CSI) No GA (vSan only) CSI Controller multiple replica support No GA Volume Encryption GA GA (via VMcrypt) Volume Mutator 1 GA No Volume Groups 1 GA No Snapshot Groups 1 GA No Peer Persistence Replication 3 GA No 4 1 = Feature comparison based upon HPE CSI Driver for Kubernetes v2.1.1 and the vSphere CSI Driver v2.4.1 2 = HPE and VMware fully support features listed as GA for their respective CSI drivers. 3 = The HPE Remote Copy Peer Persistence feature of the HPE CSI Driver for Kubernetes is only available with HPE Alletra 9000 and Primera storage systems. 4 = Peer Persistence is an HPE Storage specific platform feature that isn't abstracted up the vSphere CSI Driver. Peer Persistence works with the vSphere CSI Driver when using VMFS datastores. Please refer to vSphere CSI Driver - Supported Features Matrix for the most up-to-date information.","title":"Feature Comparison"},{"location":"partners/vmware/legacy.html#deployment","text":"When considering to use block storage within Kubernetes clusters running on VMware, customers need to evaluate which data protocol (FC or iSCSI) is primarily used within their virtualized environment. This will help best determine which CSI driver can be deployed within your Kubernetes clusters. Important Due to limitations when exposing physical hardware (i.e. 
Fibre Channel Host Bus Adapters) to virtualized guest OSs and if iSCSI is not an available, HPE recommends the use of the VMware vSphere CSI driver to deliver block-based persistent storage from HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR arrays to Kubernetes clusters within VMware environments for customers who are using the Fibre Channel protocol. The HPE CSI Driver for Kubernetes does not support N_Port ID Virtualization (NPIV). Protocol HPE CSI Driver for Kubernetes vSphere CSI driver FC Not supported Supported* iSCSI Supported Supported* * = Limited to the SPBM implementation of the underlying storage array","title":"Deployment"},{"location":"partners/vmware/legacy.html#prerequisites","text":"This guide will cover the configuration and deployment of the vSphere CSI driver. Cloud Native Storage for vSphere uses the VASA provider and Storage Policy Based Management (SPBM) to create First Class Disks on supported arrays. CNS supports VMware vSphere 6.7 U3 and higher.","title":"Prerequisites"},{"location":"partners/vmware/legacy.html#configuring_the_vasa_provider","text":"Refer to the following guides to configure the VASA provider and create a vVol Datastore. Storage Array Guide HPE Alletra 9000 HPE Alletra 9000: VMware ESXi Implementation Guide HPE Primera VMware vVols with HPE Primera Storage HPE Nimble Storage Working with VMware Virtual Volumes HPE Nimble Storage dHCI & HPE Alletra 5000/6000 HPE Nimble Storage dHCI and VMware vSphere New Servers Deployment Guide HPE 3PAR Implementing VMware Virtual Volumes on HPE 3PAR StoreServ","title":"Configuring the VASA provider"},{"location":"partners/vmware/legacy.html#configuring_a_vm_storage_policy","text":"Once the vVol Datastore is created, create a VM Storage Policy. From the vSphere Web Client, click Menu and select Policies and Profiles . Click on VM Storage Policies , and then click Create . Next provide a name for the policy. Click NEXT . Under Datastore specific rules , select either: Enable rules for \"NimbleStorage\" storage Enable rules for \"HPE Primera\" storage Click NEXT . Next click ADD RULE . Choose from the various options available to your array. Below is an example of a VM Storage Policy for Primera. This may vary depending on your requirements and options available within your array. Once complete, click NEXT . Under Storage compatibility, verify the correct vVol datastore is shown as compatible to the options chosen in the previous screen. Click NEXT . Verify everything looks correct and click FINISH . Repeat this process for any additional Storage Policies you may need. Now that we have configured a Storage Policy, we can proceed with the deployment of the vSphere CSI driver.","title":"Configuring a VM Storage Policy"},{"location":"partners/vmware/legacy.html#install_the_vsphere_cloud_provider_interface_cpi","text":"This is adapted from the following tutorial, please read over to understand all of the vSphere, firewall and guest OS requirements. Deploying a Kubernetes Cluster on vSphere with CSI and CPI Note The following is a simplified single-site configuration to demonstrate how to deploy the vSphere CPI and CSI drivers. Make sure to adapt the configuration to match your environment and needs.","title":"Install the vSphere Cloud Provider Interface (CPI)"},{"location":"partners/vmware/legacy.html#check_for_providerid","text":"Check if ProviderID is already configured on your cluster. 
kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{\"\\n\"}{end}' If this command returns empty, then proceed with configuring the vSphere Cloud Provider. If the ProviderID is set, then you can proceed directly to installing the vSphere CSI Driver . $ kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{\"\\n\"}{end}' vsphere://4238c1a1-e72f-74bf-db48-0d9f4da3e9c9 vsphere://4238ede5-50e1-29b6-1337-be8746a5016c vsphere://4238c6dc-3806-ce36-fd14-5eefe830b227","title":"Check for ProviderID"},{"location":"partners/vmware/legacy.html#create_a_cpi_configmap","text":"Create a vsphere.conf file. Note The vsphere.conf is a hardcoded filename used by the vSphere Cloud Provider. Do not change it otherwise the Cloud Provider will not deploy correctly. Set the vCenter server FQDN or IP and vSphere datacenter object name to match your environment. Copy and paste the following. # Global properties in this section will be used for all specified vCenters unless overridden in vCenter section. global: port: 443 # Set insecureFlag to true if the vCenter uses a self-signed cert insecureFlag: true # Where to find the Secret used for authentication to vCenter secretName: cpi-global-secret secretNamespace: kube-system # vcenter section vcenter: tenant-k8s: server: datacenters: - Create the ConfigMap from the vsphere.conf file. kubectl create configmap cloud-config --from-file=vsphere.conf -n kube-system","title":"Create a CPI ConfigMap"},{"location":"partners/vmware/legacy.html#create_a_cpi_secret","text":"The below YAML declarations are meant to be created with kubectl create . Either copy the content to a file on the host where kubectl is being executed, or copy & paste into the terminal, like this: kubectl create -f- < paste the YAML > ^D (CTRL + D) Next create the CPI Secret . apiVersion: v1 kind: Secret metadata: name: cpi-global-secret namespace: kube-system stringData: .username: \"Administrator@vsphere.local\" .password: \"VMware1!\" Note The username and password within the Secret are case-sensitive. Inspect the Secret to verify it was created successfully. kubectl describe secret cpi-global-secret -n kube-system The output is similar to this: Name: cpi-global-secret Namespace: kube-system Labels: Annotations: Type: Opaque Data ==== vcenter.example.com.password: 8 bytes vcenter.example.com.username: 27 bytes","title":"Create a CPI Secret"},{"location":"partners/vmware/legacy.html#check_that_all_nodes_are_tainted","text":"Before installing vSphere Cloud Controller Manager, make sure all nodes are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule . When the kubelet is started with \u201cexternal\u201d cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud provider initializes this node, the kubelet removes this taint. To find your node names, run the following command. kubectl get nodes NAME STATUS ROLES AGE VERSION cp1 Ready control-plane,master 46m v1.20.1 node1 Ready 44m v1.20.1 node2 Ready 44m v1.20.1 To create the taint, run the following command for each node in your cluster. kubectl taint node node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Verify the taint has been applied to each node. 
kubectl describe nodes | egrep \"Taints:|Name:\" The output is similar to this: Name: cp1 Taints: node-role.kubernetes.io/master:NoSchedule Name: node1 Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Name: node2 Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule","title":"Check that all nodes are tainted"},{"location":"partners/vmware/legacy.html#deploy_the_cpi_manifests","text":"There are 3 manifests that must be deployed to install the vSphere Cloud Provider Interface (CPI). The following example applies the RBAC roles and the RBAC bindings to your Kubernetes cluster. It also deploys the Cloud Controller Manager in a DaemonSet. kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml","title":"Deploy the CPI manifests"},{"location":"partners/vmware/legacy.html#verify_that_the_cpi_has_been_successfully_deployed","text":"Verify vsphere-cloud-controller-manager is running. kubectl rollout status ds/vsphere-cloud-controller-manager -n kube-system daemon set \"vsphere-cloud-controller-manager\" successfully rolled out Note If you happen to make an error with the vsphere.conf , simply delete the CPI components and the ConfigMap , make any necessary edits to the vsphere.conf file, and reapply the steps above. Now that the CPI is installed, we can proceed with deploying the vSphere CSI driver.","title":"Verify that the CPI has been successfully deployed"},{"location":"partners/vmware/legacy.html#install_the_vsphere_container_storage_interface_csi_driver","text":"The following has been adapted from the vSphere CSI driver installation guide. Refer to the official documentation for additional information on how to deploy the vSphere CSI driver. vSphere CSI driver - Installation","title":"Install the vSphere Container Storage Interface (CSI) driver"},{"location":"partners/vmware/legacy.html#create_a_configuration_file_with_vsphere_credentials","text":"Since we are connecting to block storage provided from an HPE Primera, Nimble Storage, Nimble Storage dHCI or 3PAR array, we will create a configuration file for block volumes. Create a csi-vsphere.conf file. Copy and paste the following: [Global] cluster-id = \"csi-vsphere-cluster\" [VirtualCenter \"\"] insecure-flag = \"true\" user = \"Administrator@vsphere.local\" password = \"VMware1!\" port = \"443\" datacenters = \"\"","title":"Create a configuration file with vSphere credentials"},{"location":"partners/vmware/legacy.html#create_a_kubernetes_secret_for_vsphere_credentials","text":"Create a Kubernetes Secret that will contain the configuration details to connect to your vSphere environment. kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf -n kube-system Verify that the Secret was created successfully. 
kubectl get secret vsphere-config-secret -n kube-system NAME TYPE DATA AGE vsphere-config-secret Opaque 1 43s For security purposes, it is advised to remove the csi-vsphere.conf file.","title":"Create a Kubernetes Secret for vSphere credentials"},{"location":"partners/vmware/legacy.html#create_rbac_vsphere_csi_controller_deployment_and_vsphere_csi_node_daemonset","text":"Check the official vSphere CSI Driver Github repo for the latest version. vSphere 6.7 U3 kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-controller-deployment.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/deploy/vsphere-csi-node-ds.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-67u3/rbac/vsphere-csi-controller-rbac.yaml vSphere 7.0 kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-controller-deployment.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/deploy/vsphere-csi-node-ds.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0/rbac/vsphere-csi-controller-rbac.yaml vSphere 7.0U1 kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-controller-deployment.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/deploy/vsphere-csi-node-ds.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/v2.1.0/vsphere-7.0u1/rbac/vsphere-csi-controller-rbac.yaml","title":"Create RBAC, vSphere CSI Controller Deployment and vSphere CSI node DaemonSet"},{"location":"partners/vmware/legacy.html#verify_the_vsphere_csi_driver_deployment","text":"Verify that the vSphere CSI driver has been successfully deployed using kubectl rollout status . kubectl rollout status deployment/vsphere-csi-controller -n kube-system deployment \"vsphere-csi-controller\" successfully rolled out kubectl rollout status ds/vsphere-csi-node -n kube-system daemon set \"vsphere-csi-node\" successfully rolled out Verify that the vSphere CSI driver CustomResourceDefinition has been registered with Kubernetes. kubectl describe csidriver/csi.vsphere.vmware.com Name: csi.vsphere.vmware.com Namespace: Labels: Annotations: API Version: storage.k8s.io/v1 Kind: CSIDriver Metadata: Creation Timestamp: 2020-11-21T06:27:23Z Managed Fields: API Version: storage.k8s.io/v1beta1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:kubectl.kubernetes.io/last-applied-configuration: f:spec: f:attachRequired: f:podInfoOnMount: f:volumeLifecycleModes: Manager: kubectl-client-side-apply Operation: Update Time: 2020-11-21T06:27:23Z Resource Version: 217131 Self Link: /apis/storage.k8s.io/v1/csidrivers/csi.vsphere.vmware.com UID: bcda2b5c-3c38-4256-9b91-5ed248395113 Spec: Attach Required: true Pod Info On Mount: false Volume Lifecycle Modes: Persistent Events: Also verify that the vSphere CSINodes CustomResourceDefinition has been created. 
kubectl get csinodes -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.spec.drivers[].name}{\"\\n\"}{end}' cp1 csi.vsphere.vmware.com node1 csi.vsphere.vmware.com node2 csi.vsphere.vmware.com If there are no errors, the vSphere CSI driver has been successfully deployed.","title":"Verify the vSphere CSI driver deployment"},{"location":"partners/vmware/legacy.html#create_a_storageclass","text":"With the vSphere CSI driver deployed, lets create a StorageClass that can be used by the CSI driver. Important The following steps will be using the example VM Storage Policy created at the beginning of this guide. If you do not have a Storage Policy available, refer to Configuring a VM Storage Policy before proceeding to the next steps. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: primera-default-sc annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: csi.vsphere.vmware.com parameters: storagepolicyname: \"primera-default-profile\"","title":"Create a StorageClass"},{"location":"partners/vmware/legacy.html#validate","text":"With the vSphere CSI driver deployed and a StorageClass available, lets run through some tests to verify it is working correctly. In this example, we will be deploying a stateful MongoDB application with 3 replicas. The persistent volumes deployed by the vSphere CSI driver will be created using the VM Storage Policy and placed on a compatible vVol datastore.","title":"Validate"},{"location":"partners/vmware/legacy.html#create_and_deploy_a_mongodb_helm_chart","text":"This is an example MongoDB chart using a StatefulSet. The default volume size is 8Gi , if you want to change that use --set persistence.size=50Gi . helm install mongodb \\ --set architecture=replicaset \\ --set replicaSetName=mongod \\ --set replicaCount=3 \\ --set auth.rootPassword=secretpassword \\ --set auth.username=my-user \\ --set auth.password=my-password \\ --set auth.database=my-database \\ bitnami/mongodb Verify that the MongoDB application has been deployed. Wait for pods to start running and PVCs to be created for each replica. kubectl rollout status sts/mongodb Inspect the Pods and PersistentVolumeClaims . kubectl get pods,pvc NAME READY STATUS RESTARTS AGE mongod-0 1/1 Running 0 90s mongod-1 1/1 Running 0 71s mongod-2 1/1 Running 0 44s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE datadir-mongodb-0 Bound pvc-fd3994fb-a5fb-460b-ab17-608a71cdc337 50Gi RWO primera-default-sc 13m datadir-mongodb-1 Bound pvc-a3755dbe-210d-4c7b-8ac1-bb0607a2c537 50Gi RWO primera-default-sc 13m datadir-mongodb-2 Bound pvc-22bab0f4-8240-48c1-91b1-3495d038533e 50Gi RWO primera-default-sc 13m To interact with the Mongo replica set, you can connect to the StatefulSet. kubectl exec -it sts/mongod bash root@mongod-0:/# df -h /bitnami/mongodb Filesystem Size Used Avail Use% Mounted on /dev/sdb 49G 374M 47G 1% /bitnami/mongodb We can see that the vSphere CSI driver has successfully provisioned and mounted the persistent volume to /bitnami/mongodb .","title":"Create and Deploy a MongoDB Helm chart"},{"location":"partners/vmware/legacy.html#verify_cloud_native_storage_in_vsphere","text":"Verify that the volumes are now visible within the Cloud Native Storage interface by logging into the vSphere Web Client. Click on Datacenter , then the Monitor tab. Expand Cloud Native Storage and highlight Container Volumes . From here, we can see the persistent volumes that were created as part of our MongoDB deployment. These should match the kubectl get pvc output from earlier. 
You can also monitor their storage policy compliance status. This concludes the validations and verifies that all components of vSphere CNS (vSphere CPI and vSphere CSI drivers) are deployed and working correctly.","title":"Verify Cloud Native Storage in vSphere"},{"location":"partners/vmware/legacy.html#support","text":"VMware provides enterprise grade support for the vSphere CSI driver. Please use VMware Support Services to file a customer support ticket to engage the VMware global support team. For support information on the HPE CSI Driver for Kubernetes, visit Support . For support with other HPE related technologies, visit the Hewlett Packard Enterprise Support Center .","title":"Support"},{"location":"welcome/index.html","text":"Choose your platform \u00b6 HPE provides a broad portfolio of products that integrate with Kubernetes and neighboring ecosystems. The following table provides an overview of integrations available for each primary storage platform. Ecosystem HPE Alletra 5000/6000 and Nimble HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Kubernetes HPE CSI Driver with Alletra 6000 CSP HPE CSI Driver with Alletra Storage MP CSP Looking to deploy the CSI driver ? Help me choose \u00b6 Interested in acquiring a persistent storage solution for your Kubernetes project? Criteria HPE Alletra 5000/6000 HPE Alletra Storage MP Availability 99.9999% 100% Workloads Business-critical Mission-critical Learn more hpe.com/storage/alletra hpe.com/storage/greenlake Other HPE storage platforms \u00b6 Can't find what you're looking for? Check out hpe.com/storage for additional HPE storage platforms.","title":"Get started!"},{"location":"welcome/index.html#choose_your_platform","text":"HPE provides a broad portfolio of products that integrate with Kubernetes and neighboring ecosystems. The following table provides an overview of integrations available for each primary storage platform. Ecosystem HPE Alletra 5000/6000 and Nimble HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR Kubernetes HPE CSI Driver with Alletra 6000 CSP HPE CSI Driver with Alletra Storage MP CSP Looking to deploy the CSI driver ?","title":"Choose your platform"},{"location":"welcome/index.html#help_me_choose","text":"Interested in acquiring a persistent storage solution for your Kubernetes project? Criteria HPE Alletra 5000/6000 HPE Alletra Storage MP Availability 99.9999% 100% Workloads Business-critical Mission-critical Learn more hpe.com/storage/alletra hpe.com/storage/greenlake","title":"Help me choose"},{"location":"welcome/index.html#other_hpe_storage_platforms","text":"Can't find what you're looking for? Check out hpe.com/storage for additional HPE storage platforms.","title":"Other HPE storage platforms"}]} \ No newline at end of file diff --git a/search/worker.js b/search/worker.js new file mode 100644 index 00000000..8628dbce --- /dev/null +++ b/search/worker.js @@ -0,0 +1,133 @@ +var base_path = 'function' === typeof importScripts ? '.' 
: '/search/'; +var allowSearch = false; +var index; +var documents = {}; +var lang = ['en']; +var data; + +function getScript(script, callback) { + console.log('Loading script: ' + script); + $.getScript(base_path + script).done(function () { + callback(); + }).fail(function (jqxhr, settings, exception) { + console.log('Error: ' + exception); + }); +} + +function getScriptsInOrder(scripts, callback) { + if (scripts.length === 0) { + callback(); + return; + } + getScript(scripts[0], function() { + getScriptsInOrder(scripts.slice(1), callback); + }); +} + +function loadScripts(urls, callback) { + if( 'function' === typeof importScripts ) { + importScripts.apply(null, urls); + callback(); + } else { + getScriptsInOrder(urls, callback); + } +} + +function onJSONLoaded () { + data = JSON.parse(this.responseText); + var scriptsToLoad = ['lunr.js']; + if (data.config && data.config.lang && data.config.lang.length) { + lang = data.config.lang; + } + if (lang.length > 1 || lang[0] !== "en") { + scriptsToLoad.push('lunr.stemmer.support.js'); + if (lang.length > 1) { + scriptsToLoad.push('lunr.multi.js'); + } + if (lang.includes("ja") || lang.includes("jp")) { + scriptsToLoad.push('tinyseg.js'); + } + for (var i=0; i < lang.length; i++) { + if (lang[i] != 'en') { + scriptsToLoad.push(['lunr', lang[i], 'js'].join('.')); + } + } + } + loadScripts(scriptsToLoad, onScriptsLoaded); +} + +function onScriptsLoaded () { + console.log('All search scripts loaded, building Lunr index...'); + if (data.config && data.config.separator && data.config.separator.length) { + lunr.tokenizer.separator = new RegExp(data.config.separator); + } + + if (data.index) { + index = lunr.Index.load(data.index); + data.docs.forEach(function (doc) { + documents[doc.location] = doc; + }); + console.log('Lunr pre-built index loaded, search ready'); + } else { + index = lunr(function () { + if (lang.length === 1 && lang[0] !== "en" && lunr[lang[0]]) { + this.use(lunr[lang[0]]); + } else if (lang.length > 1) { + this.use(lunr.multiLanguage.apply(null, lang)); // spread operator not supported in all browsers: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator#Browser_compatibility + } + this.field('title'); + this.field('text'); + this.ref('location'); + + for (var i=0; i < data.docs.length; i++) { + var doc = data.docs[i]; + this.add(doc); + documents[doc.location] = doc; + } + }); + console.log('Lunr index built, search ready'); + } + allowSearch = true; + postMessage({config: data.config}); + postMessage({allowSearch: allowSearch}); +} + +function init () { + var oReq = new XMLHttpRequest(); + oReq.addEventListener("load", onJSONLoaded); + var index_path = base_path + '/search_index.json'; + if( 'function' === typeof importScripts ){ + index_path = 'search_index.json'; + } + oReq.open("GET", index_path); + oReq.send(); +} + +function search (query) { + if (!allowSearch) { + console.error('Assets for search still loading'); + return; + } + + var resultDocuments = []; + var results = index.search(query); + for (var i=0; i < results.length; i++){ + var result = results[i]; + doc = documents[result.ref]; + doc.summary = doc.text.substring(0, 200); + resultDocuments.push(doc); + } + return resultDocuments; +} + +if( 'function' === typeof importScripts ) { + onmessage = function (e) { + if (e.data.init) { + init(); + } else if (e.data.query) { + postMessage({ results: search(e.data.query) }); + } else { + console.error("Worker - Unrecognized message: " + e); + } + }; +} diff --git a/sitemap.xml 
b/sitemap.xml new file mode 100644 index 00000000..cb930e1a --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,238 @@ + + + + https://scod.hpedev.io/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/container_storage_provider/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/container_storage_provider/hpe_alletra_6000/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/container_storage_provider/hpe_alletra_storage_mp/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/container_storage_provider/hpe_cloud_volumes/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/archive.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/deployment.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/diagnostics.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/install_legacy.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/metrics.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/monitor.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/operations.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/standalone_nfs.html + 2024-08-14 + daily + + + https://scod.hpedev.io/csi_driver/using.html + 2024-08-14 + daily + + + https://scod.hpedev.io/docker_volume_plugins/hpe_cloud_volumes/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/docker_volume_plugins/hpe_nimble_storage/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/ezmeral/install.html + 2024-08-14 + daily + + + https://scod.hpedev.io/flexvolume_driver/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/flexvolume_driver/container_provider/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/flexvolume_driver/dory/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/flexvolume_driver/hpe_3par_primera_installer/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/learn/containers101/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/learn/csi_primitives/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/learn/csi_workshop/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/learn/introduction_to_containers/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/learn/persistent_storage/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/learn/video_gallery/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/legacy/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/legal/contributing/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/legal/license/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/legal/notices/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/legal/support/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/amazon_eks_anywhere/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/canonical/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/cohesity/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/commvault/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/kasten/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/mirantis/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/redhat_openshift/index.html + 2024-08-14 + daily + + + 
https://scod.hpedev.io/partners/suse_harvester/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/suse_rancher/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/tkgi/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/vmware/index.html + 2024-08-14 + daily + + + https://scod.hpedev.io/partners/vmware/legacy.html + 2024-08-14 + daily + + + https://scod.hpedev.io/welcome/index.html + 2024-08-14 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..e295bd4e Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/welcome/img/alletra6000.png b/welcome/img/alletra6000.png new file mode 100644 index 00000000..e3a650e2 Binary files /dev/null and b/welcome/img/alletra6000.png differ diff --git a/welcome/img/alletra9000.png b/welcome/img/alletra9000.png new file mode 100644 index 00000000..0265775c Binary files /dev/null and b/welcome/img/alletra9000.png differ diff --git a/welcome/img/alletramp.png b/welcome/img/alletramp.png new file mode 100644 index 00000000..3c4ec39c Binary files /dev/null and b/welcome/img/alletramp.png differ diff --git a/welcome/index.html b/welcome/index.html new file mode 100644 index 00000000..a5f7b671 --- /dev/null +++ b/welcome/index.html @@ -0,0 +1,296 @@ + + + + + + + + + + + + + + + + + + Get started! - SCOD.HPEDEV.IO + + + + + + + + + + + + + + + + + +

Choose your platform


HPE provides a broad portfolio of products that integrate with Kubernetes and neighboring ecosystems. The following table provides an overview of integrations available for each primary storage platform.

Ecosystem  | HPE Alletra 5000/6000 and Nimble     | HPE Alletra Storage MP, Alletra 9000, Primera and 3PAR
Kubernetes | HPE CSI Driver with Alletra 6000 CSP | HPE CSI Driver with Alletra Storage MP CSP

Looking to deploy the CSI driver?
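For orientation, here is a minimal, hedged sketch of what consuming either platform from Kubernetes looks like: a StorageClass that names the HPE CSI Driver's csi.hpe.com provisioner. The class name and the Secret reference below are hypothetical placeholders, and the platform-specific parameters are covered on the CSP pages referenced in the table above.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard                                  # hypothetical class name
provisioner: csi.hpe.com                              # HPE CSI Driver for Kubernetes
parameters:
  # Placeholder Secret reference; actual backend credentials and
  # CSP-specific parameters are described in the CSP documentation.
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
allowVolumeExpansion: true

A PersistentVolumeClaim that sets storageClassName: hpe-standard would then be provisioned by the CSP for the selected platform.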

Help me choose


Interested in acquiring a persistent storage solution for your Kubernetes project?

Criteria     | HPE Alletra 5000/6000    | HPE Alletra Storage MP
Availability | 99.9999%                 | 100%
Workloads    | Business-critical        | Mission-critical
Learn more   | hpe.com/storage/alletra  | hpe.com/storage/greenlake

Other HPE storage platforms


Can't find what you're looking for? Check out hpe.com/storage for additional HPE storage platforms.

Copyright 2020-2024 Hewlett Packard Enterprise Development LP