


Typically, Persistent Volume Claims on Kubernetes are treated as singular entities, completely decoupled from your workload; the actual physical location doesn't really matter. But what if you want an atomic operation, where all of the Persistent Volume Claims that make up an application in a microservice architecture are protected at once to ensure referential integrity? Would you stop the application, sequence the operation, or take a shotgun approach and hope for the best?

In this blog post, we'll use the HPE CSI Driver for Kubernetes to create Volume Groups that allow users to group Persistent Volume Claims together, and then use those Volume Groups to perform CSI Volume Snapshots through Snapshot Groups. In other storage infrastructure management systems, "Volume Groups" are usually referred to as "Consistency Groups", and creating volume snapshots with referential integrity this way is an industry standard. This capability was introduced in the HPE CSI Driver for Kubernetes v1.4.0, and more information about the release may be found on Around The Storage Block.

TL;DR

A variant of the demonstrative steps below has been captured in a screencast that is available on YouTube. If you prefer watching and listening instead of reading, please go ahead and watch the screencast. Just don't forget to come back to read the "Learn more" section at the end of this article for important information.
The examples we're going to walk through require that the HPE CSI Driver for Kubernetes v1.4.0 or later has been installed, along with the CSI external snapshotter. Create a VolumeSnapshotClass and, of course, a StorageClass. No particular parameters are needed in either the VolumeSnapshotClass or the StorageClass, but the backend Secret is assumed to be named "hpe-backend" and to reside in the "hpe-storage" Namespace. In the examples below we're using HPE Nimble Storage, but any Container Storage Provider (CSP) that supports VolumeGroups and SnapshotGroups will work.
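For reference, a minimal pair of class definitions could look like the sketch below. The class names are illustrative, and the parameters are the standard CSI sidecar Secret references pointing at the "hpe-backend" Secret; adjust the filesystem type and policies to suit your environment.

```yaml
# Sketch only: class names are illustrative; the parameters are the standard
# CSI Secret references resolved by the external-provisioner and snapshotter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
```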
In order to illustrate that multiple snapshots are being created, either pick an application that requires multiple volumes or deploy a microservice stack comprised of multiple stateful applications. In this example we'll use WordPress from the bitnami/wordpress Helm chart.

Add the Bitnami repo:

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
```

We're using the following "values" file, wp-values.yaml, for the deployment:

```yaml
mariadb:
  architecture: replication
```

With replication enabled, the chart deploys a primary and a secondary MariaDB, each with its own claim, in addition to the claim for WordPress itself.
Install the WordPress chart:

```console
helm install my-wordpress bitnami/wordpress -f wp-values.yaml
```

Once deployed, there should be three PersistentVolumeClaims on the cluster, among them:

```console
persistentvolumeclaim/data-my-wordpress-mariadb-primary-0
```
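As a quick check, listing the claims by resource name prints them in exactly the persistentvolumeclaim/&lt;name&gt; form shown above. The names of the secondary MariaDB claim and the WordPress claim vary with the chart version, so the count of three is the thing to verify here:

```console
kubectl get pvc -o name
```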

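As a preview of where these claims are headed, grouping them is done with the VolumeGroup CRDs that ship with the HPE CSI Driver: create a VolumeGroupClass and a VolumeGroup, then annotate each PersistentVolumeClaim into the group. The sketch below is an assumption based on the driver's documented CRD shapes rather than output from this walkthrough; the class and group names, and the parameter keys, are illustrative.

```yaml
# Assumption: CRD shapes and parameter keys per the HPE CSI Driver docs;
# the names here are illustrative.
apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
  name: my-volume-group-class
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
  # Assumed parameter keys pointing the group provisioner at the backend Secret.
  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
---
apiVersion: storage.hpe.com/v1
kind: VolumeGroup
metadata:
  name: my-wordpress-group
spec:
  volumeGroupClassName: my-volume-group-class
```

Membership would then be declared per claim, for example with kubectl annotate pvc/data-my-wordpress-mariadb-primary-0 csi.hpe.com/volume-group=my-wordpress-group, and a SnapshotGroup pointed at the VolumeGroup takes all the snapshots in one atomic operation.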