Gitlab CI/CD with CSI PowerMax

TL;DR

Watch the basic deployment & snapshot-based deployment videos on YouTube and check the .gitlab-ci-cd.yaml on GitLab.

The premise

For the first release of the CSI Driver for PowerMax, we wanted to show the dynamic PV provisioning and snapshot capabilities.

To present a realistic scenario, we used GitLab CI/CD, its Kubernetes runner, and, of course, the CSI Driver.

The application itself is a fork of the Vue.js TODO example app, which we modified to use Sinatra as an API provider and SQLite to store the TODOs.

The implementation

The concept is:

  • the master branch corresponds to the latest image and is the production environment
  • anytime we push a new branch to GitLab we:
    • build the image
    • take a snapshot of the production PV
    • create an environment to access the new app
  • new commits on the branch will keep using their own environment with an independent PV
  • on branch merge:
    • the dedicated environment and related PV are deleted
    • the production is redeployed with the latest image
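
As a rough sketch of that flow, a trimmed-down .gitlab-ci.yml might look like the following; the job names, chart path, release names, and variables are illustrative placeholders, not the exact ones from the project:

```yaml
stages:
  - build
  - deploy
  - cleanup

build-image:
  stage: build
  script:
    # Build and push an image tagged with the short commit SHA
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy-review:
  stage: deploy
  script:
    # One Helm release per branch; the chart decides whether to create a fresh
    # PV (production) or clone the production data from a snapshot
    - helm upgrade --install "todo-$CI_COMMIT_REF_SLUG" ./chart --set image.tag=$CI_COMMIT_SHORT_SHA --set branch=$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop-review
  except:
    - master

stop-review:
  stage: cleanup
  script:
    # Deleting the release removes the branch environment and its dedicated PVC/PV
    - helm delete "todo-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
  except:
    - master

deploy-production:
  stage: deploy
  script:
    - helm upgrade --install todo-production ./chart --set image.tag=$CI_COMMIT_SHORT_SHA --set branch=latest
  environment:
    name: production
  only:
    - master
```

The on_stop job is what ties the branch environment to the merge: once the branch is merged and its environment stopped, the Helm release and the PV that came with it are removed.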

Most of the magic on the storage layer happens in the PVC and snapshot definitions, together with the Helm variables.

We can see that, if the branch is the latest (i.e. production), we deploy a dedicated volume, and only the first time. For every other branch, we start from a snapshot of the production volume taken when the branch is first created.
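
As an illustration only (the claim names, value names, and storage/snapshot class names below are made up, not taken from the actual chart), the Helm-templated PVC and snapshot definitions could look roughly like this, using the v1alpha1 snapshot API the driver supported at the time:

```yaml
# templates/pvc.yaml -- hypothetical sketch
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-data-{{ .Values.branch }}
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powermax
{{- if ne .Values.branch "latest" }}
  # Review branches clone the production data from a snapshot
  dataSource:
    name: todo-snap-{{ .Values.branch }}
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
{{- end }}
  resources:
    requests:
      storage: 8Gi
---
# templates/snapshot.yaml -- snapshot of the production PVC (v1alpha1 API)
{{- if ne .Values.branch "latest" }}
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: todo-snap-{{ .Values.branch }}
spec:
  snapshotClassName: powermax-snapclass
  source:
    name: todo-data-latest
    kind: PersistentVolumeClaim
{{- end }}
```

Because the dataSource points at a VolumeSnapshot, the CSI driver provisions a brand-new volume populated from the snapshot, which is why each branch ends up with its own independent copy of the production data.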

Under the hood, we will have two independent volumes in PowerMax. For a deeper dive on PowerMax SnapVX (i.e. PowerMax local replicas), you can check that white paper.

To avoid the storage array being bloated by the project, we also defined a ResourceQuota on the namespace.
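
A minimal sketch of such a quota, with illustrative limits rather than the project's actual ones, could be:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: todo-storage-quota
  namespace: todo
spec:
  hard:
    # Cap both the number of claims and the total storage the namespace can request
    persistentvolumeclaims: "10"
    requests.storage: 100Gi
```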


Limitations

In the current version of the CSI driver (v1.2), the snapshot API is v1alpha1, which is not compatible with Kubernetes v1.17 and beyond.

A snapshot is only accessible from the same namespace and cannot be used to restore a volume in a different namespace.

Other tips

One of the tricks is to put the GitLab variable CI_COMMIT_SHORT_SHA in the Helm template; that way, we make sure the manifest is re-processed and therefore redeployed with the latest build by Helm.
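
For example (the label and value names are illustrative, not the ones from the actual chart), passing the short SHA as the image tag value and referencing it in the pod template guarantees the rendered manifest changes on every build:

```yaml
# Deployment template excerpt -- hypothetical sketch
spec:
  template:
    metadata:
      labels:
        # Changes on every commit, forcing Helm to roll out new pods
        commit: "{{ .Values.image.tag }}"
    spec:
      containers:
        - name: todo
          image: "registry.example.com/todo:{{ .Values.image.tag }}"
```

The value itself would be supplied from the deploy job, e.g. with --set image.tag=$CI_COMMIT_SHORT_SHA as in the pipeline sketch above.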

Finally, to save some time in building the images, I used local gems.

Videos

For a live demo, check the videos here: