Kubernetes and Persistent Volumes on GIG's Cloud as a Service: How to Get Started



This tutorial targets GIG's Cloud-as-a-Service (GIG SaaC) users who want to deploy applications on a redundant Kubernetes cluster.

Kubernetes is a cluster orchestration system widely used to deploy and manage containerized applications at scale. Kubernetes ensures high availability through container replication and service redundancy.
This means that if one of the compute nodes of the cluster fails, the self-healing mechanism of Kubernetes guarantees that the pods deployed on that node are evacuated from the faulty node and redeployed on one of the healthy nodes.

While containerized applications are typically stateless and the pods running them can easily be redeployed, moved or multiplied, the actual data should be stored in a persistent back-end to be accessible to pods across the cluster. To add a persistent back-end to the Kubernetes cluster, you can define a storage object, a Persistent Volume (PV). Any pod can use a Persistent Volume Claim (PVC) to obtain access to the PV.
GIG Edge Cloud uses the G8 CSI driver to dynamically provision PVs on the cluster.
The G8 CSI driver is a plugin that employs a G8 data disk as persistent storage for a Kubernetes cluster deployed on top of GIG SaaC.
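As a minimal sketch of these objects (names, sizes and the image are illustrative; the actual storage class is created when the CSI driver is installed in Step 2.3), a PVC and a pod mounting it look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]   # single-node read/write
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # data survives pod rescheduling
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```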

This tutorial shows how to set up a Kubernetes cluster with a persistent volume using the G8 CSI driver on top of GIG SaaC, using Terraform and Ansible.


Prerequisites

  • Ansible v2.7.15
  • Terraform v0.11.14
    Terraform providers:

  • Kubernetes v1.15.0
  • Optional: Helm to install packages on the Kubernetes cluster.
  • Make sure you have installed Python and the Python netaddr package.
  • Account. Authentication to a G8 (an infrastructure platform unit of GIG SaaC) is done with the identity server; in order to gain access you need an account. To interact with a G8 you need to generate a valid JWT token, which can be derived from your Application ID (or client ID) and Secret pair. To generate these credentials, create a new API key on the page of your account. In order to generate a token, use the following curl request:

    export TF_VAR_client_jwt=$(curl --silent -d 'grant_type=client_credentials&client_id='"$CLIENT_ID"'&client_secret='"$CLIENT_SECRET"'&response_type=id_token&scope=offline_access' "<identity server token endpoint>")
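A quick way to sanity-check a token is to decode its payload locally. The sketch below builds a hypothetical, unsigned token and decodes its second dot-separated field; real tokens from the identity server are base64url-encoded without padding, so the decode step may need padding adjustments.

```shell
# A JWT consists of three dot-separated fields: header.payload.signature.
# Build a hypothetical payload (standard base64 here for simplicity).
PAYLOAD=$(printf '%s' '{"azp":"my-client-id","scope":"offline_access"}' | base64 | tr -d '\n')
TOKEN="hdr.$PAYLOAD.sig"

# Extract and decode the payload to inspect the token claims.
printf '%s' "$TOKEN" | cut -d. -f2 | base64 -d
# prints {"azp":"my-client-id","scope":"offline_access"}
```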


Architecture

The design used for the Kubernetes cluster deployed in a dedicated cloudspace on GIG SaaC is shown below.


  • GIG's SaaC API is used by Terraform to deploy a cloudspace with a configurable number of nodes.
  • Nodes of the cluster are deployed in a dedicated cloudspace. All nodes in the cloudspace have IP addresses on the internal network managed by the Virtual Firewall (VFW) of the cloudspace. The VFW has a routable IP address and is responsible for providing Internet access and port forwarding for the machines on its private network.
  • Port 22 of the management node is forwarded to port 2222 of the VFW to use it as an SSH bastion host for Ansible. After reaching the bastion host, Ansible accesses the other nodes via double SSH tunneling. Ansible playbooks are responsible for installing the Kubernetes cluster itself and the CSI driver on the cluster nodes.
  • Port 6443 (the standard port of the Kubernetes API server) of the master node is forwarded to port 6443 of the VFW to enable access to the Kubernetes cluster.
  • The physical storage instance is a G8 data disk attached to one of the workers. The persistent storage on the cluster is managed by the G8 CSI driver. The CSI driver consists of several pods responsible for creating, provisioning and attaching the G8 disk to the correct host. Persistent storage is created on the cluster by means of a StorageClass, configured to use the G8 CSI driver as a back-end for the PVC. If the worker node hosting the persistent disk is removed from the cluster, the CSI driver pods will be redeployed and the persistent disk will be reattached to one of the available workers.
  • The application runs in a Kubernetes pod and can consist of several containers sharing access to the storage instance. To enable read/write operations to the storage, mount the persistent volume claim (PVC) in your app.
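On the client side, the bastion scheme above can be mirrored in ~/.ssh/config (a sketch; the VFW IP and the private subnet pattern are placeholders for your own values):

```
Host kube-mgt
    HostName <VFW public IP>
    Port 2222
    User ansible
    ForwardAgent yes

# reach nodes on the cloudspace private network through the bastion
Host 192.168.*
    User ansible
    ProxyJump kube-mgt
```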


Configuration

Basic configuration of the setup is provided by setting environment variables.

## Terraform vars
# server_url
export TF_VAR_server_url="<G8's url>"

# client_jwt
export TF_VAR_client_jwt="<Your JWT token>"

# account
export TF_VAR_account="<Your OVC Account>"

# cloudspace name
export TF_VAR_cloudspace="<New cloudspace name>"

# cluster nodes
export TF_VAR_master_count=1
export TF_VAR_worker_count=3

# node resources
export TF_VAR_disksize=10
export TF_VAR_memory=2048
export TF_VAR_vcpus=2

# ssh key to load on all nodes
export TF_VAR_ssh_key="<Your SSH KEY>"

## Dynamic inventory config
export ANSIBLE_TF_DIR=terraform

## Path where Kubernetes cluster config file will be saved
export KUBECONFIG=~/.kube/ovc_k8s_config

# cluster name
export cluster_name="<Name of your cluster>"

For convenience, place the exports in a file and source it before running the commands below.


Step 1 - Deploy Nodes with Terraform

Choose your project directory, for example ~/cluster-demo/.
In your project directory create a subdirectory for the Terraform configuration, ~/cluster-demo/terraform, where you place your Terraform files, including terraform.tfvars.
Below is an example of the Terraform configuration for a Kubernetes cluster.
variable "client_jwt" {
  description = "Client jwt created with client id and secret"
}
variable "server_url" {
  description = "API server URL"
}
variable "account" {
  description = "Account name"
}
variable "cloudspace" {
  description = "Cloudspace name"
  default     = "demo"
}
variable "vm_description" {
  description = "Description of the VM"
  default     = "kubernetes cluster"
}
variable "memory" {
  description = "Machine memory"
  default     = "2048"
}
variable "vcpus" {
  description = "Number of machine CPUs"
  default     = "2"
}
variable "disksize" {
  description = "Boot disk size"
  default     = "20"
}
variable "image_name" {
  description = "Image name or regular expression"
  default     = "(?i).*\\.?ubuntu.*16"
}
variable "master_count" {
  description = "Number of master nodes"
}
variable "worker_count" {
  description = "Number of worker nodes"
}
variable "ssh_key" {
  description = "Public SSH key that will be loaded to the machines"
}
provider "ovc" {
  server_url = "${var.server_url}"
  client_jwt = "${var.client_jwt}"
}

# Definition of the cloudspace
resource "ovc_cloudspace" "cs" {
  account = "${var.account}"
  name    = "${var.cloudspace}"
}

data "ovc_image" "image" {
  most_recent = true
  name_regex  = "${var.image_name}"
}

# Definition of the vm to be created with the settings defined in terraform.tfvars
resource "ovc_machine" "kube-mgt" {
  cloudspace_id = "${}"
  image_id      = "${data.ovc_image.image.image_id}"
  memory        = "${var.memory}"
  vcpus         = "${var.vcpus}"
  disksize      = "${var.disksize}"
  name          = "${var.cloudspace}-terraform-kube-mgt"
  description   = "${var.vm_description} - management node"
  userdata      = "users: [{name: ansible, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}]"
}

output "kube-mgt" {
  value = "${ovc_port_forwarding.mgt-ssh.public_ip}"
}

# Master machines
resource "ovc_machine" "k8s-master" {
  count         = "${var.master_count}"
  cloudspace_id = "${}"
  image_id      = "${data.ovc_image.image.image_id}"
  memory        = "${var.memory}"
  vcpus         = "${var.vcpus}"
  disksize      = "${var.disksize}"
  name          = "master-${count.index}-${ovc_cloudspace.cs.location}"
  description   = "${var.vm_description} master"
  userdata      = "users: [{name: ansible, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}, {name: root, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}]"
}
resource "null_resource" "provision-local" {
  triggers {
    build_number = "${timestamp()}"
  }
  # this is added to ensure connectivity to the management node
  provisioner "remote-exec" {
    inline = ["echo"]
  }
  provisioner "local-exec" {
    command = "ssh-keygen -R [${ovc_port_forwarding.mgt-ssh.public_ip}]:${ovc_port_forwarding.mgt-ssh.public_port} || true"
  }
  provisioner "local-exec" {
    command = "ssh-keyscan -H -p ${ovc_port_forwarding.mgt-ssh.public_port} ${ovc_port_forwarding.mgt-ssh.public_ip} >> ~/.ssh/known_hosts"
  }
  connection {
    type = "ssh"
    user = "ansible"
    host = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    port = "${ovc_port_forwarding.mgt-ssh.public_port}"
  }
}
# configure user access on master nodes
resource "null_resource" "provision-k8s-master" {
  count = "${var.master_count}"
  triggers {
    build_number = "${timestamp()}"
  }
  provisioner "file" {
    content     = "ansible    ALL=(ALL:ALL) NOPASSWD: ALL"
    destination = "/etc/sudoers.d/90-ansible"
  }
  connection {
    type         = "ssh"
    user         = "root"
    host         = "${ovc_machine.k8s-master.*.ip_address[count.index]}"
    bastion_user = "ansible"
    bastion_host = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    bastion_port = "${ovc_port_forwarding.mgt-ssh.public_port}"
  }
}
## Worker machines
resource "ovc_machine" "k8s-worker" {
  count         = "${var.worker_count}"
  cloudspace_id = "${}"
  image_id      = "${data.ovc_image.image.image_id}"
  memory        = "${var.memory}"
  vcpus         = "${var.vcpus}"
  disksize      = "${var.disksize}"
  name          = "worker-${count.index}-${ovc_cloudspace.cs.location}"
  description   = "${var.vm_description} node"
  userdata      = "users: [{name: ansible, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}, {name: root, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}]"
}

resource "ovc_disk" "worker-disk" {
  count       = "${var.worker_count}"
  machine_id  = "${ovc_machine.k8s-worker.*.id[count.index]}"
  disk_name   = "data-worker-${count.index}-${ovc_cloudspace.cs.location}"
  description = "Disk created by terraform"
  size        = 10
  type        = "D"
  ssd_size    = 10
  iops        = 2000
}
# Port forwards
resource "ovc_port_forwarding" "k8s-master-api" {
  cloudspace_id = "${}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 6443
  machine_id    = "${ovc_machine.k8s-master.*.id[0]}"
  local_port    = 6443
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "k8s-worker-0-http" {
  cloudspace_id = "${}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 80
  machine_id    = "${ovc_machine.k8s-worker.*.id[0]}"
  local_port    = 31080
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "k8s-worker-0-https" {
  cloudspace_id = "${}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 443
  machine_id    = "${ovc_machine.k8s-worker.*.id[0]}"
  local_port    = 31443
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "mongo" {
  cloudspace_id = "${}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 27017
  machine_id    = "${ovc_machine.k8s-worker.*.id[0]}"
  local_port    = 30000
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "mgt-ssh" {
  count         = 1
  cloudspace_id = "${}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 2222
  machine_id    = "${}"
  local_port    = 22
  protocol      = "tcp"
}
resource "null_resource" "provision-k8s-worker" {
  count      = "${var.worker_count}"
  depends_on = ["ovc_disk.worker-disk"]
  triggers {
    build_number = "${timestamp()}"
  }
  # configure access for the ansible user
  provisioner "file" {
    content     = "ansible    ALL=(ALL:ALL) NOPASSWD: ALL"
    destination = "/etc/sudoers.d/90-ansible"
  }
  # Download script to move the data of the /var directory
  # to the partition /dev/vdb. The system is rebooted.
  provisioner "remote-exec" {
    inline = [
      "mkdir -p /home/ansible/scripts",
      "cd /home/ansible/scripts && wget",
      "sudo -S bash /home/ansible/scripts/",
    ]
  }
  connection {
    type         = "ssh"
    user         = "root"
    host         = "${ovc_machine.k8s-worker.*.ip_address[count.index]}"
    bastion_user = "ansible"
    bastion_host = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    bastion_port = "${ovc_port_forwarding.mgt-ssh.public_port}"
  }
}
# Ansible hosts
resource "ansible_host" "kube-mgt" {
  inventory_hostname = "${}"
  groups             = ["mgt"]
  vars {
    ansible_user               = "ansible"
    ansible_host               = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    ansible_port               = "${ovc_port_forwarding.mgt-ssh.public_port}"
    ansible_python_interpreter = "/usr/bin/python3"
  }
}
resource "ansible_host" "kube-master" {
  count              = "${var.master_count}"
  inventory_hostname = "${ovc_machine.k8s-master.*.name[count.index]}"
  groups             = ["kube-master", "etcd", "k8s-cluster"]
  vars {
    ansible_user               = "ansible"
    ansible_host               = "${ovc_machine.k8s-master.*.ip_address[count.index]}"
    ansible_python_interpreter = "/usr/bin/python3"
  }
}
resource "ansible_host" "kube-worker" {
  count              = "${var.worker_count}"
  inventory_hostname = "${ovc_machine.k8s-worker.*.name[count.index]}"
  groups             = ["kube-node", "k8s-cluster"]
  vars {
    ansible_user               = "ansible"
    ansible_host               = "${ovc_machine.k8s-worker.*.ip_address[count.index]}"
    ansible_python_interpreter = "/usr/bin/python3"
  }
}
resource "ansible_group" "k8s-cluster" {
  inventory_group_name = "k8s-cluster"
  vars {
    ansible_ssh_common_args = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand='ssh -W %h:%p -p ${ovc_port_forwarding.mgt-ssh.public_port} -q ansible@${ovc_port_forwarding.mgt-ssh.public_ip}'"
  }
}

To deploy the infrastructure, execute:

terraform init
terraform apply

 uses three types of Terraform providers to build the infrastructure:

# check installed providers
terraform providers
├── provider.ansible
├── provider.null
└── provider.ovc
  • provider.ovc supports the resources ovc_cloudspace, ovc_machine, ovc_disk and ovc_port_forwarding, responsible for creating and managing cloudspaces, virtual machines, disks and port forwards respectively.

    SSH access to the cluster. The resource ovc_machine is additionally responsible for creating users and uploading SSH keys to the virtual machines. The current configuration provisions the public SSH key given in TF_VAR_ssh_key for the user ansible on the management node and for the users ansible and root on the master and worker nodes. With this configuration you can establish an SSH connection to the management node with ssh -A ansible@<VFW IP> -p 2222, and from there you can ssh to both the ansible and root users on the masters and workers. Should you need to configure additional users or upload keys at creation time, provide an adequate userdata attribute to the ovc_machine resources of the Terraform configuration (see the Terraform OVC Provider Tutorial).

  • provider.null supports null_resource blocks used to provision the machines of the cluster with the necessary configuration; in this case we configure ansible user access and mount data disks on the Kubernetes nodes. Provisioning of the cluster nodes is only possible via the management node, where the SSH port is open. To use the management node as a jump/bastion host, we specify the IP address and the user of the management node as bastion_user and bastion_host in the connection block of the provisioner. For more details see the Terraform documentation on how to configure a Bastion Host.

  • provider.ansible supports the resources ansible_host and ansible_group, included in the configuration in order to store the Ansible host data in the Terraform state.
    To process the Terraform state into an Ansible inventory we use an Ansible dynamic inventory script.
    To use the script, clone the repository or download the script to the inventory subdirectory of the project directory and ensure it has executable permission

    mkdir inventory
    cd inventory
    curl -O
    chmod +x
    cd ../

    The location of the script is not important (the standard recommendation is /etc/ansible/); here we just place it in the directory dedicated to the Ansible inventory configuration.
    Note that if you execute the script from outside the Terraform configuration folder, you should set the environment variable ANSIBLE_TF_DIR (see Configuration) to the Terraform configuration directory

    export ANSIBLE_TF_DIR=~/cluster-demo/terraform

    Now the script can be used by Ansible playbooks as a dynamic inventory.
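The dynamic inventory contract is simple: called with --list, the script must print JSON describing groups and host variables. A hypothetical, abbreviated example of what the Terraform-backed inventory looks like (host names and values are illustrative):

```json
{
  "mgt": { "hosts": ["demo-terraform-kube-mgt"] },
  "kube-master": { "hosts": ["master-0-ch-lug-dc01-002"] },
  "kube-node": { "hosts": ["worker-0-ch-lug-dc01-002"] },
  "k8s-cluster": { "children": ["kube-master", "kube-node"] },
  "_meta": {
    "hostvars": {
      "demo-terraform-kube-mgt": {
        "ansible_user": "ansible",
        "ansible_host": "<VFW public IP>",
        "ansible_port": 2222
      }
    }
  }
}
```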

Step 2 - Configure Infrastructure with Ansible

Before running Ansible playbooks, add an ansible.cfg file to your project directory:

[defaults]
# set paths for Ansible
library    = kubespray/library/
roles_path = kubespray/roles/:ovc-disk-csi-driver/roles/

[ssh_connection]
# reduce the number of connections to a host
pipelining = True

2.1 - Provision Kubernetes Cluster with Kubespray

To set up a Kubernetes cluster using the previously deployed nodes, first clone the Kubespray repository to the project directory

git clone

Next, adjust the Configurable Parameters in Kubespray. To do this, place the sample Kubespray configuration next to the Ansible dynamic inventory file

cp -r kubespray/inventory/sample/group_vars inventory/

Then alter the configuration files inside the group_vars folder. For this example we only changed the cluster_name variable inside group_vars/k8s-cluster/k8s-cluster.yml to fetch the cluster name from an environment variable

cluster_name: "{{ lookup('env','cluster_name') }}"

Finally, execute the playbook cluster.yml of Kubespray

ansible-playbook -i inventory/ kubespray/cluster.yml -v -b

2.2 - Add Kubernetes Cluster Configuration to the Local Machine

Here we provide an Ansible playbook (save it as get-cluster-config.yml) meant to download the Kubernetes cluster config from a master node and place it locally.
This step is required to gain control over the cluster from your local machine.

- hosts: kube-master
  become: yes
  gather_facts: no
  vars:
    cluster_name: "{{ lookup('env','cluster_name') }}"
    local_config_path: "~/.kube/{{ cluster_name }}.yml"
    remote_config_path: /root/.kube/config
  tasks:
    - name: Fetch k8s config file
      fetch:
        src: "{{ remote_config_path }}"
        dest: "{{ local_config_path }}"
        flat: yes

- hosts: localhost
  gather_facts: no
  vars:
    cluster_name: "{{ lookup('env','cluster_name') }}"
    kubernetes_port: 6443
    mnt: "{{ groups['mgt'][0] }}"
    cluster_url: "https://{{ hostvars[mnt]['ansible_host'] }}:{{ kubernetes_port }}"
  tasks:
    - name: Load k8s config
      shell:
        cmd: |
          echo $KUBECONFIG >> KUBECONFIG.log
          kubectl config set clusters.{{ cluster_name }}.insecure-skip-tls-verify true
          kubectl config unset clusters.{{ cluster_name }}.certificate-authority-data
          kubectl config set clusters.{{ cluster_name }}.server {{ cluster_url }}

Execute the Ansible playbook:

ansible-playbook -i inventory/ get-cluster-config.yml

2.3 - Install CSI Driver with Ansible

In this example we install the CSI driver from the localhost. If you want to execute the playbook from a remote host (e.g. a master node of the cluster), define an inventory and replace localhost with your hostname.

To install the OVC CSI driver on the previously deployed Kubernetes cluster, execute the Ansible role published in the OVC CSI driver repository.

The playbook performs the following steps:

  • Fetch the cluster config from the master node and load it locally as the Kubernetes config
  • Create Kubernetes secrets
  • Create and apply namespaces
  • Apply driver config
  • Apply app config

Clone the repository to the project directory to be able to use the csi-driver role:

git clone

Create a file install-ovc-csi-driver.yml with your Ansible playbook:

- hosts: localhost
  vars:
    state: installed
    server_url: "{{ lookup('env','TF_VAR_server_url') }}"
    account: "{{ lookup('env','TF_VAR_account') }}"
    client_jwt: "{{ lookup('env','TF_VAR_client_jwt') }}"
  roles:
    - {role: csi-driver}

The playbook first downloads the Kubernetes cluster configuration from a master node, makes the necessary modifications and places it locally; this step is necessary to gain access to the cluster from the local machine. It then installs the CSI driver itself.

Execute the Ansible playbook:

ansible-playbook install-ovc-csi-driver.yml

Advanced CSI driver configuration can be achieved by setting variables of the role ovc-disk-csi-driver/roles/csi-driver and adjusting the sample configuration given in ovc-disk-csi-driver/roles/csi-driver/templates.

To verify that the persistent volume driver was installed correctly, list the pods in the namespace ovc-disk-csi

kubectl get pods -n ovc-disk-csi -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP                NODE
ovc-disk-csi-driver-attacher-0      2/2     Running   0          5m3s       worker-0-ch-lug-dc01-002
ovc-disk-csi-driver-driver-c8s5g    2/2     Running   0          5m2s   worker-2-ch-lug-dc01-002
ovc-disk-csi-driver-driver-pmzpw    2/2     Running   0          5m2s   worker-1-ch-lug-dc01-002
ovc-disk-csi-driver-driver-sgtfp    2/2     Running   0          5m2s   worker-0-ch-lug-dc01-002
ovc-disk-csi-driver-provisioner-0   2/2     Running   0          5m2s      worker-1-ch-lug-dc01-002

Ensure that the following pods are running: an attacher pod, a provisioner pod and, for each Kubernetes worker, a driver-***** pod.
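Once the driver pods are up, persistent volumes are requested through a StorageClass backed by the driver. A hypothetical sketch is shown below; the provisioner name and class name are assumptions, so take the real values from the templates in ovc-disk-csi-driver/roles/csi-driver/templates:

```yaml
kind: StorageClass
  name: ovc-disk                       # illustrative class name
provisioner:      # assumed driver name; check the driver templates
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ovc-disk           # must match the class above
  resources:
    requests:
      storage: 8Gi
```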

2.4 - Install Applications with Helm Chart

To give an example of an application that can be deployed on a Kubernetes cluster and make use of the CSI driver, let's install a MongoDB replica set with persistent storage.
For this task we will use Helm, the package manager for Kubernetes. The example below is based on this sample configuration.
Note that a secure configuration for Tiller lies beyond the scope of this tutorial. See the Securing your Helm Installation documentation before deploying to a production environment.

  • Install Helm on your local machine.

  • Create a namespace and a Tiller service account for the MongoDB appliance. Tiller is the Helm server.

    kubectl create namespace mongodb-appliance
    kubectl create serviceaccount tiller --namespace mongodb-appliance
  • Configure Tiller's access in the namespace with role-tiller.yaml

    kind: Role
      name: tiller-manager
      namespace: mongodb-appliance
    - apiGroups: ["", "batch", "extensions", "apps"]
      resources: ["*"]
      verbs: ["*"]

    Create the role on the cluster

    kubectl create -f role-tiller.yaml
  • Bind the role to the Tiller service account in the namespace with the config rolebinding-tiller.yaml

    kind: RoleBinding
      name: tiller-binding
      namespace: mongodb-appliance
    - kind: ServiceAccount
      name: tiller
      namespace: mongodb-appliance
      kind: Role
      name: tiller-manager

    Apply config

    kubectl create -f rolebinding-tiller.yaml
  • Initialize Helm with Tiller in the mongodb-appliance namespace

    helm init --service-account tiller --tiller-namespace mongodb-appliance
  • Deploy the MongoDB replica set in the namespace mongodb-appliance by installing the stable/mongodb chart. The flags are set to enable and configure a replica set, expose the service publicly, predefine a database and enable password authentication. More configuration options for the stable/mongodb chart can be found in its documentation.

    $ helm install stable/mongodb --name mongo-chart --tiller-namespace mongodb-appliance \
    --namespace mongodb-appliance --set usePassword=true,mongodbRootPassword=,\
    ==> v1/Pod(related)
    NAME                             READY  STATUS             RESTARTS  AGE
    mongo-chart-mongodb-arbiter-0    0/1    ContainerCreating  0         0s
    mongo-chart-mongodb-primary-0    0/1    Pending            0         0s
    mongo-chart-mongodb-secondary-0  0/1    Pending            0         0s
    mongo-chart-mongodb-secondary-1  0/1    Pending            0         0s
    ==> v1/Secret
    NAME                 TYPE    DATA  AGE
    mongo-chart-mongodb  Opaque  3     0s
    ==> v1/Service
    NAME                          TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)          AGE
    mongo-chart-mongodb           NodePort         27017:30000/TCP  0s
    mongo-chart-mongodb-headless  ClusterIP  None                 27017/TCP        0s
    ==> v1/StatefulSet
    NAME                           READY  AGE
    mongo-chart-mongodb-arbiter    0/1    0s
    mongo-chart-mongodb-primary    0/1    0s
    mongo-chart-mongodb-secondary  0/2    0s
    ** Please be patient while the chart is being deployed **
    MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:
    To get the root password run:
      export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-appliance mongo-chart-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
    To get the password for "gig" run:
      export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb-appliance mongo-chart-mongodb -o jsonpath="{.data.mongodb-password}" | base64 --decode)
    To connect to your database run the following command:
      kubectl run --namespace mongodb-appliance mongo-chart-mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host mongo-chart-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
    To connect to your database from outside the cluster execute the following commands:
      export NODE_IP=$(kubectl get nodes --namespace mongodb-appliance -o jsonpath="{.items[0].status.addresses[0].address}")
      export NODE_PORT=$(kubectl get --namespace mongodb-appliance -o jsonpath="{.spec.ports[0].nodePort}" services mongo-chart-mongodb)
      mongo --host $NODE_IP --port $NODE_PORT --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

    The output reports all pods being created and lists several useful commands for connecting to a MongoDB instance. When the pods in the namespace mongodb-appliance are installed correctly, their status is Running

    $ kubectl get po -n mongodb-appliance -o wide
    NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE
    mongo-chart-mongodb-replicaset-0   1/1     Running   0          4m15s   worker-2-ch-lug-dc01-002  
    mongo-chart-mongodb-replicaset-1   1/1     Running   0          2m48s   worker-1-ch-lug-dc01-002
    mongo-chart-mongodb-replicaset-2   1/1     Running   0          111s    worker-0-ch-lug-dc01-002
    tiller-deploy-867bc6989c-6gk8g     1/1     Running   0          4m45s    worker-0-ch-lug-dc01-002

    When persistent volumes are mounted on each pod, the PVCs are listed in the namespace

    $ kubectl get pvc -n mongodb-appliance
    NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                        AGE
    datadir-mongo-chart-mongodb-primary-0     Bound    pvc-00bf4414-287d-491d-a702-43c6a5b9f7d5   8Gi        RWO     14m
    datadir-mongo-chart-mongodb-secondary-0   Bound    pvc-e0a660ad-6a80-4064-9b16-bac08884ee0b   8Gi        RWO     14m
    datadir-mongo-chart-mongodb-secondary-1   Bound    pvc-0df0ebf6-baab-420e-897b-3a8a53bcac29   8Gi        RWO     14m

If you use the Kubernetes Web UI, you can check the Overview page to see the workload status and deployment information

Check out the Persistent Volumes page to see all the volumes created



  • If the worker hosting the CSI driver is deleted from the cluster, Kubernetes will recreate the driver pods on another worker. Note that Kubernetes will not move the pods if the node merely becomes unreachable as a result of being stopped, being deleted on the cloud side or having network issues. This logic guards against split-brain issues that can occur if the lost worker comes back online after the pods were recreated on another node (see Documentation).

    In order to force moving the persistent storage and the CSI driver pods to another worker, delete the unreachable worker from the cluster

    kubectl delete node worker1

    You can list nodes that are unreachable (status NotReady) by executing

    kubectl get nodes


This tutorial covers how to set up a Kubernetes cluster with an arbitrary number of master and worker nodes and with persistent storage managed by a CSI driver. To illustrate the deployment of applications on a Kubernetes cluster with Helm, we used the stable/mongodb chart configured as a replica set.

Currently, deployment of the Kubernetes cluster is supported within a single G8. The future direction of development is a geo-redundant Kubernetes cluster that can be deployed across several G8s.
