Kubernetes and Persistent Volumes on GIG-Edge Cloud: how to get started

Introduction

This tutorial targets GIG Edge Cloud users who want to deploy applications on a redundant Kubernetes cluster.

Kubernetes is a cluster orchestration system widely used to deploy and manage containerized applications at scale. Kubernetes ensures high availability through container replication and service redundancy: if one of the compute nodes of the cluster fails, Kubernetes's self-healing guarantees that the pods running on the faulty node are redeployed on one of the healthy nodes.

While containerized applications are typically stateless and the pods running them can easily be redeployed, moved or scaled out, the actual data should be stored in a persistent back-end so that it remains accessible to pods across the cluster. To add a persistent back-end to the Kubernetes cluster, you can define a storage object, a Persistent Volume (PV). Any pod can then use a Persistent Volume Claim (PVC) to obtain access to the PV. GIG Edge Cloud uses the OVC CSI driver to dynamically provision PVs on the cluster: the driver is a plugin that lets an OVC data disk act as persistent storage for a Kubernetes cluster deployed on top of GIG Edge Cloud.
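For a first impression of how this looks in practice, the sketch below defines a PVC against a CSI-backed storage class. It is a minimal illustration rather than a deployment step: the class name gig.tech-ovc-k8s-storage-provider is the one that appears in the PVC listing later in this tutorial and may differ in your setup.

# minimal sketch: request an 8Gi volume from the CSI-backed storage class
# (run this only once the cluster and the CSI driver described below are installed)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: gig.tech-ovc-k8s-storage-provider
EOF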

This tutorial walks through an example of setting up a Kubernetes cluster with a persistent volume backed by the OpenvCloud (OVC) CSI driver on top of GIG Edge Cloud, using Terraform and Ansible.

Prerequisites

  • Ansible v2.7.15 or higher
  • Terraform v0.11.14 with the following Terraform providers:
    • The OVC provider. See this tutorial to learn how to install and use Terraform with the OVC provider.
    • The Ansible provider. To install each provider, download the latest binary and place it in ~/.terraform.d/plugins/.
  • Kubernetes v1.15.0 or higher
  • Optional: Helm to install packages on the Kubernetes cluster.
  • Python installed, together with the netaddr Python package.
  • Itsyou.online Account. Authentication to a G8 (an infrastructure platform unit of the GIG Edge Cloud) is done with the itsyou.online identity server, so you need an itsyou.online account to gain access. To interact with a G8 you also need a valid JWT token, which can be derived from your Application ID (or client ID) and Secret pair. To generate these credentials, create a new API key on the page of your itsyou.online account. Then generate a token with the following curl request:

    CLIENT_ID="<Your Client ID>"
    CLIENT_SECRET="<Your Client Secret>"
    export TF_VAR_client_jwt=$(curl --silent -d 'grant_type=client_credentials&client_id='"$CLIENT_ID"'&client_secret='"$CLIENT_SECRET"'&response_type=id_token&scope=offline_access' https://itsyou.online/v1/oauth/access_token)
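    A quick sanity check of the result (a minimal sketch; it only verifies that the request returned a non-empty token):

    # TF_VAR_client_jwt should be non-empty after the request above
    if [ -n "$TF_VAR_client_jwt" ]; then echo "JWT stored in TF_VAR_client_jwt"; else echo "Token generation failed, check client ID and secret" >&2; fi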

Architecture

The design used for the Kubernetes cluster deployed in a dedicated Virtual Data Center (VDC) on GIG Edge Cloud is shown below. In this example it is assumed that the terraform, ansible and kubectl commands are executed from your local machine.

picture

  • The GIG Edge Cloud API is used by Terraform to deploy a cloudspace with a configurable number of nodes.
  • Nodes of the cluster are deployed in a dedicated cloudspace. All nodes in the cloudspace have IP addresses on the internal network managed by the Virtual Firewall (VFW) of the cloudspace. The VFW has a routable IP address and is responsible for providing Internet access and port forwarding for the machines on its private network.
  • Port 22 of the management node is forwarded to port 2222 of the VFW so that it can be used as an SSH bastion host for Ansible. After reaching the bastion host, Ansible accesses the other nodes via double SSH tunneling. Ansible playbooks are responsible for installing the Kubernetes cluster itself and the CSI driver on the cluster nodes.
  • Port 6443 (the standard port of the Kubernetes API server) of the master node is forwarded to port 6443 of the VFW to enable access to the Kubernetes cluster.
  • The physical storage instance is an OVC data disk attached to one of the workers. The persistent storage on the cluster is managed by the OVC CSI driver. The CSI driver consists of several pods responsible for creating, provisioning and attaching the OVC disk to the correct host. Persistent storage is created on the cluster by means of a storage class configured to use the OVC CSI driver as the back-end for PVCs. If the worker node hosting the persistent disk is removed from the cluster, the CSI driver pods are redeployed and the persistent disk is reattached to one of the available workers.
  • The application runs in a Kubernetes pod and can consist of several containers sharing access to the storage instance. To enable read/write operations to the storage, mount the persistent volume claim (PVC) into your app, as in the sketch following this list.
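As an illustration of that last point, here is a hedged sketch of a pod mounting the demo-pvc claim from the earlier sketch (the pod name and image are illustrative and not part of the setup deployed below):

# minimal sketch: a pod mounting the PVC named demo-pvc under /data
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
EOF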

Configuration

Basic configuration of the setup can be provided by setting environment variables.

## Terraform vars
# server_url
export TF_VAR_server_url="<G8's url>"

# client_jwt
export TF_VAR_client_jwt="<Your JWT token>"

# account
export TF_VAR_account="<Your OVC Account>"

# cloudspace name
export TF_VAR_cloudspace="<New cloudspace name>"


# cluster nodes
export TF_VAR_master_count=1
export TF_VAR_worker_count=3

# node resources
export TF_VAR_disksize=10
export TF_VAR_memory=2048
export TF_VAR_vcpus=2

# ssh key to load on all nodes
export TF_VAR_ssh_key="<Your SSH KEY>"

## Dynamic inventory config
export ANSIBLE_TF_DIR=terraform

# cluster name
export cluster_name="<Name of your cluster>"

## Path where Kubernetes cluster config file will be saved
export KUBECONFIG=~/.kube/$cluster_name
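To double-check the configuration before moving on, you can list the exported variables:

# all variables prefixed with TF_VAR_ are picked up by Terraform
env | grep '^TF_VAR_'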

Step 1 - deploy nodes with Terraform

Choose a project directory, for example ~/cluster-demo/. In your project directory create a subdirectory for the Terraform configuration, ~/cluster-demo/terraform, where you place your Terraform files: main.tf, variables.tf and terraform.tfvars. Below is an example of the Terraform configuration for a Kubernetes cluster.

variables.tf
    variable "client_jwt" {
  description = "Client jwt created on itsyou.online with client id and secret"
}
variable "server_url" {
  description = "API server URL"
}
variable "account" {
  description = "Account name"
}
variable "cloudspace" {
  description = "Cloudspace name"
  default = "demo"
}
variable "vm_description" {
  description = "Description of the VM"
  default = "kubernetes cluster"
}
variable "memory" {
  description = "Machine memory"
  default     = "2048"
}
variable "vcpus" {
  description = "Number of machine CPUs"
  default     = "2"
}
variable "disksize" {
  description = "disksize"
  default     = "20"
}
variable "image_name" {
  description = "Image name or regular expression"
  default     = "(?i).*\\.?ubuntu.*16"
}
variable "master_count" {
  description = "Number of master nodes"
}
variable "worker_count" {
  description = "Number of worker nodes"
}
variable "ssh_key" {
  description = "Public SSH key that will be loaded to the machines"
}
  
main.tf
    provider "ovc" {
  server_url = "${var.server_url}"
  client_jwt = "${var.client_jwt}"
}
# Definition of the cloudspace
resource "ovc_cloudspace" "cs" {
  account = "${var.account}"
  name = "${var.cloudspace}"
}
data "ovc_image" "image" {
  most_recent = true
  name_regex  = "${var.image_name}"
}
# Definition of the vm to be created with the settings defined in terraform.tfvars
resource "ovc_machine" "kube-mgt" {
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  image_id      = "${data.ovc_image.image.image_id}"
  memory        = "${var.memory}"
  vcpus         = "${var.vcpus}"
  disksize      = "${var.disksize}"
  name          = "${var.cloudspace}-terraform-kube-mgt"
  description   = "${var.vm_description} - management node"
  userdata      = "users: [{name: ansible, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}]"
}
output "kube-mgt" {
  value       = "${ovc_port_forwarding.mgt-ssh.public_ip}"
}
# Master machines
resource "ovc_machine" "k8s-master" {
  count         = "${var.master_count}"
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  image_id      = "${data.ovc_image.image.image_id}"
  memory        = "${var.memory}"
  vcpus         = "${var.vcpus}"
  disksize      = "${var.disksize}"
  name          = "master-${count.index}-${ovc_cloudspace.cs.location}"
  description   = "${var.vm_description} master"
  userdata      = "users: [{name: ansible, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}, {name: root, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}]"
}
resource "null_resource" "provision-local" {
  triggers {
      build_number = "${timestamp()}"
  }
  # this is added to ensure connectivity to the management node
  provisioner "remote-exec"{
    inline = ["echo"]
  }
  provisioner "local-exec" {
    command = "ssh-keygen -R [${ovc_port_forwarding.mgt-ssh.public_ip}]:${ovc_port_forwarding.mgt-ssh.public_port} || true"
  }
  provisioner "local-exec" {
    command = "ssh-keyscan -H -p ${ovc_port_forwarding.mgt-ssh.public_port} ${ovc_port_forwarding.mgt-ssh.public_ip} >> ~/.ssh/known_hosts"
  }
  connection {
    type     = "ssh"
    user         = "ansible"
    host     = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    port     = "${ovc_port_forwarding.mgt-ssh.public_port}"
  }
}
# configure user access on master nodes
resource "null_resource" "provision-k8s-master" {
  count = "${var.master_count}"
  triggers {
    build_number = "${timestamp()}"
  }
  provisioner "file" {
    content     = "ansible ALL=(ALL:ALL) NOPASSWD: ALL"
    destination = "/etc/sudoers.d/90-ansible"
  }
  connection {
    type         = "ssh"
    user         = "root"
    host         = "${ovc_machine.k8s-master.*.ip_address[count.index]}"
    bastion_user = "ansible"
    bastion_host = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    bastion_port = "${ovc_port_forwarding.mgt-ssh.public_port}"
  }
}
## Worker machines
resource "ovc_machine" "k8s-worker" {
  count         = "${var.worker_count}"
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  image_id      = "${data.ovc_image.image.image_id}"
  memory        = "${var.memory}"
  vcpus         = "${var.vcpus}"
  disksize      = "${var.disksize}"
  name          = "worker-${count.index}-${ovc_cloudspace.cs.location}"
  description   = "${var.vm_description} node"
  userdata      = "users: [{name: ansible, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}, {name: root, shell: /bin/bash, ssh-authorized-keys: [${var.ssh_key}]}]"
}
resource "ovc_disk" "worker-disk" {
  count       = "${var.worker_count}"
  machine_id  = "${ovc_machine.k8s-worker.*.id[count.index]}"
  disk_name   = "data-worker-${count.index}-${ovc_cloudspace.cs.location}"
  description = "Disk created by terraform"
  size        = 10
  type        = "D"
  ssd_size    = 10
  iops        = 2000
}
# Port forwards
resource "ovc_port_forwarding" "k8s-master-api" {
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 6443
  machine_id    = "${ovc_machine.k8s-master.*.id[0]}"
  local_port    = 6443
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "k8s-worker-0-http" {
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 80
  machine_id    = "${ovc_machine.k8s-worker.*.id[0]}"
  local_port    = 31080
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "k8s-worker-0-https" {
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 443
  machine_id    = "${ovc_machine.k8s-worker.*.id[0]}"
  local_port    = 31443
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "mongo" {
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 27017
  machine_id    = "${ovc_machine.k8s-worker.*.id[0]}"
  local_port    = 30000
  protocol      = "tcp"
}
resource "ovc_port_forwarding" "mgt-ssh" {
  count         = 1
  cloudspace_id = "${ovc_cloudspace.cs.id}"
  public_ip     = "${ovc_cloudspace.cs.external_network_ip}"
  public_port   = 2222
  machine_id    = "${ovc_machine.kube-mgt.id}"
  local_port    = 22
  protocol      = "tcp"
}
resource "null_resource" "provision-k8s-worker" {
  count = "${var.worker_count}"
  triggers {
    build_number = "${timestamp()}"
  }
  # configure access for ansible user
  provisioner "file" {
    content     = "ansible ALL=(ALL:ALL) NOPASSWD: ALL"
    destination = "/etc/sudoers.d/90-ansible"
  }
  depends_on = ["ovc_disk.worker-disk"]
  # Download script to move the data of the /var directory
  # to the partition /dev/vdb. The system is rebooted
  provisioner "remote-exec" {
    inline = [
      "mkdir -p /home/ansible/scripts",
      "cd /home/ansible/scripts && curl -O https://raw.githubusercontent.com/gig-tech/tutorial-help-scripts/master/kubernetes-cluster-deployment/move-var.sh",
      "sudo -S bash /home/ansible/scripts/move-var.sh",
    ]
  }
  connection {
    type         = "ssh"
    user         = "root"
    host         = "${ovc_machine.k8s-worker.*.ip_address[count.index]}"
    bastion_user = "ansible"
    bastion_host = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    bastion_port = "${ovc_port_forwarding.mgt-ssh.public_port}"
  }
}
# Ansible hosts
resource "ansible_host" "kube-mgt" {
  inventory_hostname = "${ovc_machine.kube-mgt.name}"
  groups             = ["mgt"]
  vars {
    ansible_user               = "ansible"
    ansible_host               = "${ovc_port_forwarding.mgt-ssh.public_ip}"
    ansible_port               = "${ovc_port_forwarding.mgt-ssh.public_port}"
    ansible_python_interpreter = "/usr/bin/python3"
  }
}
resource "ansible_host" "kube-master" {
  count              = "${var.master_count}"
  inventory_hostname = "${ovc_machine.k8s-master.*.name[count.index]}"
  groups             = ["kube-master", "etcd", "k8s-cluster"]
  vars {
    ansible_user               = "ansible"
    ansible_host               = "${ovc_machine.k8s-master.*.ip_address[count.index]}"
    ansible_python_interpreter = "/usr/bin/python3"
  }
}
resource "ansible_host" "kube-worker" {
  count              = "${var.worker_count}"
  groups             = ["kube-node", "k8s-cluster"]
  inventory_hostname = "${ovc_machine.k8s-worker.*.name[count.index]}"
  vars {
    ansible_user               = "ansible"
    ansible_host               = "${ovc_machine.k8s-worker.*.ip_address[count.index]}"
    ansible_python_interpreter = "/usr/bin/python3"
  }
}
resource "ansible_group" "k8s-cluster" {
  inventory_group_name = "k8s-cluster"
  vars {
    ansible_ssh_common_args = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand='ssh -W %h:%p -p ${ovc_port_forwarding.mgt-ssh.public_port} -q ansible@${ovc_port_forwarding.mgt-ssh.public_ip}'"
  }
}

To deploy the infrastructure, execute:

terraform init
terraform apply
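When the apply completes, the public IP of the VFW (defined as the kube-mgt output in main.tf) can be read back at any time with:

terraform output kube-mgt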

main.tf uses three Terraform providers to build the infrastructure:

# check installed providers
terraform providers
.
├── provider.ansible
├── provider.null
└── provider.ovc
  • provider.ovc supports the resources ovc_cloudspace, ovc_machine, ovc_disk and ovc_port_forwarding, responsible for creating and managing cloudspaces, virtual machines, disks and port forwards respectively.

    SSH access to the cluster. The ovc_machine resource is additionally responsible for creating users and uploading SSH keys to the virtual machines. The current configuration provisions the public SSH key given in TF_VAR_ssh_key for the user ansible on the management node and for the users ansible and root on the master and worker nodes. With this configuration you can establish an SSH connection to the management node with ssh -A ansible@<VFW public IP> -p 2222, and from there you can ssh as both ansible and root to the masters and workers. Should you need to configure additional users or upload keys at creation time, provide an adequate userdata attribute to the ovc_machine resources of the Terraform configuration (see the Terraform OVC Provider Tutorial).

  • provider.null supports the null_resource blocks used to provision the cluster machines with the necessary configuration; in this case we configure ansible user access and mount data disks on the Kubernetes nodes. Provisioning of the cluster nodes is only possible via the management node, where the SSH port is open. To use the management node as a jump/bastion host, we specify the user and address of the management node as bastion_user and bastion_host in the connection block of the provisioner. For more details see the Terraform documentation on how to configure a bastion host.

  • provider.ansible supports the resources ansible_host and ansible_group, included in the configuration in order to store the Ansible host data in the Terraform state. To process the Terraform state into an Ansible inventory we use an Ansible dynamic inventory script. To use the script, clone the repository or download the script to the inventory subdirectory of the project directory and make it executable:

    mkdir inventory
    cd inventory
    curl -O https://raw.githubusercontent.com/nbering/terraform-inventory/master/terraform.py
    chmod +x terraform.py
    cd ../

    The location of terraform.py is not important (the standard recommendation is /etc/ansible/terraform.py); here we simply place it in the directory dedicated to the Ansible inventory configuration. Note that if you execute terraform.py from outside the Terraform configuration folder, you should set the environment variable ANSIBLE_TF_DIR (see Configuration) to the Terraform configuration directory:

    export ANSIBLE_TF_DIR=~/cluster-demo/terraform

    Now the terraform.py file can be used by Ansible playbooks as a dynamic inventory.
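    To check that the dynamic inventory resolves the Terraform state and that all nodes are reachable through the bastion host, you can, for example, run:

    # dump the generated inventory and ping every node via the bastion host
    ansible-inventory -i inventory/terraform.py --list
    ansible -i inventory/terraform.py all -m ping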

Step 2 - configure infrastructure with Ansible

Before running the Ansible playbooks, add an ansible.cfg file to your project directory:

ansible.cfg
    [defaults]
# set paths for Ansible
library       = kubespray/library/
roles_path    = kubespray/roles/:ovc-disk-csi-driver/roles/
[ssh_connection]
# reduce the number of connections to a host
pipelining = True
  

2.1 - provision Kubernetes cluster with Kubespray

To set up a Kubernetes cluster on the previously deployed nodes, first clone the Kubespray repository into the project directory:

git clone https://github.com/kubernetes-sigs/kubespray.git

Next, adjust the Configurable Parameters in Kubespray. To do this, place the sample Kubespray configuration next to the Ansible dynamic inventory file terraform.py:

cp -r kubespray/inventory/sample/group_vars inventory/

Then alter the configuration files inside the group_vars folder. For this example we only changed the cluster_name variable inside group_vars/k8s-cluster/k8s-cluster.yml to fetch the cluster name from an environment variable:

cluster_name: "{{ lookup('env','cluster_name') }}"

Finally, execute the Kubespray playbook cluster.yml:

ansible-playbook -i inventory/terraform.py kubespray/cluster.yml -v -b
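At this point the cluster admin configuration only exists on the master nodes. A quick way to check that the control plane is up is an ad-hoc command over the same inventory, for example:

# run kubectl as root on the master nodes through the bastion host
ansible -i inventory/terraform.py kube-master -b -m command -a "kubectl get nodes"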

2.2 - add Kubernetes cluster configuration to the local machine

Here we provide an Ansible playbook meant to download a Kubernetes cluster config from a master node and place it locally. This step is required to gain control over the cluster from your local machine.

get-cluster-config.yml
    - hosts: kube-master
  become: yes
  gather_facts: no
  vars:
    cluster_name: "{{ lookup('env','cluster_name') }}"
    local_config_path: "~/.kube/{{ cluster_name }}"
    remote_config_path: /root/.kube/config
  tasks:
    - name: Fetch k8s config file
      fetch:
        src: "{{ remote_config_path }}"
        dest: "{{ local_config_path }}"
        flat: yes
- hosts: localhost
  gather_facts: no
  vars:
    cluster_name: "{{ lookup('env','cluster_name') }}"
    kubernetes_port: 6443
    mnt: "{{ groups['mgt'][0] }}"
    cluster_url: "https://{{ hostvars[mnt]['ansible_host'] }}:{{ kubernetes_port }}"
  tasks:
    - name: Load k8s config
      shell:
        cmd: |
          echo $KUBECONFIG >> KUBECONFIG.log
          kubectl config set clusters.{{ cluster_name }}.insecure-skip-tls-verify true
          kubectl config unset clusters.{{ cluster_name }}.certificate-authority-data
          kubectl config set clusters.{{ cluster_name }}.server {{ cluster_url }}
  

Execute the Ansible playbook:

ansible-playbook -i inventory/terraform.py get-cluster-config.yml
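With KUBECONFIG pointing at the fetched config (see Configuration), the cluster should now be reachable from your local machine:

# the API server is reached via port 6443 forwarded on the VFW
kubectl get nodes
kubectl cluster-info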

2.3 - install CSI driver with Ansible

In this example we install the CSI driver from the localhost. If you want to execute the playbook from a remote host (e.g. a master node of the cluster), define an inventory and replace localhost with your hostname.

To install an OVC CSI driver on the previously deployed Kubernetes cluster, execute the Ansible role published in the OVC CSI driver repository.

The playbook performs the following steps:

  • Fetch the cluster config from the master node and load it locally as the Kubernetes config
  • Create Kubernetes secrets
  • Create and apply namespaces
  • Apply driver config
  • Apply app config

Clone the repository into the project directory to be able to use the csi-driver role:

git clone https://github.com/gig-tech/ovc-disk-csi-driver.git

Create a file install-ovc-csi-driver.yml with the following Ansible playbook:

install-ovc-csi-driver.yml
    - hosts: localhost
  vars:
    state: installed
    server_url: "{{ lookup('env','TF_VAR_server_url') }}"
    account: "{{ lookup('env','TF_VAR_account') }}"
    client_jwt: "{{ lookup('env','TF_VAR_client_jwt') }}"
  roles:
    - {role: csi-driver}
  

The first part of the playbook is responsible for downloading the Kubernetes cluster configuration from a master node, making the necessary modifications and placing it locally. This step is necessary to gain access to the cluster from the local machine. The second part installs the CSI driver itself.

Execute the Ansible playbook:

ansible-playbook install-ovc-csi-driver.yml

Advanced CSI driver configuration can be achieved by setting variables of the role ovc-disk-csi-driver/roles/csi-driver and adjusting the sample configuration given in ovc-disk-csi-driver/roles/csi-driver/templates.

To verify that the persistent volume driver was installed correctly, list the pods in the namespace ovc-disk-csi:

kubectl get pods -n ovc-disk-csi -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP                NODE
ovc-disk-csi-driver-attacher-0      2/2     Running   0          5m3s   10.233.96.4       worker-0-ch-lug-dc01-002
ovc-disk-csi-driver-driver-c8s5g    2/2     Running   0          5m2s   192.168.103.253   worker-2-ch-lug-dc01-002
ovc-disk-csi-driver-driver-pmzpw    2/2     Running   0          5m2s   192.168.103.250   worker-1-ch-lug-dc01-002
ovc-disk-csi-driver-driver-sgtfp    2/2     Running   0          5m2s   192.168.103.254   worker-0-ch-lug-dc01-002
ovc-disk-csi-driver-provisioner-0   2/2     Running   0          5m2s   10.233.100.4      worker-1-ch-lug-dc01-002

Make sure the following pods are running: an attacher pod, a provisioner pod and one driver-***** pod per Kubernetes worker.
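You can also confirm that a storage class backed by the driver is available; its name (gig.tech-ovc-k8s-storage-provider in the PVC listing later in this section) may differ depending on the driver configuration:

# the CSI-backed storage class used for dynamic provisioning
kubectl get storageclass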

2.4 - install applications with Helm Chart

To give an example of an application that can be deployed on a Kubernetes cluster and make use of the CSI driver, let's install a MongoDB replica set with persistent storage. For this task we will use Helm, the package manager for Kubernetes. The example below is based on this sample configuration. Note that a secure configuration for Tiller lies beyond the scope of this tutorial; see the Securing your Helm Installation documentation to provide security when deploying in a production environment.

  • Install Helm on your local machine.
  • Create a namespace and a Tiller service account for the MongoDB appliance. Tiller is the Helm server.

    kubectl create namespace mongodb-appliance
    kubectl create serviceaccount tiller --namespace mongodb-appliance
  • Configure the Tiller access in the namespace with role-tiller.yaml

    role-tiller.yaml
      kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: tiller-manager
      namespace: mongodb-appliance
    rules:
    - apiGroups: ["", "batch", "extensions", "apps"]
      resources: [""]
      verbs: [""]
      

    Create the role on the cluster

    kubectl create -f role-tiller.yaml
  • Bind the role to the Tiller service account in the namespace with the config rolebinding-tiller.yaml

    rolebinding-tiller.yaml
      kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: tiller-binding
      namespace: mongodb-appliance
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: mongodb-appliance
    roleRef:
      kind: Role
      name: tiller-manager
      apiGroup: rbac.authorization.k8s.io
    

    Apply config

    kubectl create -f rolebinding-tiller.yaml
  • Initialize Helm and install Tiller in the mongodb-appliance namespace with helm init

    helm init --service-account tiller --tiller-namespace mongodb-appliance
  • Deploy a MongoDB replica set in the namespace mongodb-appliance by installing the stable/mongodb chart. The flags enable and configure a replica set, expose the service via a NodePort, predefine a database and enable password authentication. More configuration options for the stable/mongodb chart can be found here.

    $ helm install stable/mongodb --name mongo-chart --tiller-namespace mongodb-appliance \
      --namespace mongodb-appliance --set usePassword=true,mongodbRootPassword=<Your Root Password>,\
      mongodbDatabase=<Your Database>,mongodbUsername=<Your User Name>,mongodbPassword=<Your User Password>,\
      service.type=NodePort,service.nodePort=30000,replicaSet.enabled=true,replicaSet.replicas.secondary=2,\
      replicaSet.pdb.enabled=false
    RESOURCES:
    ==> v1/Pod(related)
    NAME                             READY  STATUS             RESTARTS  AGE
    mongo-chart-mongodb-arbiter-0    0/1    ContainerCreating  0         0s
    mongo-chart-mongodb-primary-0    0/1    Pending            0         0s
    mongo-chart-mongodb-secondary-0  0/1    Pending            0         0s
    mongo-chart-mongodb-secondary-1  0/1    Pending            0         0s
    ==> v1/Secret
    NAME                 TYPE    DATA  AGE
    mongo-chart-mongodb  Opaque  3     0s
    ==> v1/Service
    NAME                          TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)          AGE
    mongo-chart-mongodb           NodePort   10.233.30.67  <none>       27017:30000/TCP  0s
    mongo-chart-mongodb-headless  ClusterIP  None          <none>       27017/TCP        0s
    ==> v1/StatefulSet
    NAME                           READY  AGE
    mongo-chart-mongodb-arbiter    0/1    0s
    mongo-chart-mongodb-primary    0/1    0s
    mongo-chart-mongodb-secondary  0/2    0s
    NOTES:
    ** Please be patient while the chart is being deployed **
    MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:
        mongo-chart-mongodb.mongodb-appliance.svc.cluster.local
    To get the root password run:
        export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-appliance mongo-chart-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
    To get the password for "gig" run:
        export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb-appliance mongo-chart-mongodb -o jsonpath="{.data.mongodb-password}" | base64 --decode)
    To connect to your database run the following command:
        kubectl run --namespace mongodb-appliance mongo-chart-mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host mongo-chart-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
    To connect to your database from outside the cluster execute the following commands:
        export NODE_IP=$(kubectl get nodes --namespace mongodb-appliance -o jsonpath="{.items[0].status.addresses[0].address}")
        export NODE_PORT=$(kubectl get --namespace mongodb-appliance -o jsonpath="{.spec.ports[0].nodePort}" services mongo-chart-mongodb)
        mongo --host $NODE_IP --port $NODE_PORT --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

    The output reports all pods being created and lists several useful commands to connect to the MongoDB instance. When the pods in the namespace mongodb-appliance are installed correctly, their status is Running:

    $ kubectl get po -n mongodb-appliance -o wide
    NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE
    mongo-chart-mongodb-replicaset-0   1/1     Running   0          4m15s   10.233.107.8   worker-2-ch-lug-dc01-002  
    mongo-chart-mongodb-replicaset-1   1/1     Running   0          2m48s   10.233.100.6   worker-1-ch-lug-dc01-002
    mongo-chart-mongodb-replicaset-2   1/1     Running   0          111s    10.233.96.8    worker-0-ch-lug-dc01-002
    tiller-deploy-867bc6989c-6gk8g     1/1     Running   0          4m45s   10.233.96.7    worker-0-ch-lug-dc01-002

    When persistent volumes are mounted on each pod, the PVCs should be listed in the namespace:

    $ kubectl get pvc -n mongodb-appliance
    NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                        AGE
    datadir-mongo-chart-mongodb-primary-0     Bound    pvc-00bf4414-287d-491d-a702-43c6a5b9f7d5   8Gi        RWO            gig.tech-ovc-k8s-storage-provider   14m
    datadir-mongo-chart-mongodb-secondary-0   Bound    pvc-e0a660ad-6a80-4064-9b16-bac08884ee0b   8Gi        RWO            gig.tech-ovc-k8s-storage-provider   14m
    datadir-mongo-chart-mongodb-secondary-1   Bound    pvc-0df0ebf6-baab-420e-897b-3a8a53bcac29   8Gi        RWO            gig.tech-ovc-k8s-storage-provider   14m
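    Since the Terraform configuration forwards public port 27017 of the VFW to NodePort 30000 on a worker and the chart was installed with service.nodePort=30000, the replica set is also reachable from outside the cloudspace. A minimal check (replace the placeholder with the public IP of your VFW, e.g. from terraform output kube-mgt):

    # connect through the VFW port forward defined in main.tf (public 27017 -> NodePort 30000)
    mongo --host <VFW public IP> --port 27017 --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD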

If you use the Kubernetes Web UI, you can check the Overview page to see the workload status and deployment information.

picture

Check out the Persistent Volumes page to see all the volumes created.

picture

Notes

  • In case the worker hosting the CSI driver pods is deleted from the cluster, Kubernetes will recreate them on another worker. Note that Kubernetes will not move the pods if the node is merely unreachable as a result of being stopped, destroyed or having network issues. This logic protects against split-brain issues that can occur if the lost worker comes back online after the pods were recreated on another node (see Documentation).

    In order to force the persistent storage and the CSI driver pods to move to another worker, delete the unreachable worker from the cluster:

    kubectl delete node worker1

    You can check which nodes are unreachable (status NotReady) by executing:

    kubectl get nodes

Conclusion

This tutorial covered how to set up a Kubernetes cluster with an arbitrary number of master and worker nodes and with persistent storage managed by a CSI driver. To illustrate the deployment of applications on a Kubernetes cluster with Helm, we used the stable/mongodb chart configured as a replica set.

Currently, deployment of the Kubernetes cluster is supported within a single G8. A future direction of development is building a geo-redundant Kubernetes cluster that can be deployed across several G8s.