Deploy your first Kubernetes cluster with Terraform and manage it with Rancher

Stefano Cucchiella, DevOps

Updated on 10/02/2021

In the era of DevOps and microservices, Kubernetes plays an important role in the IaaS ecosystem, adding flexibility and simplifying the implementation of an application's underlying platform.
However, this is only true to a certain extent: namely, when you have a wide range of tools that let you control, monitor and scale your infrastructure according to your application's needs.

In this guide I will describe how to create a basic Kubernetes cluster in City Cloud using Terraform with the RKE community provider, and how to import the newly created cluster into Rancher.

Rancher is a Kubernetes cluster manager. It can be installed into a Kubernetes cluster, which itself can be provisioned by RKE (Rancher Kubernetes Engine) or, within Terraform, by the RKE community provider.

Note. Terraform is an open-source infrastructure-as-code tool created by HashiCorp. It enables users to define and provision datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. (Source: Wikipedia)

Prerequisites

You need a City Cloud account.

*New user? Get $100 worth of usage for free!

Overview

In this example, we will follow these steps:

Step 1 – Create the Terraform configuration
Step 2 – Source your OpenStack project RC file
Step 3 – Apply the configuration
Step 4 – Verify the cluster
Step 5 – Access the Kubernetes Dashboard
Step 6 – Access the Rancher UI
Step 7 – Import your cluster nodes

Step 1 –  Create the Terraform configuration

In this step we will create the Terraform configuration to deploy our nodes and install the Rancher Server.

We will create the following VMs:

– 1 VM for the Rancher Server

– 1+2 VMs for a Kubernetes cluster: 1 master (etcd + control plane) and 2 worker nodes

Each VM has 4 vCPUs, 4 GB of RAM and 50 GB of disk.

Also, we will use RancherOS, the smallest and easiest way to run Docker, even in production. 

The Rancher example code folder containing the Terraform configuration is available in our GitHub repository.
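To give an idea of what the configuration looks like, here is a minimal sketch of the key resources. It is illustrative rather than the complete example from the repository: the image name, flavor name, key pair and network are assumptions you will need to adapt to your project. The resource names rke_cluster.cluster and local_file.kube_cluster_yaml match the apply output shown in Step 3.

# main.tf - a minimal sketch, not the complete example from the repository
terraform {
  required_providers {
    openstack = { source = "terraform-provider-openstack/openstack" }
    rke       = { source = "rancher/rke" }
  }
}

# One of the cluster VMs (the master); the two workers are declared the same way.
# Image, flavor, key pair and network names are assumptions - adapt them to your project.
resource "openstack_compute_instance_v2" "master" {
  name        = "k8s-master-1"
  image_name  = "RancherOS"
  flavor_name = "4C-4GB-50GB"
  key_pair    = "my-keypair"

  network {
    name = "default"
  }
}

# The RKE community provider turns the VMs into a Kubernetes cluster.
resource "rke_cluster" "cluster" {
  nodes {
    address = openstack_compute_instance_v2.master.access_ip_v4
    user    = "rancher"                  # default user on RancherOS
    role    = ["controlplane", "etcd"]
    ssh_key = file("~/.ssh/id_rsa")
  }
  # ...one nodes block per worker, with role = ["worker"]...
}

# Write the generated kubeconfig next to the configuration,
# as seen in the apply output in Step 3.
resource "local_file" "kube_cluster_yaml" {
  filename = "${path.root}/kube_config_cluster.yml"
  content  = rke_cluster.cluster.kube_config_yaml
}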

Step 2 – Source your OpenStack project RC file

Download your OpenStack project RC file from the control panel (How-to?).

export OS_REGION_NAME=***
export OS_USER_DOMAIN_NAME=***
export OS_PROJECT_NAME=***
export OS_AUTH_VERSION=***
export OS_IDENTITY_API_VERSION=***
export OS_PASSWORD=***
export OS_AUTH_URL=***
export OS_USERNAME=***
export OS_TENANT_NAME=***
export OS_PROJECT_DOMAIN_NAME=***

Source the file:

$ source openstack.rc

Terraform will automatically pick up these environment variables when it needs them.

More information on how Terraform's OpenStack provider uses these environment variables can be found in the provider documentation.
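With the variables exported, the provider block in the configuration can stay empty; when given no arguments, the OpenStack provider falls back to the OS_* environment variables:

# provider.tf - credentials are picked up from the OS_* environment variables
provider "openstack" {}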

Step 3 – Apply the configuration

Once the configuration is ready, it's time to initialise Terraform and apply it.

Initialise Terraform in the same directory where the configuration files are stored by running:

$ terraform init
...
Terraform has been successfully initialized!

We can now apply the configuration using the following command:

$ terraform apply
...
rke_cluster.cluster: Creation complete after 4m46s [id=01300fd9-4630-487a-b22a-ba525b7deacb]
local_file.kube_cluster_yaml: Creating...
local_file.kube_cluster_yaml: Creation complete after 0s [id=efb10a0d09892dbe00d7a7bbd21eac3d10c1fe37]
 
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
 
Outputs:
 
Rancher_Server_IP = "https://_._._._"
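If you need the Rancher URL again later, Terraform can re-print any output from the state:

$ terraform output Rancher_Server_IP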

Terraform generates the terraform.tfstate file, which it uses to store and maintain the state of your infrastructure, as well as kube_config_cluster.yml, which is used to connect to the Kubernetes cluster.

Note. Both the terraform.tfstate* files and kube_config_cluster.yml contain sensitive information, so handle them accordingly.
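If you keep the configuration in version control, a minimal .gitignore along these lines keeps those files out of the repository:

# .gitignore
terraform.tfstate
terraform.tfstate.backup
kube_config_cluster.yml
.terraform/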

Step 4 – Verify the cluster

Now that the 1+2 node configuration has been applied successfully, use the following command to check the cluster's connectivity:

$ kubectl --kubeconfig kube_config_cluster.yml get nodes    
 NAME             STATUS   ROLES               AGE     VERSION
 91.123.203.112   Ready    worker              3m56s   v1.18.6
 91.123.203.127   Ready    controlplane,etcd   3m56s   v1.18.6
 91.123.203.84    Ready    worker              3m51s   v1.18.6
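To avoid passing --kubeconfig to every command, you can optionally export it once for the current shell:

$ export KUBECONFIG=$PWD/kube_config_cluster.yml
$ kubectl get nodes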

Next, check the status of the pods across all namespaces:

$ kubectl --kubeconfig=kube_config_cluster.yml get pods --all-namespaces
 NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
 ingress-nginx          default-http-backend-598b7d7dbd-z6qnl        1/1     Running     0          34m
 ingress-nginx          nginx-ingress-controller-2x7qr               1/1     Running     0          34m
 ingress-nginx          nginx-ingress-controller-7rmd5               1/1     Running     0          34m
 kube-system            canal-8p7ls                                  2/2     Running     0          34m
 kube-system            canal-h246j                                  2/2     Running     0          34m
 kube-system            canal-pq2js                                  2/2     Running     0          34m
 kube-system            coredns-849545576b-6w98j                     1/1     Running     0          34m
 kube-system            coredns-849545576b-n68v7                     1/1     Running     0          33m
 kube-system            coredns-autoscaler-5dcd676cbd-fvxxs          1/1     Running     0          34m
 kube-system            metrics-server-697746ff48-v2pkm              1/1     Running     0          34m
 kube-system            rke-coredns-addon-deploy-job-tnszg           0/1     Completed   0          34m
 kube-system            rke-ingress-controller-deploy-job-h72sz      0/1     Completed   0          34m
 kube-system            rke-metrics-addon-deploy-job-k8xhw           0/1     Completed   0          34m
 kube-system            rke-network-plugin-deploy-job-z64ht          0/1     Completed   0          34m
 kube-system            rke-user-addon-deploy-job-76ljk              0/1     Completed   0          34m
 kube-system            rke-user-includes-addons-deploy-job-w6qv5    0/1     Completed   0          20m
 kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-t4c2b   1/1     Running     0          33m
 kubernetes-dashboard   kubernetes-dashboard-59ddbcfdcb-zm7d8        1/1     Running     0          33m
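All pods should reach Running (the one-shot rke-*-deploy-job pods end up Completed). If some are still starting, you can watch them converge:

$ kubectl --kubeconfig kube_config_cluster.yml get pods --all-namespaces --watch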

Step 5 – Access the Kubernetes Dashboard

In this example we deployed the Kubernetes Dashboard alongside the Rancher dashboard.

To access the Kubernetes Dashboard, first retrieve your cluster token with the following command:

$ kubectl --kubeconfig kube_config_cluster.yml -n kube-system describe secret $(kubectl --kubeconfig kube_config_cluster.yml -n kube-system get secret | grep admin-user | awk '{print $1}') | grep ^token: | awk '{ print $2 }'

Note. To find out more about how to configure and use Bearer Tokens, please refer to the Kubernetes Authentication documentation. 

Copy the command output and launch kubectl proxy with:

$ kubectl --kubeconfig kube_config_cluster.yml proxy
Starting to serve on 127.0.0.1:8001

and visit:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Paste the token generated earlier to log in to the Kubernetes Dashboard.

Step 6 – Access the Rancher UI

Open the Rancher Server URL returned in the Terraform outputs:

Apply complete! Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

Rancher_Server_IP = "https://_._._._"

As this is just an example and no trusted certificates have been used, you need a browser that lets you override certificate warnings, such as Firefox or Safari.

Set a password for the admin user and press Continue.

You have successfully installed the Rancher Management server and its dashboard.

Step 7 – Import your cluster nodes

Once in the dashboard, create a new Kubernetes cluster from ⚙️ Existing nodes.

Enter your cluster name and select Custom as the In-Tree Cloud Provider.

Press Next.

Select the etcd and Control Plane roles and copy the generated command.

Log in to the cluster's master VM using its floating IP, shown in the output of Step 4:

ssh rancher@<vm_floating_ip>

and run the command shown in the UI, similar to the one below (the placeholders stand for the values generated for your cluster):

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<version> --server https://<rancher_server_ip> --token <token> --ca-checksum <checksum> [--etcd] [--controlplane] [--worker]

Rancher will then start registering the VM into your newly created Rancher-managed cluster according to the selected role.
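As a quick sanity check (not part of the official procedure), you can verify on the node that the agent container is up:

sudo docker ps   # the rancher/rancher-agent container should be listed as Up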

A notification bar will also appear in the UI confirming the registration.

🔁 Repeat these steps on the worker nodes, selecting Worker as the role and running the corresponding command on each VM.

Once done, you will be able to see all the resources allocated to your cluster.

🎉 Congratulations!

You have just created your first Kubernetes cluster and imported it into Rancher, one of the most complete open-source Kubernetes managers.

This basic example presented an easy way to deploy a Kubernetes cluster using OpenStack resources in City Cloud, and then manage it via the Rancher Management server.

Cluster creation can also be automated: our Rancher as a Service (RaaS) solution comes with a fully automated way to deploy and manage your cluster via our managed Rancher Management server, available in both Public and Compliant Cloud.

Happy clustering!