Running a HA Kubernetes Cluster on Nirvana Cloud
As our digital ecosystem continues to evolve, the need for efficient and scalable solutions to manage applications becomes increasingly critical. Kubernetes, an open-source orchestration platform, has revolutionized the way users and companies deploy, scale, and manage applications in a cloud environment.
To follow along with this guide, you will need:
- A Nirvana Labs account: To create a Nirvana Labs account, navigate to nirvanalabs.io and sign up. If you need a step-by-step guide on how to create an account, please refer to our documentation.
- A basic understanding of Kubernetes: This guide is intended for developers and engineers who already have a general familiarity with Kubernetes.
Step 1: Defining Cloud Resources
To effectively create the necessary cloud resources for a Kubernetes cluster, it's essential to define the workloads it will support.
In this guide, we'll deploy a lightweight, fully highly available (HA) RKE cluster.
Rancher Kubernetes Engine (RKE) is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
At the time of writing, the Nirvana Managed Kubernetes Service is under development and will be released soon.
To establish a fully HA cluster, the following Nirvana Cloud resources are required:
- 1 x VPC (Virtual Private Cloud)
- 1 x Public IP (for the Rancher Dashboard VM)
- 1 x Virtual Machine (for the Rancher Frontend)
  - 4 CPUs
  - 16 GB RAM
  - 60 GB NVMe storage
- 3 x Virtual Machines (for the Kubernetes nodes)
  - 4 CPUs each
  - 8 GB RAM each
  - 60 GB NVMe storage each
Please note that the resource requirements can vary depending on the size and demands of your cluster. The cluster configuration detailed in this tutorial is considered small.
Now, let's examine what these resources entail for our cluster setup:
The "Rancher Frontend" VM will serve as the management center for our Kubernetes cluster. Note that RKE doesn't come with Rancher by default.
We will use a public IP to access the "Rancher Dashboard."
The "Kubernetes Nodes" comprise our Kubernetes cluster. To maintain full HA, we require each node to function both as a controller and a worker, which we will explore in greater detail later on in the guide. All VMs will be situated within the same VPC, sharing the same subnet.
Step 2: Creating Cloud Resources
Now that we have outlined the required resources for our cluster, we can proceed with their creation.
Navigate to your "Cloud Dashboard."
Click on "Create a Virtual Private Cloud."
First, we must choose a region. This region determines where the VPC and all resources subsequently attached to it are physically located.
Second, we need to change the auto-generated name of the VPC to "nirvana-kube-demo" for clarity and ease of use.
Once that's done, we click on "Create a Virtual Private Cloud" to establish the new VPC.
On the "Cloud Dashboard," we can now see the VPC we just created. With the VPC set up, it's time to move on to creating VMs. The first machine we will create is the "Rancher Frontend" machine. To do this, we click "Create a Virtual Machine."
We first select the same region as our VPC.
We then name the VM "rancher-frontend-vm" to maintain consistency and make it easily identifiable within our infrastructure.
The next step is to assign the VM to the correct VPC – in this case, it's "nirvana-kube-demo."
We ensure that we check the box to enable a public IP, which is necessary for remote SSH and access to the "Rancher Dashboard."
Following that, we specify our desired source IP in the "Quick Select IP" field.
We make sure to check the box for port 22, which is the default port for SSH connections.
To enable secure SSH access to the machine, we also need to upload or paste our public SSH key into the relevant field.
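If you don't yet have a key pair on your local machine, you can generate one and print the public half to paste into the field. A minimal sketch; the file name "nirvana-demo" and the comment are our own choices, not requirements:

```shell
# Generate a dedicated ed25519 key pair with no passphrase.
# The key name "nirvana-demo" is an assumption, not a platform requirement.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/nirvana-demo" -N "" -C "nirvana-kube-demo"
# Print the public half; this is what gets pasted into the form.
cat "$HOME/.ssh/nirvana-demo.pub"
```

The private key stays on your machine; only the `.pub` file ever leaves it.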
Now, let's move on to configuring the resources for our "Rancher Frontend" VM.
We adjust the settings to allocate the desired amount of CPUs, RAM, and storage for the "Rancher Frontend" VM, according to the specifications necessary for our environment.
Once the resources are configured, we click on "Create Virtual Machine."
Shortly after initiating the creation process, we should observe the status of our virtual machine change to "Provisioning." After the provisioning process is complete and the system initializes, the status should then change to "Healthy," indicating that the VM is running and ready to be used.
Once the machine's status is marked as "Healthy," we will be able to connect to it remotely using SSH (Secure Shell). The default user provided for the VM is 'ubuntu'.
To SSH into the machine, we execute the following command from our local terminal, substituting the VM's public IP:
ssh ubuntu@<PUBLIC_IP>
Awesome! Let's run our first command:
sudo apt update && sudo apt upgrade -y && sudo apt -y autoremove && sudo apt -y autoclean
This refreshes the package lists, upgrades all installed packages, and removes packages that are no longer needed.
Next, we want to install Docker. Docker provides a convenience script that we can download and then run:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
The first command only downloads the script; the second actually runs it and installs Docker. We can make sure that it has been successfully installed by running:
sudo docker --version
We should get an output with the installed version. Let's now move on to installing Rancher.
Step 3: Installing and Configuring Rancher
We are now ready to install Rancher on our VM.
Run the following command:
sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
This will spin up our Rancher frontend server and the dashboard will be served on port 443.
The docker container usually takes 1-2 minutes to come up and for the dashboard to be available. To confirm that the Rancher server container is running and to monitor its progress as it starts up, you can execute the following command:
sudo docker ps -a
This command lists all Docker containers, both running and stopped (the -a flag includes stopped ones). You should be able to see the Rancher container in the list, along with details such as container ID, image used, command, created time, status, ports, and names.
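Rather than re-running that command by hand, the wait can be scripted. A minimal sketch, assuming Rancher's health endpoint at `/ping` (which answers `pong`) and the self-signed certificate it generates by default (hence `curl -k`); `wait_for_rancher` is our own helper name:

```shell
# Poll the Rancher server until its health endpoint answers, or give up.
# Assumes /ping returns "pong" and the cert is self-signed (curl -k).
wait_for_rancher() {
  url="$1"
  tries="${2:-30}"   # default: 30 attempts, 5 seconds apart
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -ks "$url/ping" | grep -q pong; then
      echo "Rancher is up at $url"
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "timed out waiting for $url" >&2
  return 1
}
# Usage: wait_for_rancher https://localhost
```

Run it on the VM itself against `https://localhost`, or from your workstation against the public IP once the security rule below is in place.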
To access the dashboard, we need to add a security rule that allows access to port 443. We click on the VPC we created. To add the security rule, we click on "Add Security Rule."
For the source, we use our public IP (that of our home office or internal network). As the destination IP, we select the private IP of the "Rancher Frontend" VM (in this case, ending in xxx.130). For the destination port, we enter 443. The protocol should be set to TCP.
Once the security rule has been created, we should see it reflected in the settings as described.
Now, let's navigate to the Rancher Dashboard.
After waiting for 1-2 minutes, we can check whether the Rancher dashboard is available by navigating to https://<PUBLIC_IP>, using the public IP of the "Rancher Frontend" VM. Since Rancher generates a self-signed certificate by default, the browser will warn about the connection; we can safely proceed for this setup.
If all went well, you should be greeted by the following screen:
In order to finish setting up your Rancher install, you will need to find your docker container ID, and run the following command on your VM (grab the docker container id by running "sudo docker ps"):
sudo docker logs <CONTAINER ID> 2>&1 | grep "Bootstrap Password:"
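If you'd rather not pick the password out of a noisy log line by eye, a small helper can strip everything but the value. This is a convenience sketch of our own, not part of Rancher; `get_bootstrap_password` is a made-up name:

```shell
# Read Rancher log output on stdin and print only the bootstrap password.
get_bootstrap_password() {
  grep "Bootstrap Password:" | sed 's/.*Bootstrap Password: //'
}
# Usage: sudo docker logs <CONTAINER ID> 2>&1 | get_bootstrap_password
```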
Paste the password from your terminal into the dashboard and set up your user.
Next, we need to make sure that we are using the private IP as the kube API since our nodes will be talking to each other within our subnet and not over the public internet.
Change the "Server URL" to the private IP of the Rancher Frontend VM (in this case, ending in xxx.130). We are done with Rancher for the time being.
Step 4: Bootstrapping our Kubernetes Nodes
Let's provision and bootstrap our Kubernetes nodes. We will use the "Rancher Frontend" VM as a jump host, or "jump box," to establish SSH connections to the Kubernetes nodes, which do not have public IP addresses. This allows secure access to the nodes within our private network.
We first need to create an SSH key on the Rancher Frontend VM using the following command:
ssh-keygen -t ed25519
We choose the default location and name for the key. Now let's print the public key so we can copy it:
cat ~/.ssh/id_ed25519.pub
We copy the contents and use this key as the SSH public key in our Kubernetes nodes.
We create the Kubernetes node VMs the same way that we created the Rancher Frontend VM. Since we won't be adding a public IP to these nodes, we will not check that "Public IP" box.
Once the Kubernetes node VMs are "Healthy", we can SSH into each using the Rancher Frontend VM as a jump box. We do this much the same way as when we SSHed into the Rancher Frontend VM.
From inside the Rancher Frontend VM, we run the following for each node, substituting that node's private IP:
ssh ubuntu@<NODE_PRIVATE_IP>
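If you prefer a single hop from your workstation instead of SSHing twice, OpenSSH's ProxyJump option (`-J`) can route the connection through the frontend. A hedged sketch; `jump_ssh` is a hypothetical helper and the IPs in the usage line are placeholders:

```shell
# SSH from the workstation to a private node, jumping through the frontend.
jump_ssh() {
  frontend="$1"   # public IP or DNS name of the Rancher Frontend VM
  node="$2"       # private IP of the Kubernetes node
  shift 2
  ssh -J "ubuntu@$frontend" "ubuntu@$node" "$@"
}
# Usage: jump_ssh 203.0.113.10 10.0.1.11
```

This requires your local key to be accepted by both machines (for example via `ssh-agent` forwarding or by adding it to each VM).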
Let's update each of the VMs:
sudo apt update && sudo apt upgrade -y && sudo apt -y autoremove && sudo apt -y autoclean
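Running the same update on three nodes by hand gets tedious; from the jump host it can be looped. A sketch with placeholder IPs; `update_nodes` is our own helper name, not part of any tool used here:

```shell
# Run the update/upgrade/cleanup sequence on every node in the list.
update_nodes() {
  for node in $1; do
    ssh "ubuntu@$node" \
      'sudo apt update && sudo apt upgrade -y && sudo apt -y autoremove && sudo apt -y autoclean'
  done
}
# Usage (substitute your nodes' private IPs):
# update_nodes "10.0.1.11 10.0.1.12 10.0.1.13"
```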
After the nodes have been updated, we can turn our attention back to our Rancher dashboard.
Open the hamburger menu and click on "Cluster Management." On the "Clusters" page, click on "Create" and choose "Custom."
We will name our cluster and keep other settings as default.
On the next screen we see the registration command that we will run on the Kubernetes nodes.
We want to make sure to select "Insecure" since we do not have an SSL certificate at the moment.
Finally, we copy the registration command and paste it into each of the Kubernetes nodes. We will see the nodes start to appear in the dashboard once the command has been run.
After some time, an init node will be chosen, and configs will be pushed to the other nodes from this node.
And finally, we will see all nodes are healthy. We can now click on "Explore" to dive into the cluster.
The Rancher Dashboard presents a user-friendly interface for our newly created RKE (Rancher Kubernetes Engine) cluster. To interact with the cluster using `kubectl`, which is the command-line tool for Kubernetes, you can download the kube config file directly from the dashboard.
To download the kube config file, look for the "Kubeconfig file" icon in the top right corner of the Rancher Dashboard. Clicking on this icon will give you the option to download the configuration file, which you can then use with `kubectl` on your local machine or another environment. This kube config file contains all the necessary details to establish communication with your Kubernetes cluster.
Once the kube config file is downloaded and properly configured on your system, you can run `kubectl` commands to manage your workloads and interact with your cluster.
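A first look at the cluster might go like this. The kubeconfig file name below is an assumption, since Rancher names the download after the cluster, here "nirvana-kube-demo":

```shell
# Point kubectl at the downloaded kubeconfig (path/name are assumptions).
export KUBECONFIG="$HOME/Downloads/nirvana-kube-demo.yaml"
# Only query the cluster if kubectl and the config file are actually present.
if command -v kubectl >/dev/null 2>&1 && [ -f "$KUBECONFIG" ]; then
  kubectl get nodes -o wide   # all three nodes should report Ready
  kubectl get pods -A         # system pods across every namespace
fi
```

With three nodes each acting as controller and worker, `kubectl get nodes` should show every node carrying both roles.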
Our High Availability (HA) Kubernetes Cluster is now fully set up and ready to handle both test and production workloads! You can now proceed with deploying applications, managing resources, and optimizing the cluster according to your needs. Remember to follow best practices for security and monitoring to maintain the health and performance of your Kubernetes environment.
Thank you for following this tutorial. If you have any questions or comments, please don't hesitate to reach out to our support team!
Senior Cloud Architect @ Nirvana Labs