Quick Start Kubernetes (K8s) with Docker Desktop
In the past, we used to develop applications as monoliths. The components were tightly coupled, and they all ran on a single computer. Scaling the application vertically was possible, but scaling it horizontally was not.
To meet the needs of modern applications (high availability, growing numbers of clients), monoliths were divided into smaller parts, each the responsibility of a smaller team. These smaller programs had different configurations and used different libraries, so running them together in a single operating system was not an option. Containers helped us solve this problem. A containerized application is an application that runs inside a container. A container image is an immutable static file that includes all the files necessary to run an application inside a container.
The image consists of layers. Think of its operating system as a lightweight one: not all commands are available, only the ones our application needs to run. For example, we will use Alpine (a minimal Linux distribution) in this tutorial.
But once applications were containerized, we faced another problem. These smaller applications need to communicate with each other, and as their numbers grew, managing them was no longer an easy task. An orchestrator is needed to manage all these containers and their communication. Kubernetes and Docker are two collaborating technologies: we use Docker for containerization and Kubernetes for orchestration. There are other orchestrators, such as Docker Swarm, but their market share is smaller and Kubernetes is the preferred technology.
Kubernetes provides service discovery, horizontal scaling, self-healing, load balancing, automated rollouts, and rollbacks. A Kubernetes cluster is a collection of computers, called Nodes. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. According to Kubernetes documentation, pods are defined as the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod always runs on a Node and a Node can have multiple pods.
It will be useful to define some terms before we start the Kubernetes tutorial.
Kube-proxy (the Kubernetes service proxy) runs on each node, handles cluster networking, and load-balances network traffic to the containers. The kubelet is an agent that runs on each node in the cluster and is responsible for keeping the application healthy; when the application terminates, it restarts it.
The Control Plane manages the worker nodes and the Pods in the cluster. The Application (or Data) Plane is the group of nodes where pods run. The API Server is the front end of the control plane and exposes a RESTful endpoint. etcd is a distributed key-value store that Kubernetes uses as its main data store. The Kube-Scheduler selects an optimal node for pods to run on, both for newly created pods and for other unscheduled pods. The Controller-Manager runs controller processes such as the node controller, job controller, and service account controller.
CronJobs perform scheduled actions such as reporting, backups, and so on.
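As a small illustration, a minimal CronJob manifest might look like the sketch below; the name, schedule, and command are hypothetical examples, not part of our project.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report      # hypothetical name
spec:
  schedule: "0 2 * * *"     # cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: alpine
              command: ["echo", "running nightly report"]  # placeholder action
          restartPolicy: OnFailure
```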
When a ReplicaSet is deployed, the pods get randomly generated names that we cannot predict. With a StatefulSet, the pods are created in order, with stable, regular names, and can be accessed over DNS.
ReplicaSets allow us to scale the pods up or down. A Deployment provides declarative updates for Pods and ReplicaSets. With Deployments, we can scale replica pods and roll our deployments back or forward to a specific version. A DaemonSet ensures that all nodes run a copy of a pod. ConfigMaps store data in key-value pairs; the values can be environment variables, command-line arguments, or configuration files, and this data can be consumed by pods. A Pod is rarely used alone; it is usually managed by a ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, or CronJob.
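To make this concrete, here is a sketch of a Deployment that would run three replicas of our kubedemo image; the metadata name and labels are assumptions chosen for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubedemo-deployment   # hypothetical name
spec:
  replicas: 3                 # desired number of pod copies
  selector:
    matchLabels:
      app: kubedemo           # must match the pod template labels below
  template:
    metadata:
      labels:
        app: kubedemo
    spec:
      containers:
        - name: kubedemo
          image: greenredblue/kubedemo:v1
          ports:
            - containerPort: 1999   # the port our application listens on
```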
Secrets hold small amounts of sensitive data, usually a password or a token (as base64-encoded strings). A PersistentVolume (PV) is the physical volume for storing persistent data. A PersistentVolumeClaim (PVC) is a request for storage that is satisfied by a PV.
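A minimal Secret sketch, with a hypothetical name and an example value only (never commit real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kubedemo-secret    # hypothetical name
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64 encoding of "password" — example value only
```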
Services are used to make a pod accessible over the network. The types of services are:
- ClusterIP: The default type; the service is accessible only from inside the cluster.
- NodePort: The service is accessible from outside the cluster. The default port range is 30000–32767.
- LoadBalancer: In addition to a NodePort, this also creates an external load balancer.
- ExternalName: The service is redirected to an external DNS name via a DNS CNAME record.
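The list above can be sketched with a NodePort Service for our application; the service name and node port are assumptions, while the target port comes from our project.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubedemo-service   # hypothetical name
spec:
  type: NodePort           # ClusterIP is the default if type is omitted
  selector:
    app: kubedemo          # routes traffic to pods labeled app: kubedemo
  ports:
    - port: 80             # port of the service inside the cluster
      targetPort: 1999     # the application's container port
      nodePort: 30080      # must fall within 30000-32767
```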
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Although a LoadBalancer Service also lets us reach our microservices, Ingress resources are used together with an Ingress controller (such as ingress-nginx), which fulfills the routing they declare.
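A hedged sketch of such an Ingress, assuming ingress-nginx is installed and a Service named kubedemo-service exists (both names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubedemo-ingress   # hypothetical name
spec:
  ingressClassName: nginx  # assumes the ingress-nginx controller is deployed
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubedemo-service   # hypothetical Service name
                port:
                  number: 80
```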
For more please check out Kubernetes documentation.
Now we are going to create a pod and deploy it to K8s. But before we create the pod, we will create a RESTful web service using Spring Boot. We will keep the code as simple as it can be.
The source code is available on GitHub. The project runs on port 1999, and the actuator dependency is also added; please check the pom.xml file.
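For orientation, the application's configuration might look like the sketch below. Only the port is taken from the project; the actuator exposure line is an assumption about how the endpoints could be configured, so check the repository for the actual settings.

```properties
# a minimal sketch — the repository's actual configuration may differ
server.port=1999
# the actuator dependency exposes endpoints such as /actuator/health
management.endpoints.web.exposure.include=health,info
```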
- Docker Desktop for Windows
- kubectl (Kubernetes command-line interface tool)
Now we are ready to containerize our application and push it to Docker Hub. If you don't have a Docker account, please go and register, and make sure that you have installed Docker Desktop for Windows.
From the settings menu select Kubernetes and check the Enable Kubernetes option. Docker Desktop will automatically install Kubernetes.
If you have successfully installed kubectl, you can verify it by running kubectl version from the command prompt. Now we will dockerize our kubedemo application by creating a Dockerfile. Make sure that the file is outside the kubedemo folder; you can check GitHub for the directory layout.
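A sketch of what such a Dockerfile could contain is shown below. The base image and jar path are assumptions (the repository's build output may be named differently), so adapt them to your project.

```dockerfile
# sketch — assumes a Spring Boot fat jar built under kubedemo/target
FROM openjdk:8-jdk-alpine
COPY kubedemo/target/kubedemo-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 1999
ENTRYPOINT ["java", "-jar", "/app.jar"]
```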
Now, using the command prompt, go to the folder where your Dockerfile is located and run the command below to build your Docker image. The punctuation is important: the trailing dot is part of the command. Here "greenredblue" is the account name, kubedemo is the application name, and v1 is the version tag; don't forget to change these to your own.
docker build -t "greenredblue/kubedemo:v1" .
And now we are ready to push our image to the Docker Hub.
docker push "greenredblue/kubedemo:v1"
Now we are going to create a YAML file and declare our first pod. Kubernetes YAML is one of the ways to serialize an object into a human-readable text file; think of it as similar to a JSON file.
YAML uses Python-style indentation with spaces instead of tab characters, so be careful with the indentation. You can use an online YAML validator if you wish.
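A sketch of what kubedemo.yaml could look like as a bare Pod manifest; the image, container port, and pod name follow the commands used in this tutorial, while the labels are an assumption added for later selectors.

```yaml
# kubedemo.yaml — a minimal Pod declaration
apiVersion: v1
kind: Pod
metadata:
  name: kubedemo
  labels:
    app: kubedemo          # hypothetical label for Services to select on
spec:
  containers:
    - name: kubedemo
      image: greenredblue/kubedemo:v1
      ports:
        - containerPort: 1999   # the port our Spring Boot app listens on
```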
Now we are ready to deploy our first pod to Kubernetes. From the command prompt, go to the location where kubedemo.yaml is located and run:
kubectl apply -f kubedemo.yaml
With the kubectl apply command, we deployed our pod to the Kubernetes environment. After waiting a short while, run the "kubectl get all" command; it lists all pods, services, and deployments in the default namespace. Alternatively, we could just use kubectl get po, where "po" is the shortcut for pods.
As we have not created any services yet, we will manually expose our first pod with the command "kubectl port-forward kubedemo 1881:1999". Our application uses port 1999, and we forward local port 1881 to it; the order of the ports is important (local:remote). When you close the command prompt, the application will no longer be exposed.
For other Kubernetes commands please check the Kubernetes documentation. And for the complete sample check out my repository.
In this tutorial, I tried to summarize some key concepts of the Kubernetes environment and demonstrated a simple walkthrough for beginners. I hope you enjoyed it.
Thanks for reading!!!