Updated: Aug 1, 2020
No one can deny that containers have become the favorite platform for developing and deploying applications, but we also need something to manage those containers: scale the number of replicas up and down, expose containers as services, provide load balancing, and so on. For all of this there is indeed a way, and that is Kubernetes.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014.
The purpose of Kubernetes is to host your application in the form of containers.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
In short, Kubernetes is a container orchestration tool.
Kubernetes Cluster Components
When we deploy Kubernetes, we get a cluster. A cluster is simply a number of nodes (servers) working together to fulfill a common purpose.
There are worker nodes, which do the real work of running the application in the form of pods. Pods are the smallest objects in the Kubernetes world; one pod can hold one or more containers.
The second type of node is the master node which, as the name indicates, manages the cluster as a whole: it takes important decisions about the cluster related to the scheduling of pods, maintains the cluster in a desired state, and responds to cluster events with appropriate actions. This type of node hosts the control plane components of the cluster.
Let's move on and learn about the different components in a Kubernetes cluster.
Control Plane Components
Etcd is a consistent and highly-available key-value store used as Kubernetes’ backing store for all cluster data. Basically, it is a database for Kubernetes data and represents the state of the cluster.
API Server is what exposes the Kubernetes API, as its name suggests. It is the main management point of the entire cluster and acts as the bridge between the various components, disseminating information and commands. In simple terms, it is the front end of the Kubernetes control plane: all cluster components talk to each other through the kube-apiserver.
Controller Manager is responsible for regulating the state of the cluster and performing routine tasks. For example, the replication controller ensures that the number of replicas defined for a service matches the number currently deployed on the cluster. Another example is the endpoints controller adjusting, well, endpoints by watching for changes in Etcd.
Scheduler Service is what assigns workloads to nodes. This is how it does it:
Reads the workload's operating requirements
Analyzes the current infrastructure environment
Places the workload on an acceptable node (or nodes)
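As a sketch of the first step above, a pod can declare its operating requirements as resource requests in its manifest; the scheduler reads these and only places the pod on a node with enough free CPU and memory. The pod name, image, and values below are purely illustrative:

```shell
# Hypothetical pod with resource requests; the scheduler uses these
# "operating requirements" when choosing a node for the pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"      # a quarter of a CPU core
        memory: "128Mi"
EOF
```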
Docker is used to run your containers; other container runtimes such as rkt or containerd can be used as an alternative to Docker.
Kubelet is the main contact point of each node with the cluster, relaying information to and from the control plane services on the master.
Proxy (kube-proxy) is used for maintaining network rules and performing connection forwarding. This is what enables the Kubernetes service abstraction. It sets up the network rules necessary for containers running on worker nodes to reach each other.
Pod is the smallest object that runs in a Kubernetes cluster. It runs containers inside it and provides the actual functionality.
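As a minimal sketch (the pod name and image below are just examples), a single-container pod can be created against a running cluster like this:

```shell
# A minimal single-container pod; name and image are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx
EOF

# List pods to confirm it is running.
kubectl get pods
```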
Deploy Kubernetes Cluster on CentOS 7
Now we will see how to install Kubernetes on Linux and get things working. We will use CentOS 7 (64-bit) machines hosted on VirtualBox and create a 3-node Kubernetes cluster.
The lab consists of the nodes below.
Perform the following steps on all 3 nodes.
Disable SELinux. For this, open the SELinux configuration file, change the value to SELINUX=disabled, save and exit the file, and reboot the servers.
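On CentOS 7 the SELinux configuration normally lives at /etc/selinux/config; a sketch of this step (run as root) could look like this:

```shell
# Switch SELinux to disabled in its config file (takes effect after reboot).
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Optionally switch to permissive mode right away, without waiting for the reboot.
setenforce 0
```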
Disable swap. For this, open the file /etc/fstab, comment out the line corresponding to swap memory, and turn swap off for the current session.
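A sketch of the commands for this step (run as root); the sed pattern assumes a standard fstab layout with swap in its own uncommented line:

```shell
# Turn off all swap devices for the current session.
swapoff -a

# Comment out any uncommented swap entry in /etc/fstab so swap stays off after reboot.
sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab
```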
Stop and disable firewalld service
For this, execute the command below.
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
Apply the kernel module and sysctl configurations
[root@node1 ~]# cat /etc/sysctl.conf | grep -v "#"
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind=1
[root@node1 ~]# modprobe br_netfilter
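To make these settings survive a reboot and apply them without one, a sketch like the following can be used (run as root; the modules-load.d filename is just a conventional choice):

```shell
# Load the bridge netfilter module now and make it persist across reboots.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Apply the settings from /etc/sysctl.conf to the running kernel without rebooting.
sysctl -p
```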
Download the docker-ce repo and install Docker
[root@node1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@node1 ~]# yum install docker-ce -y
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# systemctl status docker
Create the kubernetes repo as shown below
[root@node1 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
Install the Kubernetes-related RPMs
[root@node1 ~]# yum install kubeadm kubelet kubectl -y
[root@node1 ~]# systemctl enable kubelet
On the master node, execute the below command to pull the required images and then initialize the cluster.
[root@node1 ~]# kubeadm config images pull
[root@node1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
Now execute the below commands on the master node (node1). Note that the message on the terminal suggests using a regular user, but we will continue with the root user since this is a lab setup.
[root@node1 ~]# mkdir -p $HOME/.kube
[root@node1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@node1 ~]# kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
Now, on both worker nodes, execute the join command, which looks something like the one below. You will have a different token than the one shown here.
[root@node3 kubelet]# kubeadm join 192.168.1.11:6443 --token 5wg1pf.sth5crfotr1gi7qy \
    --discovery-token-ca-cert-hash sha256:d6ac3de0e879525bfb704854d69b9b92c8458c4cfe95ce64def5d3f3a11fb800
After executing the above command on both servers, go to the master node and execute the command below.
[root@node1 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
node1.example.com   Ready    master   20m   v1.18.3
node2.example.com   Ready    <none>   37s   v1.18.3
node3.example.com   Ready    <none>   11m   v1.18.3
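As a quick smoke test of the new cluster (the deployment name and image below are just examples), we can create a small deployment, scale it, and check that its pods land on the worker nodes:

```shell
# Create a test deployment and scale it to two replicas.
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=2

# Expose it inside the cluster and see which nodes the pods were scheduled on.
kubectl expose deployment web --port=80
kubectl get pods -o wide
```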
Oh yeah! Smile please, we have just deployed our first Kubernetes cluster. In the next article we will start digging deeper into what Kubernetes really is.