KUBERNETES: THE KOPS WAY (PART-2)


In the last article, we learnt about the basics of Kubernetes and KOPS. In this article, we will create a 3-node Kubernetes cluster using KOPS.


KOPS IN ACTION

  • Launch an EC2 instance of type Amazon Linux (t2.micro) and execute the commands below.

yum install epel-release -y
yum install python-pip -y
pip install awscli
aws configure
[root@ip-172-31-42-188 ~]# aws configure
AWS Access Key ID [None]: 
AWS Secret Access Key [None]: 
Default region name [None]: ap-south-1
Default output format [None]: 
# Here we have not provided keys because we are going to use an IAM role

  • Create an IAM role with the below permissions and attach it to the EC2 instance.

AmazonEC2FullAccess

AmazonRoute53FullAccess

AmazonS3FullAccess

IAMFullAccess

AmazonVPCFullAccess


For demo purposes, we will attach an admin role to the EC2 instance.
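Once the role is attached, you can confirm that the instance is picking up credentials from it (remember, we configured no keys locally) with a quick STS check:

aws sts get-caller-identity
# The returned Arn should contain the name of the attached role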

  • Install KOPS

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
mv kops-linux-amd64 /usr/local/bin/kops
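To confirm the kops binary is installed and on the PATH, check its version:

kops version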
 


  • Install kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
kubectl version --client
  • Create an Amazon S3 bucket for the Kubernetes state store as shown below.

Log in to the AWS console -- Go to Services -- Go to S3 -- Create bucket


Kops needs a “state store” to hold the configuration information of the cluster: for example, how many nodes there are, the instance type of each node, and the Kubernetes version. The state is stored during the initial cluster creation, and any subsequent changes to the cluster are persisted to this store as well. As of publication, Amazon S3 is the only supported storage mechanism. Create an S3 bucket and pass it to the kops CLI during cluster creation.
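If you prefer the CLI over the console, the bucket can also be created from the EC2 instance. This is a sketch assuming the bucket name used later in this article (kops-demo-kubernetes-linuxadvise) and the ap-south-1 region; enabling versioning on the state store bucket is commonly recommended so that earlier cluster state can be recovered.

aws s3api create-bucket --bucket kops-demo-kubernetes-linuxadvise --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
aws s3api put-bucket-versioning --bucket kops-demo-kubernetes-linuxadvise --versioning-configuration Status=Enabled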


  • DNS registration with Route53

A top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. It is also needed for kubectl to be able to talk directly with the master. So we are going to create a private hosted zone in Route53 and use the default VPC for this.

For this, go to the AWS console -- Services -- Route53 -- Hosted Zone -- Create hosted zone. We will use example.com as the domain and create it as a private hosted zone associated with the default VPC.
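The console steps above can also be done from the CLI. A sketch, assuming the default VPC in ap-south-1; <default-vpc-id> is a placeholder for the ID returned by the first command, and the caller reference is just a unique string:

# Look up the default VPC ID
aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId' --output text
# Create a private hosted zone for example.com associated with that VPC
aws route53 create-hosted-zone --name example.com --caller-reference kops-demo-$(date +%s) --vpc VPCRegion=ap-south-1,VPCId=<default-vpc-id>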

  • Edit the .bashrc file of the ec2-user

This keeps these values available in every shell session. Open the .bashrc file located in the home directory of the ec2-user and add the lines below.


vi .bashrc
# Add below lines
export KOPS_STATE_STORE=s3://kops-demo-kubernetes-linuxadvise
export KOPS_CLUSTER_NAME=linuxadvise-k8.example.com
# save and exit and execute below command
source .bashrc
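To confirm the variables are set in the current shell:

echo $KOPS_STATE_STORE
echo $KOPS_CLUSTER_NAME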

  • Create an SSH key pair to log in to the cluster nodes

Use the command ssh-keygen and select the default values.
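A non-interactive equivalent is sketched below; the -N "" flag sets an empty passphrase and -f writes the key to the default location, which is simply a shortcut for accepting the prompts.

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa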

Then execute the below command to import the public key as a kops secret.

kops create secret --name linuxadvise-k8.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub

  • Create the kubernetes cluster

The Kops CLI can be used to create a highly available cluster, with multiple master nodes spread across multiple Availability Zones. Workers can be spread across multiple zones as well. Some of the tasks that happen behind the scenes during cluster creation are:

  1. Provisioning EC2 instances

  2. Setting up AWS resources such as networks, Auto Scaling groups, IAM users, and security groups

  3. Installing Kubernetes.

  • Start the Kubernetes cluster using the following commands:

kops create cluster --state=${KOPS_STATE_STORE} --node-count=2 --master-size=t2.micro --node-size=t2.micro --zones=ap-south-1a,ap-south-1b --name=${KOPS_CLUSTER_NAME} --dns private --master-count 1
kops update cluster --name linuxadvise-k8.example.com --yes


  • After some time, the EC2 instances will be launched; wait until they are ready.


  • Validate the Cluster

Wait for at least 15 minutes and run the below command to validate the cluster. The output should show the master and both worker nodes as ready.
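Since KOPS_CLUSTER_NAME and KOPS_STATE_STORE are exported in .bashrc, no extra flags are needed:

kops validate cluster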


  • Access the kubernetes master node

That is really cool! Our cluster is ready and we are now going to SSH to the master node 😊

ssh -i ~/.ssh/id_rsa admin@api.linuxadvise-k8.example.com


All set! We are able to SSH to the master and run kubectl commands. As of now we don't have any resources in the cluster, so please go ahead and launch a pod 😊
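As a quick sketch, you can launch an nginx pod and watch it come up (with recent kubectl versions, kubectl run creates a single pod; older versions created a Deployment):

kubectl run nginx --image=nginx
kubectl get pods -o wide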

CLUSTER MANAGEMENT WITH KOPS

To stop the cluster without deleting it, scale the instance groups down to zero and apply the change:

kops edit ig nodes              # set minSize and maxSize to 0
kops get ig                     # note the name of the master instance group
kops edit ig <master-ig-name>   # set minSize and maxSize to 0
kops update cluster --yes

DELETE A CLUSTER

kops delete cluster --yes

That is it guys... go and play with your kubernetes cluster :)





