Venkateshsandupatla
9 min read · Oct 8, 2020


Before stepping into the project, let's get an overview of EKS 😉

What is AMAZON EKS 🤔

Before EKS, let's see: what is KUBERNETES?

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes is a tool which keeps monitoring the containers; if any container goes down, behind the scenes it asks the container runtime (e.g., Docker) to create another container with the same data.
In Docker we have a similar tool, Docker Swarm, which works much like Kubernetes, but Docker Swarm is only applicable to Docker, while Kubernetes works with many container runtimes {Podman, CRI-O, Docker}.

Elastic Kubernetes Service?

Amazon EKS is a managed service that makes it easier to run Kubernetes on AWS. Through EKS, organizations can run Kubernetes without installing and operating a Kubernetes control plane or worker nodes. Simply put, EKS is managed containers-as-a-service (CaaS). EKS creates the master and worker nodes and sets up the entire Kubernetes cluster.

For connecting to AWS EKS we have different ways such as WebUI, CLI, TERRAFORM...

WHAT DOES EKS DO?

Before diving into EKS:
we have to download eksctl in our base OS!

EKSCTL is a simple CLI tool for creating clusters on EKS by just running a single command.
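As an illustration, a minimal cluster can also be described as a single eksctl one-liner (the flag names come from the eksctl CLI; the cluster name here is made up, and the command is echoed rather than executed so the sketch runs without an AWS account):

```shell
# Echo the command instead of running it, so no AWS account is needed.
# --name, --region, --nodes and --node-type are standard eksctl flags.
echo eksctl create cluster --name demo-cluster --region ap-south-1 \
  --nodes 2 --node-type t2.micro
```

Running it for real (without the echo) would build a control plane plus a two-node nodegroup, which is what the YAML file later in this article describes declaratively.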

WHY EKS?

There is no need to launch the master node; EKS launches it behind the scenes, so we don't have to worry about it. EKS creates the worker nodes using EC2 instances, and we use the eksctl command to work with EKS.
For creating worker nodes we can write a YAML file and apply it with eksctl.

eksctl contacts EKS, and behind the scenes EKS asks EC2 to launch the instances (nodes).
EKS also gives us disaster-recovery management: in the Mumbai region, if we launch 3 worker nodes, EKS places them in 1a, 1b, and 1c respectively.
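That spreading is essentially round-robin placement across the region's AZs. A toy sketch (the AZ names are the real Mumbai ones; the node names are invented):

```shell
# Illustrative only: place worker nodes into Mumbai AZs round-robin,
# the way EKS spreads a nodegroup across 1a, 1b and 1c.
azs=(ap-south-1a ap-south-1b ap-south-1c)
for i in 0 1 2; do
  echo "node-$i -> ${azs[i % 3]}"
done
# node-0 -> ap-south-1a
# node-1 -> ap-south-1b
# node-2 -> ap-south-1c
```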

Now let's dive deep into our project.

Our Objectives:

  • Create a Kubernetes cluster using AWS EKS.
  • Integrate EKS with EC2, ELB, EBS, and EFS.
  • Deploy WordPress & MySQL on top of AWS EKS.
  • Use Helm to install and integrate Prometheus and Grafana.

1) Creating a K8s cluster on AWS EKS

Prerequisites: install the AWS CLI in our base OS, log in to our AWS account with the Access Key and Secret Key, and set up eksctl in our base OS.

Now create a yml file for creating the EKS cluster

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mycluster
  region: ap-south-1
nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: hi
  - name: ng-mixed
    minSize: 1
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.010
      instanceTypes: ["t2.micro"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: hi

In the above code we defined 2 node groups in the Mumbai region: ng1 with two on-demand nodes, and ng-mixed, which draws part of its capacity from spot instances.

eksctl create cluster -f filename.yml

Just by running the above command, EKS creates the entire cluster with master & worker nodes. Simple, right?🥰

To see the cluster → eksctl get cluster

To see Node Groups → eksctl get nodegroup --cluster <cluster-name>

The output of the above commands is shown in the screenshot below

Now Our cluster is ready,

Let's deploy WordPress and MySQL and attach PVC with storage class from EBS

Why should we attach PVC 🙄

Anything written inside a pod is lost when the pod restarts. If we attach a PVC, our data survives pod restarts and can be retrieved.

So let's write a yml file for deploying our WordPress app.

# wordpress deployment
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim

Now let's write yml file for MySQL deployment

# mysql deployment
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim

Now let's write a kustomization file. In it we list the files to apply, so there is no need to run each file separately; with one command we can launch our entire WordPress app.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mysql-pass
    literals:
      - password=root
resources:
  - mysql.yml
  - wpd.yml

Now run command → kubectl create -k . (note the space before the dot; it refers to the current directory)

Just by this simple command, our entire WordPress app will be launched

To check just run this → kubectl get all

By pasting our load balancer's DNS name into the browser we can connect to our WordPress site
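To grab that address from the CLI instead of the console, kubectl's jsonpath output works well. In this sketch kubectl is mocked with a shell function (and the hostname is made up) so the example runs without a cluster; the -o jsonpath flag itself is real kubectl syntax:

```shell
# Mock kubectl so this runs anywhere; against a real cluster the same
# query returns the ELB hostname from the Service's status field.
kubectl() {
  echo "a1b2c3d4-1234567890.ap-south-1.elb.amazonaws.com"
}

kubectl get svc wordpress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```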

But there is a challenge/problem

If we scale the pods, the PVC won't be able to attach to all of them, because an EBS volume belongs to a particular availability zone (AZ), so we can't attach the PVC to the other pods 😫

We attached a PVC to our pod, and that PVC takes its storage from an AWS EBS volume; if we want to create another pod and attach the same EBS volume/PVC, we can't 😣

So we would have to create one more PVC/EBS volume for every new pod.

So what might be the solution for our challenge? 🧐

If we have centralized storage, we can attach it to any number of pods

Here comes the role of EFS (Elastic File System ) 😍

Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability.

So let's create EFS from our AWS WebUI

Before launching WordPress & MySQL pods we have to do a small set up.

We have to launch the EFS-PROVISIONER pod and create the STORAGE CLASS and RBAC resources!

But WHY 🤔

EFS_PROVISIONER:-

The EFS Provisioner is deployed as a Pod that has a container with access to an AWS EFS file system.

The container reads a ConfigMap containing the file system ID, the AWS region of the EFS file system, and the name of the provisioner.
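As a sketch, that ConfigMap could look like the following (the key names follow the efs-provisioner project; the file system ID, region, and provisioner name are the same placeholder values used in the manifest later in this article):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-622ea5b3
  aws.region: ap-south-1
  provisioner.name: lw-course/aws-efs
```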

STORAGE CLASS:-

A StorageClass resource is defined whose provisioner attribute determines which volume plugin is used for provisioning a PersistentVolume (PV).

In this case, the StorageClass specifies the EFS Provisioner pod as an external provisioner by referencing the provisioner name from the ConfigMap.

RBAC :

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.

RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

Let's write yml file for these three!

# EFS-PROVISIONER
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-622ea5b3
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: lw-course/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-622ea5b3.efs.ap-south-1.amazonaws.com
            path: /
---
# RBAC
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: lwns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# STORAGE-CLASS
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: lw-course/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

One step closer to launching our WordPress app✌

→ We have to install amazon-efs-utils on our worker nodes.

Log in to our EC2 instances and run → sudo yum install amazon-efs-utils -y

Now run our final command → kubectl create -f filename.yml -n <our-namespace>

Let's check whether the pods are launched by running → kubectl get all --all-namespaces

Now open our load balancer's DNS name in the browser

Let's enter into the exciting part of our Project 👻

# NOW LETS JUMP INTO OUR PROMETHEUS — GRAFANA INTEGRATION BY USING HELM (●'◡'●)!

Let's get an overview of Helm, Prometheus, and Grafana

PROMETHEUS :

Prometheus is an open-source monitoring system with a dimensional data model, a flexible query language, an efficient time-series database, and a modern alerting approach. Prometheus collects metric data from exporter programs and saves it in its time-series database. An exporter is a program we install on the nodes that exposes metrics (for example, RAM and CPU utilization).

Prometheus stores 3 types of data in its database.

  1. Timestamps
  2. Labels
  3. Metric Data
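Concretely, here is what those pieces look like in the text-based exposition format an exporter serves (the metric values below are made up; the block just prints a sample so it is runnable anywhere):

```shell
# Print a sample of Prometheus' exposition format: metric name,
# labels in braces, numeric value. Prometheus attaches the timestamp
# itself at scrape time.
cat <<'EOF'
node_cpu_seconds_total{cpu="0",mode="idle"} 129055.2
node_memory_MemAvailable_bytes 3146530816
EOF
```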

GRAFANA :

Grafana is a multi-platform open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. It is expandable through a plug-in system, and end users can create complex monitoring dashboards using interactive query builders.

HELM :

Helm helps you manage Kubernetes applications: Helm Charts help you define, install, and upgrade even the most complex Kubernetes application

Now let's start hurray🥳

The first thing is to install Helm and Tiller!

Then it's time to initialize Helm

let's run some commands

# helm init

# helm repo add stable https://kubernetes-charts.storage.googleapis.com/

# helm repo list

# helm repo update

# kubectl -n kube-system create serviceaccount tiller

# kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

# helm init --service-account tiller

# kubectl get pods --namespace kube-system

Now lets run Prometheus commands:

# kubectl create namespace prometheus

# helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"

# kubectl get svc -n prometheus

# kubectl -n prometheus port-forward svc/flailing-buffalo-prometheus-server 8888:80

Lets install grafana through helm command:

# kubectl create namespace grafana

# helm install stable/grafana --namespace grafana \
    --set persistence.storageClassName="gp2" \
    --set adminPassword='GrafanaAdm!n' \
    --set datasources."datasources\.yaml".apiVersion=1 \
    --set datasources."datasources\.yaml".datasources[0].name=Prometheus \
    --set datasources."datasources\.yaml".datasources[0].type=prometheus \
    --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local \
    --set datasources."datasources\.yaml".datasources[0].access=proxy \
    --set datasources."datasources\.yaml".datasources[0].isDefault=true \
    --set service.type=LoadBalancer
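All of those --set flags are easier to read as the equivalent values file, passed with -f values.yml instead (a sketch of the same settings; the value names follow the stable/grafana chart):

```yaml
adminPassword: "GrafanaAdm!n"
persistence:
  storageClassName: gp2
service:
  type: LoadBalancer
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.prometheus.svc.cluster.local
        access: proxy
        isDefault: true
```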

# kubectl get secret worn-bronco-grafana --namespace grafana -o yaml

Now open the load balancer's DNS name in the browser

Thank you for reading!
