Minikube Installation
Create a Dockerfile
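Here is a minimal Dockerfile sketch for the frontend, assuming a Node.js app that listens on port 3000; the base image, dependency files, and start command are assumptions and should be adjusted to match the actual app.
FROM node:18-alpine
WORKDIR /app
# install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install
# copy the application source and expose the port used by docker run below
COPY . .
EXPOSE 3000
CMD ["npm", "start"]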
docker build -t three-tier-frontend .
docker images
Now run the image as a container.
docker run -d -p 3000:3000 three-tier-frontend:latest
Also open port 3000 in the security group's inbound rules. Then check with the instance's IPv4 address and port 3000. If the application is running, your configuration is correct.
Now, you must install AWS CLI V2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update
aws configure
Also set up an IAM configuration:
Create a user eks-admin with AdministratorAccess.
Generate Security Credentials: Access Key and Secret Access Key.
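If you prefer to do this from the CLI instead of the console, a rough equivalent is the following (run it with credentials that are allowed to manage IAM; the policy ARN is the AWS-managed AdministratorAccess policy):
aws iam create-user --user-name eks-admin
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name eks-admin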
Pre-requisites -
Ubuntu OS
sudo privileges
Internet access
Virtualization support enabled (check with egrep -c '(vmx|svm)' /proc/cpuinfo; 0 = disabled, 1 or more = enabled)
And you will get all the components of Kubernetes listed out; the cluster is ready.
Now look at the Database manifests. After going through all three YAML files we can see that they are interdependent and all need to be deployed: deployment.yaml runs the MongoDB containers, secrets.yaml provides the username and password required to run MongoDB, and service.yaml lets the other applications reach MongoDB. First create a namespace with kubectl create namespace three-tier. Then deploy all three files inside the Database folder.
kubectl apply -f deployment.yaml
kubectl apply -f secrets.yaml
kubectl apply -f service.yaml
mongodb-svc created
If no type is assigned to the Service, it defaults to 'ClusterIP'.
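For reference, a minimal sketch of what secrets.yaml and service.yaml typically contain is shown below; the names, selector labels, and credential values here are placeholders, not the exact contents of the repository's files.
apiVersion: v1
kind: Secret
metadata:
  name: mongo-sec
  namespace: three-tier
type: Opaque
data:
  username: YWRtaW4=              # "admin", base64-encoded
  password: cGFzc3dvcmQxMjM=      # "password123", base64-encoded
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-svc
  namespace: three-tier
spec:
  selector:
    app: mongodb                  # must match the Pod labels in deployment.yaml
  ports:
    - port: 27017                 # default MongoDB port
      targetPort: 27017
  # no "type:" field, so Kubernetes defaults to ClusterIP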
Now go to Kubernetes-Manifest-file/Backend and open the deployment.yaml file.
In this file, under spec:, the strategy type is RollingUpdate. A rolling update is a method used to update a deployed application or service in a way that ensures continuous availability and minimal disruption to users. In containerized environments, rolling updates are often used to update applications running in Pods.
Going further down under spec:, there is an image: field; we need to change this image to the ECR backend container path.
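Putting those two pieces together, the relevant part of the Backend deployment.yaml looks roughly like the sketch below; the deployment name matches the api pods seen later, but the replica count, labels, port, and image URI are placeholders that you replace with your own ECR backend path.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: three-tier
spec:
  replicas: 2
  strategy:
    type: RollingUpdate          # replace Pods gradually so the app stays available
    rollingUpdate:
      maxSurge: 1                # at most one extra Pod during the update
      maxUnavailable: 1          # at most one Pod down at a time
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: <account-id>.dkr.ecr.<region>.amazonaws.com/backend:latest   # your ECR backend image
          ports:
            - containerPort: 3500   # port is an assumption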
Now just deploy the deployment file of the backend and check the logs.
kubectl apply -f backend/deployment.yaml
kubectl logs api-64fb8488f8-xq4mk -n three-tier
The Frontend's deployment.yaml and service.yaml follow the same pattern; go through them and then deploy both.
kubectl apply -f Frontend/deployment.yaml
kubectl apply -f Frontend/service.yaml
Now, running kubectl get pods with the namespace flag lists all the running pods.
kubectl get pods -n three-tier
To access these three running pods we need to apply ingress.yaml, and for that you will need an ingress controller. That is a tricky part in itself: it needs several manifest files that nobody writes by hand, so a package named Helm is used to install the ingress controller (here, the AWS Load Balancer Controller). Before installing it, we have to set up an IAM policy and service account for the load balancer. Step 9 has all the commands.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicyForEKS --policy-document file://iam_policy.json
By creating this policy, the cluster and the load balancer get connectivity.
eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=three-tier-cluster --approve
This acts as a kind of gate, a lock and key, that approves the load balancer to talk to the cluster.
eksctl create iamserviceaccount --cluster=three-tier-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRoleForEKS --attach-policy-arn=arn:aws:iam::626072240565:policy/AWSLoadBalancerControllerIAMPolicyForEKS --approve --region=us-west-2
With this, the service account gets access to the cluster.
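The Helm install step itself is not shown above; the usual way to install the AWS Load Balancer Controller with Helm, assuming the same cluster name and the service account created by the previous command, is roughly:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=three-tier-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller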
kubectl get nodes will show the two nodes. Also run kubectl top node; this will show the same two nodes with their CPU and memory usage.
Now go to kubernetes_manifest_yml_files-main folder > 05-HPA
cat 01_Deployment.yml
cat 02_Service.yml
cat 03_HPA.yml
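For reference, 03_HPA.yml typically looks something like the sketch below; the target name and thresholds here are illustrative, so check the actual file with the cat command above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app               # the Deployment from 01_Deployment.yml
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50 # scale out when average CPU use crosses 50%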
This will increase the load, simulating traffic. In a duplicate terminal, try kubectl get pods and kubectl get hpa to watch the replicas scale out.
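The load-generation command itself isn't shown here; a common sketch, adapted from the standard Kubernetes HPA walkthrough with the target service name left as a placeholder, is:
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://<your-service>; done"
Because this runs in the foreground, stopping it with Ctrl+C (next step) stops the load.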
Now break the load with Ctrl+C; the load percentage will decrease gradually and the corresponding pods will also scale back down.