Kubernetes Practicals Ebook
CONTENTS
1. Introduction to Kubernetes 03
2. Key definitions and concepts 03
1. What is Kubernetes? 03
2. What is Containerization? 03
3. What is a docker container? 04
4. How Kubernetes differs from Docker project? 05
5. What is orchestration? 05
6. Features of orchestration 05
7. Key Features of Kubernetes 06
8. Work Units of Kubernetes 07
9. Components of Kubernetes 09
3. Kubernetes Concepts 13
1. Pods 14
2. Controllers 17
4. Deploying Kubernetes Manually 20
1. Install Docker Engine on Ubuntu 32
2 Installing etcd 2.0 on Ubuntu 35
3 Installing Addons 35
5. Downloading Kubernetes Docker Images 41
1 Setting up Kubernetes Cluster 41
2 Dockerizing the App 46
3 Writing Kubernetes Manifest Files for Sample App 52
4 Understanding Kubectl Utility 58
5 Launching and Running Container pods with
Kubernetes 61
LinOxide 1
6 Kubernetes - App Deployment Flow 64
7 Kubernetes – Auto scaling 66
8 Destroying Kubernetes Cluster and Pods 71
6. Deploying Kubernetes with Ansible 72
7. Provisioning Storage in Kubernetes 80
1 Kubernetes Persistent Volumes 81
2 Requesting storage 83
3 Using Claim as a Volume 84
4 Kubernetes and NFS 85
5 Kubernetes and iSCSI 87
8. Troubleshooting Kubernetes and Systemd Services 88
1 Kubernetes Troubleshooting Commands 88
2 Networking Constraints 98
3 Inspecting and Debugging Kubernetes 98
4 Querying the State of Kubernetes 101
5 Checking Kubernetes yaml or json Files 106
6 Deleting Kubernetes Components 107
9. Kubernetes Maintenance 109
1 Monitoring Kubernetes Cluster 109
2 Managing Kubernetes with Dashboard 119
3 Logging Kubernetes Cluster 126
4 Upgrading Kubernetes 129
The two technologies play a key role in shifting to DevOps methodologies and CI/CD (continuous integration/continuous delivery).
HOW KUBERNETES DIFFERS FROM DOCKER PROJECT?
WHAT IS ORCHESTRATION?
FEATURES OF ORCHESTRATION
After building the container image you want with Docker, you can
use Kubernetes or other orchestrators to automate deployment on one
or more compute nodes in the cluster. In Kubernetes, interconnections
between a set of containers are managed by defining Kubernetes
services. As demand for individual containers increases or decreases,
Kubernetes can start more container pods or stop some as needed
using its replication controller feature.
• Extensibility
This is the ability of a tool to allow an extension of its
capacity/capabilities without serious infrastructure changes. Users
can freely extend and add services. This means users can easily
add their own features such as security updates, conduct server
hardening or other custom features.
• Portability
In its broadest sense, this means, the ability of an application to
be moved from one machine to the other. This means package can
run anywhere. Additionally, you could be running your application on
google cloud computer and later along the way get interested in
using IBM watson services or you use a cluster of raspberry PI in
your backyard. The application-centric nature of Kubernetes allows
you to package your app once and enjoy seamless migration from
one platform to the other.
• Self-healing
Kubernetes offers application resilience through operations it initiates,
such as auto-restart (useful when an app crashes), auto-replication of
containers, and automatic scaling depending on traffic. Through
service discovery, Kubernetes can learn the health of an application
process by evaluating the main process and exit codes, among others.
This healing property allows Kubernetes to respond effectively.
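The health evaluation described above can be made explicit with probes. Below is a minimal sketch, assuming a hypothetical pod name, image and health path (none of which come from this book), of a liveness probe that tells Kubernetes when to restart a container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # any image exposing an HTTP endpoint
    livenessProbe:
      httpGet:
        path: /                # hypothetical health-check path
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds
```

If the probe fails repeatedly, the kubelet restarts the container, which is one concrete form of the self-healing behaviour described above.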
• Load balancing
Kubernetes optimizes tasks on demand by making them
available and avoids undue strain on the resources. In the context of
Kubernetes, we have two types of load balancers – internal and
external.
The creation of a load balancer is an asynchronous process;
information about the provisioned load balancer is published in the
Service's status.loadBalancer field.
Traffic coming from the external load balancer is directed at the
backend pods. In most cases, an external load balancer is created with
a user-specified load balancer IP address. If no IP address is specified,
an ephemeral IP will be assigned to the load balancer.
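As a sketch, a Service of type LoadBalancer can request a specific address through the loadBalancerIP field; the name, labels and address below are illustrative, not from this book:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb               # illustrative name
spec:
  type: LoadBalancer
  loadBalancerIP: 78.11.24.19    # optional; omit to get an ephemeral IP
  selector:
    app: example                 # pods backing this load balancer
  ports:
  - port: 80
    targetPort: 8080
```

Once the cloud provider finishes provisioning, the assigned address appears asynchronously under status.loadBalancer.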
• Cluster
These are the nodes or the collection of virtual machines or
bare- metal servers which provide the resources that Kubernetes
uses to run applications.
• Pods
Pods are the smallest units of Kubernetes. A pod can be a single or a
group of containers that work together. Generally, pods are
relatively tightly coupled. A canonical example is pulling and serving
some files as shown in the picture below.
It doesn’t make sense to pull the files if you’re not serving them and it
doesn’t make sense to serve them if you haven’t pulled them.
Application containers in a pod are in an isolated environment
with resource constraints. They all share network space, volumes,
cgroups and Linux namespaces. All containers within a pod share an
IP address and port space, hence they can find each other via
127.0.0.1 (localhost). They can as well communicate with each
other via standard inter-process communications, e.g. SystemV
semaphores/POSIX shared memory. Since they are co-located, they are
always scheduled together.
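The file-puller/web-server pairing described above can be sketched as a single two-container Pod sharing a volume; all names here are illustrative, not from this book:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-puller          # illustrative name
spec:
  volumes:
  - name: shared-data            # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web-server             # serves the pulled files
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: file-puller            # stand-in for the container that pulls content
    image: busybox
    command: ["sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```

Because both containers mount the same volume and share a network namespace, the puller's output is immediately visible to the server, which is exactly why the two belong in one Pod.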
• Labels
Labels are key/value pairs attached to objects such as pods, used to
organize and select subsets of objects.
• Services
A Service is an abstraction which defines a logical set of pods
(selected by labels) and a policy by which to access them.
• Replication Controller
A Replication Controller ensures that a specified number of pod
replicas are running at any one time. It defines pods that are to
be scaled horizontally: a fully defined pod is provided as a template,
to which the desired replica count and selector are added. It is the
responsibility of the Replication Controller to make sure that a pod
or a homogeneous set of pods is always up and available.
COMPONENTS OF KUBERNETES
A Kubernetes cluster is built from the following components.

kubectl
• Each command that you run with kubectl performs an operation on
one or more resources.
• Examples of resource types are pods, services, endpoints, jobs and
nodes.

Etcd
• This is a distributed, highly available key-value store.
• It stores the state of the cluster and acts as the cluster's shared
component/service.
• It ensures the cluster is operating within the desired state.

kube-scheduler
• Handles the scheduling and deployment of pods onto worker nodes.

kube-proxy
• Kube-proxy usually runs on each node in the cluster.
• It watches the master for Service and Endpoint addition/removal and
does load balancing through simple round-robin UDP and TCP stream
forwarding across a set of backend services, without clients knowing
anything about Kubernetes, Services or Pods.
• Kube-proxy is also responsible for implementing a form of virtual IP
for Services of types other than ExternalName.

Kubelet
• This is the primary node agent running on each node in the cluster.
• It gets the configuration of a pod in JSON or YAML format from the
API server and ensures that the containers described in that
configuration are running and in a healthy state.
• It doesn't manage containers which were not created by Kubernetes.

Supervisord
• A client/server system that allows its users to monitor and control a
number of processes.
• It acts as a runtime environment, downloading container images and
running containers.

Table 2: Kubernetes Node Components
Kubernetes Concepts
To fully understand Kubernetes operations, you’ll need a good
foundation on the basics of pods and controllers. We’ll refer to the
diagram below while explaining these concepts.
Pods
In Kubernetes, a Pod is the smallest deployable object. It is the
smallest building unit representing a running process on your
cluster. A Pod can run a single container or multiple containers
that need to run together.
• One container per pod - This is the most common model used
in Kubernetes. In this case, a pod is a wrapper around a single
container. Kubernetes then manages pods instead of directly
interacting with individual containers.
Sidecar containers
From this diagram, the sidecar container pulls updates from git, and
the application container then serves these files on the application server.
Ambassador containers: these proxy a local connection to the outside
world, so the main container can talk to a remote service as if it were local.
Adapter containers: these standardize or normalize the main container's
output, for example for monitoring.
Containers in a Pod access shared volumes and all data in those volumes. In this way, if one container in a Pod is destroyed and restarted, the data in the shared volumes remains available to the Pod's containers.
Networking
Pods Lifetime
Controllers
The major controller component is the ReplicationController, which
works to ensure that a specified number of pod replicas are
running at any one time. It makes sure that a pod or a
homogeneous set of pods is always up and available.
apiVersion: v1
kind: ReplicationController
metadata:
  name: caddy
spec:
  replicas: 4
  selector:
    app: caddy
  template:
    metadata:
      name: caddy
      labels:
        app: caddy
    spec:
      containers:
      - name: caddy
        image: caddy
        ports:
        - containerPort: 80
From the above code snippet, you can see that we have specified that
four copies of the caddy web server be created. The container image to
be used is caddy and the port exposed on the container is 80.
Give it a few seconds to pull the image and create the containers, then
check the status:
$ kubectl describe replicationcontrollers/caddy
Pods Status: 4 Running / 0 Waiting / 0 Succeeded / 0 Failed
If you would like to list all the pods that belong to the
ReplicationController in a machine readable form, run:
$ pods=$(kubectl get pods --selector=app=caddy --output=jsonpath={.items..metadata.name})
$ echo $pods
Rescheduling
Scaling
You can easily scale the number of replicas up and down using the
auto-scaling control agent or through a manual process. The only
change required is to the number of replicas. Please note that
Horizontal Pod Autoscaling does not apply to objects that can't be
scaled, for example DaemonSets.
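Both approaches can be sketched with kubectl, using the caddy ReplicationController from the earlier example (the replica counts and thresholds here are arbitrary):

```shell
# Manual scaling: set the ReplicationController to 6 replicas
kubectl scale rc caddy --replicas=6

# Or let the Horizontal Pod Autoscaler manage the count
kubectl autoscale rc caddy --min=2 --max=10 --cpu-percent=80
```

The first command changes the replica count once; the second installs an autoscaler that keeps adjusting it between the given bounds based on CPU usage.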
Rolling updates
In our Lab, we'll set up one Kubernetes master and two Kubernetes
nodes. This Lab is done on VirtualBox. The three virtual machines will
be created using Vagrant. Vagrant is a software application available
on Windows, Linux and Mac which allows you to easily build and
maintain portable virtual software development environments.
Prerequisites:
1. Install VirtualBox
2. Install Vagrant
3. Spin up three VMs
Install VirtualBox:
# echo "deb http://download.virtualbox.org/virtualbox/debian xenial contrib" > /etc/apt/sources.list.d/virtualbox.list
# wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
# wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
Install Vagrant
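The install command itself is not shown in the original; on a Debian/Ubuntu host it would typically be installed from the distribution repositories:

```shell
# Install Vagrant (version provided by the distribution)
sudo apt-get update
sudo apt-get install vagrant
```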
$ mkdir kubernetes_lab
$ cd kubernetes_lab
$ vim Vagrantfile
If you don’t have ssh key, you can generate using command below:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jmutai/.ssh/id_rsa): id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_rsa.
Your public key has been saved in id_rsa.pub.
The key fingerprint is:
SHA256:8F2ObfrwvIa4/n3oCHjnx5FgEsxVH/MJP1pf17mgt4 jmutai@dev.jmutai.com
The key's randomart image is:
+---[RSA 2048]----+
|       ...       |
+----[SHA256]-----+
The key pair is saved in the current directory. Now that you have the
ssh keys that we'll use to ssh to the VMs, it is time to
write the Vagrantfile used to automatically bring the three VMs up.
A Vagrantfile uses Ruby syntax to define its parameters. Below is a
sample Vagrantfile used for this Lab.
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know
# what you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.define "kubernetes-master" do |web|
    web.vm.network "public_network", ip: "192.168.60.2"
    web.vm.hostname = "kubernetes-master"
  end

  config.vm.define "kubernetes-node-01" do |web|
    web.vm.network "public_network", ip: "192.168.60.3"
    web.vm.hostname = "kubernetes-node-01"
  end

  config.vm.define "kubernetes-node-02" do |web|
    web.vm.network "public_network", ip: "192.168.60.4"
    web.vm.hostname = "kubernetes-node-02"
  end
end
Once you have the file saved as Vagrantfile, create the virtual
machines from it. Note that you need to be in the
same directory as the Vagrantfile before running the command shown
below:

$ vagrant up
==> kubernetes-master: Setting the name of the VM: kubernetes_lab_kubernetes-master_1509819157682_272
==> kubernetes-master: Clearing any previously set network interfaces...
==> kubernetes-master: Preparing network interfaces based on configuration...
    kubernetes-master: Adapter 1: nat
    kubernetes-master: Adapter 2: bridged
==> kubernetes-master: Forwarding ports...
    kubernetes-master: 22 (guest) => 2222 (host) (adapter 1)
==> kubernetes-master: Running 'pre-boot' VM customizations...
==> kubernetes-master: Booting VM...
==> kubernetes-master: Waiting for machine to boot. This may take a few minutes...
    kubernetes-master: SSH address: 127.0.0.1:2222
    kubernetes-master: SSH username: ubuntu
    kubernetes-master: SSH auth method: password
    kubernetes-master: Inserting generated public key within guest...
    kubernetes-master: Removing insecure key from the guest if it's present...
    kubernetes-master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kubernetes-master: Machine booted and ready!
==> kubernetes-master: Checking for guest additions in VM...
    kubernetes-master: The guest additions on this VM do not match
    kubernetes-master: the installed version of VirtualBox! In most cases
    kubernetes-master: this is fine, but in rare cases it can prevent things
    kubernetes-master: such as shared folders from working properly. If you see
    kubernetes-master: shared folder errors, please make sure the guest additions
    kubernetes-master: within the virtual machine match the version of
    kubernetes-master: VirtualBox you have installed on your host and reload your VM.
...
    kubernetes-node-02: The guest additions on this VM do not match
    kubernetes-node-02: the installed version of VirtualBox! In most cases
    kubernetes-node-02: this is fine, but in rare cases it can prevent things
    kubernetes-node-02: such as shared folders from working properly. If you see
    kubernetes-node-02: shared folder errors, please make sure the guest additions
    kubernetes-node-02: within the virtual machine match the version of
    kubernetes-node-02: VirtualBox you have installed on your host and reload your VM.
$ vagrant status
Current machine states:

kubernetes-master          running (virtualbox)
kubernetes-node-01         running (virtualbox)
kubernetes-node-02         running (virtualbox)
Now ssh to the Kubernetes master node and update the apt cache, then
do a system upgrade:

$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
$ docker version
Client:
 Version:      17.10.0-ce
 API version:  1.33
 Go version:   go1.8.3
 Git commit:   f4ffd25
 Built:        Tue Oct 17 19:04:16 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.10.0-ce
 API version:  1.33 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   f4ffd25
 Built:        Tue Oct 17 19:02:56 2017
 OS/Arch:      linux/amd64
 Experimental: false
Install dependencies:

apt-get install -y apt-transport-https
Once you have all the master components installed, the next step
is to initialize the cluster on the master node. The master is the machine
where the "control plane" components run, including etcd (the cluster
database) and the API server (which the kubectl CLI communicates
with). All of these components run in pods started by kubelet.
Initialize the cluster with kubeadm:

$ sudo kubeadm init

This will download and install the cluster database and "control
plane" components. This may take several minutes depending on
your internet connection speed. The output from this command will
give you the exact command you need to join the nodes to the
master, so take note of it:
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

You can now join any number of machines by running the following
on each node as root:

  kubeadm join --token a45a5e.35d62f5e3d6efe7e <master-ip>:6443 --discovery-token-ca-cert-hash sha256:27f85c76cdb4082d...c0b903824a90bf8b3470ef1f7bd34f94892
There are two ways to install etcd on Ubuntu – one is building it
from source, the other is using a pre-built binary available for
download. If you're interested in getting the latest release, consider
building it from source.
To build it from source, you'll need git and golang installed, then run:

$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
Installing Addons
Here I'll show a number of plugins that you can install to extend
Kubernetes functionalities. This list is not exhaustive; feel free to add
what you feel might help.
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

For further reading about Flannel, refer to https://github.com/coreos/flannel
Deploy CoreDNS
2. Verify Installation
# go version
go version go1.6.2 linux/amd64

mkdir ~/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$PATH

apt-get install wget
wget https://github.com/coredns/coredns/releases/download/v0.9.10/coredns_0.9.10_linux_amd64.tgz
tar xzvf coredns_0.9.10_linux_amd64.tgz
cp coredns /usr/local/bin/

Test that the binary is copied and working by checking the coredns version:

# /usr/local/bin/coredns -version
CoreDNS-0.9.10
For more information about CoreDNS, visit official project page on github:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Before you can start using the Kubernetes Dashboard, you'll need to
configure the proxy server using the kubectl command. This will set a
URL which you can use to access the dashboard. Run the command
below to get the proxy configured:
$ kubectl proxy
# docker pull phusion/passenger-ruby24
Using default tag: latest
latest: Pulling from phusion/passenger-ruby24
Digest: sha256:99ce976df0a32556d24f03a03235220b170ba48a157dd097dd1379299370e1ed...

# docker images
REPOSITORY                  TAG      IMAGE ID       CREATED        SIZE
phusion/passenger-ruby24    latest   c3f873600e95   5 months ago   640MB
As you can see above, the image has been downloaded successfully.
We’ll use this image in the next section.
Testing:
Now that we have a ready Kubernetes cluster, let's create a simple pod
on it. As an example, consider the simple Pod template below,
describing a Pod with a container that prints a message. The Pod
configuration file is defined using YAML syntax:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testapp-pod
  labels:
    app: testapp
spec:
  containers:
  - name: testapp-container
    # image and command were truncated in the original listing;
    # a busybox echo is a reasonable stand-in
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes!']
$ kubectl get pods
NAME          READY   STATUS              RESTARTS   AGE
testapp-pod   0/1     ContainerCreating   0          6s

$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
testapp-pod   1/1     Running   0          6s
5. Downloading Kubernetes Docker Images
Install a Hypervisor:
Download VirtualBox:
$ curl -L -O http://download.virtualbox.org/virtualbox/virtualbox-5.2_5.2.0-118431~Ubuntu~xenial_amd64.deb
Install Kubectl:
Kubectl is the command line utility which interacts with the API
Server of Kubernetes.
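The download step for the kubectl binary is not shown in the original; at the time this book was written it was typically fetched from the Kubernetes release bucket:

```shell
# Download the latest stable kubectl binary for Linux
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
```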
Make binary executable:
$ chmod +x ./kubectl
https://github.com/kubernetes/kubernetes.
Basic commands (Beginner):

create    Create a resource by filename or stdin
expose    Expose a replication controller, service, deployment or pod as a new Kubernetes Service
run       Run a particular image on the cluster
set       Set specific features on objects
Install Minikube:
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.22.3/minikube-linux-amd64 && chmod +x minikube
Verify Installation:
Available commands for minikube:

addons             Modify minikube's kubernetes addons
completion (bash)  Outputs minikube shell completion for the given shell
config             Modify minikube config
dashboard          Opens/displays the kubernetes dashboard URL for your cluster
docker-env         Sets up docker env variables; similar to '$(docker-machine env)'
ip                 Retrieves the IP address of the running cluster
The message indicates that the Kubernetes cluster has been started with
Minikube. Docker should be running on the host machine; Minikube will
use the default container engine (Docker here) to run the app.
Create Kubernetes Cluster through Minikube:
$ minikube start
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
It will list the clusters. You should get the result like:
$ kubectl config get-clusters
NAME
minikube
app/
|-- server.js
|-- package.json

server.js:

app.get('/', function (req, res) {
  res.send('Hello World!');
});
Package.json:
{ "nvearmsieo"n: "":h"e1l.l0o.-0w",orld",
st"adretsecrr/ihpetliloon-"w: o"Hrledl.lhotmwol"r,ld app taken from:
https://expressjs.com/en/ "smcariipnt"s:"":s{erver.js",
},"stetasrt"t:":"","
"retyppoes"i:to"gryit"":, {
},"url": "git+https://github.com/borderguru/hello-world.git"
"aiuctehnosre"":: """I,SC",
"b"uurgls"": :"{https://github.com/borderguru/hello-
world/issues"
}, omepage": "https://github.com/borderguru/hello-world#readme",
"dcehpaein":d"e^n4c.i1e.2s"",: {
"emxopcrhesas"": :""^^44.0.1.15".,3",
} "request": "^2.83.0"
}
To Dockerize the app, we need to create the
Dockerfile: Dockerfile:
#WCOrReKatDeIaRp/pudsri/rsercct/oarp
NreOqDuEir_eEdNeVnvpirroodnumcetin
otnvariables
# [email protected],pcaocpkyagpea-
To create the image from the Dockerfile, the docker CLI is used. Now
the app structure is:

app/
|-- server.js
|-- package.json
|-- Dockerfile
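The build command itself does not appear in the original; it would typically be run from the app directory as:

```shell
# Build the image from the Dockerfile and tag it as helloworld:1.0
sudo docker build -t helloworld:1.0 .
```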
It will build the image and tag it with helloworld:1.0, where 1.0 is the
version of the image. If no version is specified, the latest tag will be
used. The build will download all dependencies needed to run the app.
After a successful build, check the images:
sudo docker images
REPOSITORY   TAG     IMAGE ID       CREATED              SIZE
helloworld   1.0     c812ebca7a95   About a minute ago   678 MB
node         boron   c0cea7b613ca   11 days ago          661 MB
To run the container, the docker run command is used. It's necessary to
bind the node port to the container port.

sudo docker run -it -p 3000:3000 helloworld:1.0
Tag the image as latest to indicate that this will be the most recent
version of the image. Dockerhub is the central registry for storing
docker images; there are also many other registries available,
like JFrog, Quay and Amazon ECR.
Login to dockerhub:
(Notice: if you don't know Dockerhub, please visit https://hub.docker.com and create an account)

$ sudo docker login
Username (kubejack): kubejack
Password:
Login Succeeded

$ sudo docker tag helloworld:latest kubejack/helloworld:latest
$ sudo docker push kubejack/helloworld:latest
The push refers to a repository [docker.io/kubejack/helloworld]
50c97add83ebfe6: Pushed
9e47d15f58a2e6a44a9b8594: Pushed
574c9e696eb49c979c19747e: Pushed
c73d0f23510ce1b319aea5c: Mounted from library/node
ed97521aa06331b0e7e4: Mounted from library/node
d5d6640efdcc3e443b059b: Mounted from library/node
latest: digest: sha256:952fff7e89547ae15763...f72a1f49901c2bcde74250 size: 2838
The app is successfully containerized now and the images are pushed to
Dockerhub.
Writing Kubernetes Manifest Files for Sample App:
For now, the sample app is containerized, but to run the app
on Kubernetes we will need Kubernetes objects.
mkdir kubernetes
cd kubernetes
touch deployment.yml
touch service.yml

app/
|-- server.js
|-- package.json
|-- Dockerfile
|-- kubernetes/
    |-- deployment.yml
    |-- service.yml
Kubernetes manifests are plain yaml files which will define the desired
state of the cluster.
deployment.yml
akpiniVde: rDseiopnlo:
yemxtentsions/v1beta1me
natmadea: thae:
llo-worldla :
abpepl:shello-worldver:
specv: 1r p
as 10
selelcictor:matc Labe s:
aphp: hello-
world
ver: v1
temmeptaladtaet:a:
laabpepl:s:hello-world
ver
spc: oevc1:tainers:
- inmnaamgee:: hkeulbloe-jwacokr/lhdelloworld:latest
containerPort: 3000
deployment.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
imagePullPolicy indicates when the image should be pulled; setting it
to "Always" makes sure the updated image is pulled each time.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
  labels:
    name: hello-world-svc
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: hello-world
    ver: v1
In the spec, it's possible to expose a port of the node. The exposed
port will be mapped to the container port. Here targetPort is the
port of the container, which is mapped to node port 80. TCP and
UDP protocols are supported.
The most important part is the selector. It selects the group of pods
whose labels match all of the selector's labels. A pod may carry more
labels than those specified in the selector and still match; labels
present on the pod but absent from the selector are simply ignored.
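Label selection can also be exercised directly from the command line, for example using the labels from the manifests above:

```shell
# List only the pods carrying both labels the service selects on
kubectl get pods -l app=hello-world,ver=v1
```

Pods with extra labels still appear, which mirrors how the Service's selector matches them.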
Labels and Selectors are good ways to maintain version and to
rollback and rollout the updates.
Understanding the Kubectl Utility
Everything can be done using the API Server alone. There are different
options for calling its REST APIs: the Kubernetes user interface, and
kubectl.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/ut/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
Kubectl can:
Create the Kubernetes Object:
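Assuming the manifests written in the previous section, the create step can be sketched as:

```shell
# Create the deployment and the service from their manifests
kubectl create -f deployment.yml
kubectl create -f service.yml
```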
These are a few examples of the Kubectl utility. There are
more advanced use cases, like scaling with kubectl. All of these are
imperative methods.
Get deployment:
$ kubectl get rs
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-world-493621601   10        10        10           10          1h

$ kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
hello-world-493621601-xvk08   1/1     Running   0          1h
hello-world-493621601-h6hds   1/1     Running   0          1h
hello-world-493621601-ztp32   1/1     Running   0          1h
...
Again, a further hash is appended to the replica set name for each pod.
The status indicates all pods are in the Running state, which means
the desired state of the cluster is met.
Get the service object and describe it:
Describe service:
Name:              hello-world-svc
Namespace:         default
Labels:            name=hello-world-svc
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"hello-world-svc"},"name":"hello-world-svc","namespace":"default"},...}
Selector:          app=hello-world,ver=v1
Type:              LoadBalancer
IP:                10.0.0.131
Port:              <unset>  80/TCP
Endpoints:         172.17.0.10:3000,172.17.0.12:3000,172.17.0.2:3000 + 7 more...
Session Affinity:  None
Events:            <none>
Endpoints are the unique IPs of the Pods. The Service groups all the
Pods via these IPs, playing the role of a proxy.
To access the app, it's necessary to get the IP and port from
Minikube.
$minikube service hello-world-svc
Scaling up:
deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 20
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
Update the deployment with:

$ kubectl apply -f deployment.yml

$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
hello-world-493621601-1g76s   1/1     Running   0          2h
hello-world-493621601-2dq3c   1/1     Running   0          2h
hello-world-493621601-g5slf   1/1     Running   0          2h
hello-world-493621601-j3ejl   1/1     Running   0          38s
hello-world-493621601-j3rlj   1/1     Running   0          38s
hello-world-493621601-kgj7z   1/1     Running   0          38s
hello-world-493621601-lqvk4   1/1     Running   0          38s
hello-world-493621601-mrktj   1/1     Running   0          38s
hello-world-493621601-nfd5d   1/1     Running   0          38s
hello-world-493621601-pjbdn   1/1     Running   0          38s
hello-world-493621601-q6xlg   1/1     Running   0          38s
hello-world-493621601-rb7gr   1/1     Running   0          38s
hello-world-493621601-s5qp9   1/1     Running   0          2h
hello-world-493621601-sh7lv   1/1     Running   0          2h
hello-world-493621601-v086d   1/1     Running   0          38s
hello-world-493621601-xkhhs   1/1     Running   0          2h
hello-world-493621601-ztp32   1/1     Running   0          2h
...
deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
$ kubectl get pods
NAME                          READY   STATUS        RESTARTS   AGE
hello-world-493621601-cwm1j   1/1     Terminating   0          2m
hello-world-493621601-g5slf   1/1     Terminating   0          2h
hello-world-493621601-j3rlj   1/1     Terminating   0          2h
hello-world-493621601-kgj7z   1/1     Terminating   0          2m
hello-world-493621601-lqvk4   1/1     Terminating   0          2m
hello-world-493621601-mrktj   1/1     Terminating   0          2m
hello-world-493621601-nfd5d   1/1     Terminating   0          2m
hello-world-493621601-pjbdn   1/1     Terminating   0          2m
hello-world-493621601-q6xlg   1/1     Terminating   0          2m
hello-world-493621601-rb7gr   1/1     Terminating   0          2m
hello-world-493621601-s5qp9   1/1     Running       0          2h
hello-world-493621601-sh7lv   1/1     Running       0          2h
hello-world-493621601-v086d   1/1     Terminating   0          2m
hello-world-493621601-xkhhs   1/1     Terminating   0          2h
hello-world-493621601-ztp32   1/1     Running       0          2h
...
$ kubectl delete -f deployment.yml
deployment "hello-world" deleted
$ kubectl delete -f service.yml
service "hello-world-svc" deleted
$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
6. Deploying Kubernetes with Ansible
Execute the commands below to install the latest Ansible on Debian-based
distributions:

$ sudo apt-get update
$ sudo apt-get install ansible
Your machine's ssh key must be copied to all the servers in
your inventory. The firewalls should not be managed, and the
target servers must have access to the Internet.
To set up SSH authentication between machines, on the source machine:
Step 1: Generate Keys

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user4/.ssh/id_rsa):
Created directory '/home/user4/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user4/.ssh/id_rsa.
Your public key has been saved in /home/user4/.ssh/id_rsa.pub.
The key fingerprint is:
ad:1e:14:a5:cd:77:25:29:9f:75:ee:4f:a4:8f:f5:65 user4@server1
The key's randomart image is:
+--[ RSA 2048]----+
|       ...       |
+-----------------+
Note that your key pair is the id_rsa and id_rsa.pub files in the
directories shown. id_rsa is the private key, which resides on the
source machine; id_rsa.pub is the public key, which resides on the
destination machine. When an SSH attempt is made from source to
destination, the protocol checks both keys. If they match, the
connection will be established without asking for a password.
Step 2: Copy Keys
If you are copying from the source machine using ssh, use:

$ ssh-copy-id <user>@<node-ip>
Repeat the copying steps for each Node of the Kubernetes Cluster.
Step 3: Test the Connection
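A quick way to verify, sketched here with a hypothetical user and one of the node addresses from the inventory below:

```shell
# Should log in without prompting for a password
ssh user4@192.168.0.144
```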
You can also use kubespray without the CLI by directly cloning its
git repository. We will use the CLI here. Execute the step below to
install kubespray:

$ sudo pip install kubespray

$ kubespray -v

$ vi ~/.kubespray/inventory/inventory.cfg
machine-01 ansible_ssh_host=192.168.0.144 http_proxy=http://proxy:8080
machine-02 ansible_ssh_host=192.168.0.145 http_proxy=http://proxy:8080
machine-03 ansible_ssh_host=192.168.0.146 http_proxy=http://proxy:8080

[kube-master]
machine-01
machine-02

[etcd]
machine-01
machine-02
machine-03

[kube-node]
machine-02
machine-03

[k8s-cluster:children]
kube-node
kube-master
Here are the 3 nodes of the cluster with their proxy settings. Let's start
the cluster deployment.
$ kubespray deploy
PLAY RECAP *********************************************************************
192.168.0.144              : ok=278  changed=89   unreachable=0  failed=0
192.168.0.145              : ok=...  changed=...  unreachable=0  failed=0
192.168.0.146              : ok=...  changed=...  unreachable=0  failed=0
localhost                  : ok=...  changed=...  unreachable=0  failed=1
$ kubectl get nodes
NAME         STATUS    AGE
machine-02   Ready     4m
machine-03   Ready     4m
List pods in all namespaces by executing below command.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   dnsmasq-...-d7ks3n                   1/1     Running   0          5m
kube-system   dnsmasq-...-d5vfh0j                  1/1     Running   0          4m
kube-system   kube-apiserver-machine-01            1/1     Running   0          5m
kube-system   kube-controller-manager-machine-01   1/1     Running   0          5m
kube-system   kube-proxy-machine-01                1/1     Running   0          4m
kube-system   kube-proxy-machine-02                1/1     Running   0          4m
kube-system   kube-proxy-machine-03                1/1     Running   0          5m
kube-system   kube-scheduler-machine-02            1/1     Running   0          5m
kube-system   kubedns-...-p8mkd7                   3/3     Running   0          4m
kube-system   nginx-proxy-machine-02               1/1     Running   0          2m
kube-system   nginx-proxy-machine-03               1/1     Running   0          2m
Requesting storage
Using Claim as a Volume
Kubernetes and NFS
Kubernetes and iSCSI
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre
Channel)
FlexVolume
Flocker
NFS
iSCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
VMware Photon
Portworx Volumes
ScaleIO Volumes
StorageOS
Kubernetes Persistent Volumes
1. PersistentVolume (PV)
2. PersistentVolumeClaim (PVC)
1. Provisioning:
There are two methods of provisioning Persistent Volumes: static and
dynamic. In the static method, the administrator creates the PVs in
advance. If no static PV matches a user's PVC (Persistent Volume
Claim), the dynamic method is used: the cluster will try
to generate the PV dynamically for the PVC.
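Dynamic provisioning is driven by StorageClass objects: a PVC that names a storage class can have its PV generated on demand. A minimal sketch follows; the class name, provisioner and parameters here are illustrative assumptions, not taken from the book, and depend on your cloud or storage backend:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow                        # illustrative name; a PVC selects it via storageClassName
provisioner: kubernetes.io/gce-pd   # example provisioner; varies by environment
parameters:
  type: pd-standard                 # provisioner-specific parameter (assumed)
```

A PVC whose `storageClassName: slow` would then trigger the cluster to create a matching PV automatically.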
2. Binding:
A control loop on the master watches for new PVCs and binds a
matching PV to each PVC. If no PV matching a PVC is found, the
PVC will remain unbound.
3. Using:
Pods use the Claim as the Volume. Once the PV matches with
required PVC, the cluster inspects the claim to find the bound
volume and mounts the volume for a pod.
4. Reclaiming:
The reclaim policy decides what happens to a PersistentVolume
once it has been released. Currently, volumes can be Retained,
Recycled or Deleted.
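The reclaim policy is a field on the PV itself. A minimal fragment as a sketch (the PV name and the hostPath backing store here are illustrative assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                          # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain     # or Recycle / Delete
  hostPath:
    path: /tmp/pv-example                   # illustrative backing store
```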
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
• RWO - ReadWriteOnce
• ROX - ReadOnlyMany
• RWX - ReadWriteMany
Pods are ephemeral but still require storage. A pod uses a claim as
a volume; the claim must be in the same namespace as the pod. The
PersistentVolume backing the claim is then used for the pod: the
volume is mounted on the host and then into the pod.
kind: Pod
apiVersion: v1
metadata:
  name: production-pv
spec:
  containers:
    - name: frontend
      image: dockerfile/nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: storage-pv
PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.244.1.4
    path: "/"
PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
Kubernetes and iSCSI
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-rw
    iscsi:
      lun: 0
      fsType: ext4
      readOnly: true
8. Troubleshooting Kubernetes and Systemd Services
• Kubernetes Troubleshooting Commands
• Networking Constraints
• Inspecting and Debugging Kubernetes
• Querying the State of Kubernetes
• Checking Kubernetes yaml or json Files
• Deleting Kubernetes Components
kube-addon-manager-minikube   1/1       Running   0          35m
kube-dns-910330662-dp7xt      3/3       Running   0          35m
kubernetes-dashboard-1pc46    1/1       Running   0          35m
It will give the status of each pod. Mentioning the namespace is
always a best practice: if you don't mention the namespace, the pods
from the default namespace are listed.
$ kubectl describe pods kube-dns-910330662-dp7xt --namespace kube-system
Name:           kube-dns-910330662-dp7xt
Namespace:      kube-system
Node:           minikube/192.168.99.100
Start Time:     Sun, 12 Nov 2017 17:15:19 +0530
Labels:         k8s-app=kube-dns
                pod-template-hash=910330662
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-910330662","uid":"f5509e26-c79e-11e7-ba6a-08002764...
                scheduler.alpha.kubernetes.io/critical-pod=
Status:         Running
IP:             172.17.0.3
Created By:     ReplicaSet/kube-dns-910330662
Controlled By:  ReplicaSet/kube-dns-910330662
Containers:
  kubedns:
    Container ID:   docker://5dc4ce8197465d2eb2b5156ec2a4e2a7f385d700eb96c91874137495b9d715ddf...
    Image:          gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
    Image ID:       ...717d0a47a444581b5bcabc4757bcd79
    Ports:          10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-map=kube-dns
      --v=2
    State:          Running
      Started:      Sun, 12 Nov 2017 17:15:20 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vdlgs (ro)
  dnsmasq:
    Container ID:   docker://8fd3b9fa37c931a8074e566875e04b66055b1a96dcb4f192ac...
    Image:          gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
    Image ID:       docker://sha256:f7f45b9cb733af946532240cf7e6cde1278b687cd7094cf043b768c800cfd5a3
    Ports:          53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:          Running
      Started:      Sun, 12 Nov 2017 17:15:20 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     150m
      memory:  20Mi
    Liveness:  http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vdlgs (ro)
  sidecar:
    Container ID:   docker://38bac66034a6217abfd44b4a8a763b1a4c97304...
    Image:          gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
    Image ID:       docker://sha256:38bac66034a6217abfd44b4a8a763b1a4c97304cae2763f2...
    Port:           10054/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
    State:          Running
      Started:      Sun, 12 Nov 2017 17:15:19 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vdlgs (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  default-token-vdlgs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vdlgs
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath             Type    Reason                 Message
  40m        40m       1      kubelet, minikube                            Normal  SuccessfulMountVolume  MountVolume.SetUp succeeded ...
  40m        40m       1      kubelet, minikube  spec.containers{sidecar}  Normal  Pulled                 Container image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4" already present on machine
  40m        40m       1      kubelet, minikube  spec.containers{kubedns}  Normal  Pulled                 Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4" already present on machine
  40m        40m       1      kubelet, minikube  spec.containers{kubedns}  Normal  Created                Created container
  40m        40m       1      kubelet, minikube  spec.containers{dnsmasq}  Normal  Pulled                 ...
If a node fails, its STATUS will be NotReady.
Valid resource types include:
* all
* certificatesigningrequests (aka 'csr')
* clusterrolebindings
* clusterroles
* clusters (valid only for federation apiservers)
* componentstatuses (aka 'cs')
* configmaps (aka 'cm')
* controllerrevisions
* cronjobs
* daemonsets (aka 'ds')
* deployments (aka 'deploy')
* endpoints (aka 'ep')
* events (aka 'ev')
* horizontalpodautoscalers (aka 'hpa')
* ingresses (aka 'ing')
* jobs
* limitranges (aka 'limits')
* namespaces (aka 'ns')
* networkpolicies (aka 'netpol')
* nodes (aka 'no')
* persistentvolumeclaims (aka 'pvc')
* persistentvolumes (aka 'pv')
* poddisruptionbudgets (aka 'pdb')
* podpreset
* pods (aka 'po')
* podsecuritypolicies (aka 'psp')
* podtemplates
* replicasets (aka 'rs')
* replicationcontrollers (aka 'rc')
* resourcequotas (aka 'quota')
* secrets
* serviceaccounts (aka 'sa')
* services (aka 'svc')
* statefulsets
* storageclasses
* thirdpartyresources
For example:

$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND           ...   NAMES
...            kubernetes/heapster_influxdb:v0.6                     ...                     k8s_influxdb_influxdb-grafana-..._kube-system_...
...            gcr.io/google_containers/pause-amd64:3.0              "/pause"                k8s_POD_influxdb-grafana-..._kube-system_...
...            gcr.io/google_containers/pause-amd64:3.0              "/pause"                k8s_POD_heapster-..._kube-system_...
...            gcr.io/google_containers/kubernetes-dashboard-amd64   "/dashboard --..."      k8s_kubernetes-dashboard_kubernetes-dashboard-znmh3_kube-system_...
...            gcr.io/google_containers/kube-addon-manager-amd64     "/opt/kube-add..."      k8s_kube-addon-manager_kube-addon-manager-host01_kube-system_...
...            gcr.io/google-containers/localkube-...                "/localkube st..."      minikube
This command returns all the containers running on the nodes and
masters.
Networking Constraints
If the status is Ready for all nodes, all the necessary system
processes are running on the Kubernetes nodes.
Check that the necessary pods are running on the Kubernetes master
and the Kubernetes nodes.
kube-dns-910330662-dp7xt     3/3   Running   0   1h   172.17.0.3   minikube
kubernetes-dashboard-1pc46   1/1   Running   0   1h   172.17.0.2   minikube
Use:
$ docker ps
Example:
# journalctl -l -u kube-apiserver
Querying the State of Kubernetes
The kubectl command line tool is used for querying the state of
Kubernetes. Kubectl is configured to talk to the API server, the key
component of the master. The kubectl utility can retrieve information
about all Kubernetes objects, including Pods, Deployments,
ReplicationControllers, PersistentStorage and so on.
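Once you have captured `kubectl get pods` output, ordinary text tools can surface problem pods. A minimal sketch follows; the sample output is illustrative, captured here in a variable rather than piped from a live cluster:

```shell
# On a live cluster you would pipe:  kubectl get pods --all-namespaces | awk ...
# Here the same output format is held in a sample variable.
sample='NAMESPACE     NAME                         READY     STATUS             RESTARTS   AGE
kube-system   kube-dns-910330662-dp7xt     3/3       Running            0          1h
kube-system   kubernetes-dashboard-1pc46   1/1       CrashLoopBackOff   4          1h'

# Column 4 is STATUS; skip the header row and print the NAME of any
# pod that is not in the Running state.
not_running=$(printf '%s\n' "$sample" | awk 'NR > 1 && $4 != "Running" { print $2 }')
echo "$not_running"
```

The same one-liner works for any `kubectl get` listing whose status sits in a fixed column.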
Status:         Running
IP:             172.17.0.101
Created By:     ReplicationController/influxdb-grafana
Controlled By:  ReplicationController/influxdb-grafana
Containers:
  influxdb:
    Container ID:   docker://d43465cc2b130d736b83f465666c652a71d05a2a169eb72f2369c3d96723726c
    Image:          kubernetes/heapster_influxdb:v0.6
    Image ID:       docker-pullable://kubernetes/heapster_influxdb@sha256:70b34e...
    Port:           <none>
    State:          Running
      Started:      Mon, 13 Nov 2017 14:11:52 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from influxdb-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-3rdpl (ro)
  grafana:
    Container ID:   docker://a49890b86d459b94b841656badd8770ef8565fd5e1220330d4f75ce2b...
    Image:          gcr.io/google_containers/heapster_grafana:v2.6.0-2
    Image ID:       docker-pullable://gcr.io/google_containers/heapster_grafana@sha256:208c98b...
    Port:           <none>
    State:          Running
      Started:      Mon, 13 Nov 2017 14:11:52 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      INFLUXDB_SERVICE_URL:        http://localhost:8086
      GF_AUTH_BASIC_ENABLED:       false
      GF_AUTH_ANONYMOUS_ENABLED:   true
      GF_AUTH_ANONYMOUS_ORG_ROLE:  Admin
      GF_SERVER_ROOT_URL:          /
    Mounts:
      /var from grafana-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-3rdpl (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  influxdb-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  grafana-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-3rdpl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-3rdpl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000

kubectl create -f deployment.yaml
The kubectl delete command is used for deleting Kubernetes
components. In the imperative method there are different ways of
deleting Kubernetes components. For example:

# Delete a pod using the name "foo"
kubectl delete pod foo

# Delete pods and services with the label name=myLabel
kubectl delete pods,services -l name=myLabel
• Monitoring Kubernetes Cluster
• Managing Kubernetes with Dashboard
• Logging Kubernetes Cluster
• Upgrading Kubernetes
cAdvisor:
cAdvisor is an open source container usage and performance analysis
agent. In Kubernetes, cAdvisor is included in the Kubelet binary.
cAdvisor auto-discovers all containers and collects CPU, memory,
network and file system usage statistics.

Kubelet:
Kubelet bridges the gap between the Kubernetes master and the
Kubernetes nodes. It manages the pods and containers on each machine.

InfluxDB and Grafana are used for storing the data and visualizing
it. Google Cloud Monitoring provides a hosted solution for
monitoring a Kubernetes cluster. Heapster can be set up to send its
metrics to Google Cloud Monitoring.
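As a sketch of that setup, pointing Heapster at Google Cloud Monitoring only changes its sink argument; treat the exact flag value here as an assumption to check against your Heapster version's sink documentation:

```yaml
# heapster container command fragment: gcm sink instead of influxdb
- /heapster
- --source=kubernetes:https://kubernetes.default
- --sink=gcm          # Google Cloud Monitoring sink (assumed flag form)
```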
$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- heapster: disabled
- registry: disabled
- registry-creds: disabled
In Minikube, addons help with monitoring, but it is also possible to
add Heapster as a Kubernetes deployment. This is the manual
installation of Heapster, Grafana and InfluxDB.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.0
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

You can get the latest version of Heapster at
https://github.com/kubernetes/heapster/ .
Using Kubectl:

$ kubectl create -f heapster.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana ac-
        # cessible via the kubernetes api-server proxy. On production clusters,
        # we recommend removing these env variables, setup auth for
        # grafana, and expose the grafana service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through
  # an external LoadBalancer or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a random-
  # ly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
Using Kubectl:

$ kubectl describe service monitoring-grafana --namespace kube-system
Name:              monitoring-grafana
Namespace:         kube-system
Labels:            addonmanager.kubernetes.io/mode=Reconcile
                   kubernetes.io/minikube-addons=heapster
                   kubernetes.io/minikube-addons-endpoint=heapster
                   kubernetes.io/name=monitoring-grafana
Annotations:       kubectl.kubernetes.io/last-applied-configura-
                   tion={"apiVersion":"v1","kind":"Service","meta-
                   data":{"annotations":{},"labels":{"addonmanager...
Selector:          addonmanager.kubernetes.io/mode=Reconcile,name=influxGrafana
Type:              NodePort
IP:                10.0.0.62
Port:              <unset>  80/TCP
NodePort:          <unset>  30943/TCP
Endpoints:         172.17.0.9:3000
Session Affinity:  None
Events:            <none>
Prometheus and Datadog are also good tools for monitoring
the Kubernetes cluster.
$ minikube addons enable dashboard
dashboard was successfully enabled

$ minikube dashboard
Opening kubernetes service kube-system/kubernetes-dashboard in
default browser...
It gives brief information about the cluster. The Namespaces section
shows all the available namespaces in the Kubernetes cluster; it's
possible to select all namespaces or a specific one. Depending on the
namespace selected, further tabs show in-depth information about the
Kubernetes objects within it, including the workloads: Deployments,
Replica Sets, Replication Controllers, Daemon Sets, Jobs, Pods and
Stateful Sets.

But the dashboard is not limited to information: it's even possible
to execute a command in a pod, get the logs of a pod, edit the pod and
delete the pod. This is only for pods; for other objects like
deployments, it's possible to scale, edit and delete the deployment.

Discovery and Load Balancing contains the information about
the Ingresses and Services.
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
To access the Web UI,
kubectl proxy
If the username and password are configured and unknown to you then
use,
In the most basic logging, it's possible to write the logs to the
standard output using the Pod specification.
For example:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
counter                       1/1       Running   0          8s
hello-world-493621601-...     1/1       Running   0          19d
hello-world-493621601-...     1/1       Running   0          19d
hello-world-493621601-qkfcx   1/1       Running   0          19d
$ kubectl logs counter
0: Fri Dec 1 16:37:36 UTC 2017
1: Fri Dec 1 16:37:37 UTC 2017
2: Fri Dec 1 16:37:38 UTC 2017
3: Fri Dec 1 16:37:39 UTC 2017
4: Fri Dec 1 16:37:40 UTC 2017
5: Fri Dec 1 16:37:41 UTC 2017
6: Fri Dec 1 16:37:42 UTC 2017
7: Fri Dec 1 16:37:43 UTC 2017
8: Fri Dec 1 16:37:44 UTC 2017
The most important part of node-level logging is log rotation. With
log rotation, Kubernetes ensures the logs will not consume all the
storage space of the nodes.
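The idea behind rotation can be sketched in plain shell: when a log file exceeds a size limit, rename it and start a fresh one, keeping a bounded number of old copies. This is a toy sketch under assumed paths (`/tmp/demo-app.log`) and a 100-byte limit; on real nodes this job is handled by the kubelet and tools like logrotate:

```shell
# Toy size-based log rotation: keep the active log under MAX_BYTES
# by moving it aside and truncating when it grows too large.
LOG=/tmp/demo-app.log
MAX_BYTES=100

rotate_if_needed() {
  local size
  size=$(wc -c < "$LOG")
  if [ "$size" -gt "$MAX_BYTES" ]; then
    mv -f "$LOG" "$LOG.1"   # keep one previous generation
    : > "$LOG"              # recreate the active log, now empty
  fi
}

# Simulate an application writing log lines until rotation triggers.
rm -f "$LOG" "$LOG.1"
: > "$LOG"
for i in $(seq 1 20); do
  echo "line $i: something happened" >> "$LOG"
  rotate_if_needed
done
ls "$LOG" "$LOG.1"
```

After the loop, the active log is always at or under the size limit and one rotated generation sits next to it; real rotation schemes keep several generations and often compress them.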
Cluster Level Logging with Kubernetes:
Kubernetes does not provide native cluster-level logging, but
cluster-level logging is possible with the following approaches.
The most used and recommended method is running a node agent
for log collection and storing the logs in a log storage backend.
kubectl apply -f https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/master/manifests-all.yaml

minikube service kibana

The logging will be enabled and you can check it through the Kibana
dashboard. If you are using Google Kubernetes Engine, Stackdriver is
the default logging option for GKE.
Upgrading Kubernetes

On GCE, a cluster can be upgraded with the upgrade script, for
example:

cluster/gce/upgrade.sh -M v1.0.2
cluster/gce/upgrade.sh release/stable

With kubespray, the upgrade is driven by upgrade-cluster.yml:
- hosts: localhost
  gather_facts: False
  roles:
  - { role: kubespray-defaults }

- hosts: k8s-cluster:etcd:calico-rr
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false
  vars:
    # Need to disable pipelining for bootstrap-os as some systems have
    # requiretty in sudoers set, which makes pipelining fail.
    # bootstrap-os fixes this on these systems, so in later plays it
    # can be enabled.
    ansible_ssh_pipelining: false
  roles:
  - { role: kubespray-defaults }
  - { role: bootstrap-os, tags: bootstrap-os }

- hosts: k8s-cluster:etcd:calico-rr
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  vars:
    ansible_ssh_pipelining: true
  gather_facts: true

- hosts: k8s-cluster:etcd:calico-rr
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  serial: "{{ serial | default('20%') }}"
  roles:
  - { role: kubespray-defaults }
  - role: rkt
    tags: rkt
    when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
  - { role: download, tags: download, skip_downloads: false }
  - { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }

- hosts: etcd
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
  - { role: kubespray-defaults }
  - { role: etcd, tags: etcd, etcd_cluster_setup: true }

- hosts: k8s-cluster
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
  - { role: kubespray-defaults }
  - { role: etcd, tags: etcd, etcd_cluster_setup: false }

# Finally handle worker upgrades, based on given batch size
- hosts: kube-node:!kube-master
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  serial: "{{ serial | default('20%') }}"

- hosts: kube-master[0]
  any_errors_fatal: true
  roles:
  - { role: kubespray-defaults }
  - { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed | default(false)" }

- hosts: kube-master
  any_errors_fatal: true
  roles:
  - { role: kubespray-defaults }
  - { role: kubernetes-apps/network_plugin, tags: network }
  - { role: kubernetes-apps/policy_controller, tags: policy-controller }
  - { role: kubernetes/client, tags: client }

- hosts: calico-rr
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
  - { role: kubespray-defaults }
  - { role: network_plugin/calico/rr, tags: network }

- hosts: k8s-cluster
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
  - { role: kubespray-defaults }
  - { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
  - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }

- hosts: kube-master
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
  - { role: kubespray-defaults }
  - { role: kubernetes-apps, tags: apps }