Foundations of Red Hat Cloud-Native
The contents of this course and all its modules and related materials, including handouts to audience members, are
Copyright © 2022 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to [email protected] or phone toll-free (USA) +1 (866) 626-2994 or +1 (919) 754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, CloudForms,
RHCA, RHCE, RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries
in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United
States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is a trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open
source or commercial project.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks
of OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's
permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the
OpenStack community.
DO100A-K1.22-en-2-r00000000 vii
Document Conventions
This section describes various conventions and practices used throughout all
Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
These describe where to find external documentation relevant to a
subject.
Note
These are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
These provide details of information that is easily missed: configuration
changes that only apply to the current session, or services that need
restarting before an update will apply. Ignoring these admonitions will
not cause data loss, but may cause irritation and frustration.
Warning
These should not be ignored. Ignoring these admonitions will most likely
cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas
to help remove any potentially offensive terms. This is an ongoing process
and requires alignment with the products and services covered in Red Hat
Training courses. Red Hat appreciates your patience during this process.
Introduction
• Microsoft Windows 10
Memory: 8 GB minimum, 16 GB or more recommended
You must have permissions to install additional software on your system. Some hands-on learning
activities in DO100a provide instructions to install the following programs:
• Minikube (Optional)
You might already have these tools installed. If you do not, then wait until the day you start this
course to ensure a consistent course experience.
Important
Hands-on activities also require that you have a personal account on GitHub, a
public, free internet service.
• If you use Bash as the default shell, then your prompt might match the [user@host ~]$
prompt used in the course examples, although different Bash configurations can produce
different results.
• If you use another shell, such as zsh, then your prompt format will differ from the prompt used
in the course examples.
• When performing the exercises, interpret the [user@host ~]$ prompt used in the course as a
representation of your system prompt.
Ubuntu
• When performing the exercises, interpret the [user@host ~]$ prompt used in the course as a
representation of your Ubuntu prompt.
macOS
• When performing the exercises, interpret the [user@host ~]$ prompt used in the course as a
representation of your macOS prompt.
Microsoft Windows
• Windows does not support Bash natively. Instead, you must use PowerShell.
• When performing the exercises, interpret the [user@host ~]$ Bash prompt as a
representation of your Windows PowerShell prompt.
• For some commands, Bash syntax and PowerShell syntax are similar, such as cd or ls. You can
also use the slash character (/) in file system paths.
• For other commands, the course provides help to transform Bash commands into equivalent
PowerShell commands.
• The Windows firewall might ask for additional permissions in certain exercises.
Alternatively, you can type commands in one line on all systems, such as:
Chapter 1. Introducing Containers and Kubernetes
Objectives
After completing this section, you should be able to describe what containers are, explain how
they improve the software life cycle, and identify different container runtimes.
Traditional Applications
Traditional software applications typically depend on other libraries, configuration files, or services
that are provided by the runtime environment. The application runtime environment is a physical
host or virtual machine (VM) and application dependencies are installed as part of the host.
For example, consider a Python application that requires access to a common shared library that
implements the TLS protocol. A system administrator installs the required package that provides
the shared library before installing the Python application.
The major drawback to a traditional deployment is that the application's dependencies are mixed
with the runtime environment. Because of this, an application might break when any updates or
patches are applied to the base operating system (OS).
For example, an update to the TLS shared library removes TLS 1.0 as a supported protocol.
Updating the library breaks a Python application that is strictly dependent on TLS 1.0. The system
administrator must downgrade the library to keep the application running, but this prevents other
applications from using the benefits of the updated package.
To alleviate potential breakages, a company might maintain a full test suite to guarantee OS
updates do not affect applications.
Containerized Applications
Deploying applications using containers is an alternative to the traditional methods. A container is
a set of one or more processes that are isolated from the rest of the system.
Containers provide many of the same benefits as virtual machines, such as security, storage,
and network isolation. Containers require fewer hardware resources and are quick to start and
terminate. They also isolate the libraries and the runtime resources, such as CPU and storage, and
minimize the impact of OS updates.
Beyond improving efficiency, elasticity, and reusability of hosted applications, container usage
improves application portability. The Open Container Initiative (OCI) provides a set of industry
standards that define a container runtime specification and a container image specification. The
image specification defines the format for the bundle of files and metadata that form a container
image. When you build a container image compliant with the OCI standard, you can use any OCI-
compliant container engine to execute the contained application.
There are many container engines available to manage and execute containers, including Rocket,
Drawbridge, LXC, Docker, and Podman.
Environment isolation
Containers work in a closed environment where changes made to the host OS or other
applications do not affect the container. Because the libraries needed by a container are self-
contained, the application can run without disruption. For example, each application can exist
in its own container with its own set of libraries. An update made to one container does not
affect other containers.
Quick deployment
Containers deploy quickly because there is no need to install the entire underlying operating
system. Normally, to support isolation, a host requires a new OS installation, and any update
might require a full OS restart. A container restart does not require stopping any services on
the host OS.
Reusability
The same container can be reused without the need to set up a full OS. For example, the
same database container that provides a production database service can be used by each
developer to create a development database during application development. By using
containers, there is no longer a need to maintain separate production and development
database servers. A single container image is used to create instances of the database
service.
Often, a software application with all of its dependent services (databases, messaging, file
systems) is made to run in a single container. This can lead to the same problems associated
with traditional software deployments to virtual machines or physical hosts. In these instances, a
multicontainer deployment might be more suitable.
Furthermore, containers are an ideal approach when using microservices for application
development. Each service is encapsulated in a lightweight and reliable container environment.
In contrast, many applications are not well suited for a containerized environment. For example,
applications that access low-level hardware information, such as memory, file systems, and
devices, may be unreliable due to container limitations.
References
Home - Open Containers Initiative
https://www.opencontainers.org/
Objectives
After completing this section, you should be able to recognize Kubernetes as a container
orchestration tool.
Limitations of Containers
Containers provide an easy way to package and run services. As the number of containers
managed by an organization grows, the manual work of managing them grows disproportionately.
When using containers in a production environment, enterprises often require the following
capabilities:
Kubernetes Overview
Kubernetes is a container orchestration platform that simplifies the deployment, management,
and scaling of containerized applications.
A pod is the smallest manageable unit in Kubernetes, and consists of at least one container.
Kubernetes uses pods to manage the containers within them, and their resource limits, as a single
unit.
Kubernetes Features
Kubernetes offers the following features on top of a container engine:
Horizontal scaling
Applications can scale up and down manually or automatically with a configuration set, by
using either the command-line interface or the web UI.
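As a sketch of the command-line interface approach (the deployment name myapp is a hypothetical example, not part of the course materials):

```shell
# Scale the hypothetical "myapp" deployment to 5 replicas manually
kubectl scale deployment myapp --replicas=5

# Or let Kubernetes scale automatically between 2 and 10 replicas
# based on CPU usage (requires a metrics server in the cluster)
kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
```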
Self-healing
Kubernetes can use user-defined health checks to monitor containers, restarting and
rescheduling them in case of failure.
Automated rollout
Kubernetes can gradually release updates to your application's containers while checking their
status. If something goes wrong during the rollout, Kubernetes can roll back to the previous
version of the application.
Operators
Operators are packaged Kubernetes applications that bring the knowledge of application
lifecycles into the Kubernetes cluster. Applications packaged as Operators use the Kubernetes
API to update the cluster's state by reacting to changes in the application state.
References
Production-Grade Container Orchestration - Kubernetes
https://kubernetes.io/
Summary
In this chapter, you learned:
• Applications running in containers are decoupled from the host operating system's libraries.
• Among other features, container orchestration platforms provide tooling to automate the
deployment and management of application containers.
Chapter 2. Running Containerized Applications

Goal: Spin up your first application in Kubernetes.
Objectives
After completing this section, you should be able to describe the differences between several
Kubernetes implementations, and prepare different Kubernetes distributions for this course.
Kubernetes Distributions
Kubernetes has historically been a general solution for container management and orchestration.
With this versatility, Kubernetes can solve the same problems in different ways depending
on needs and opinions. Because of this, Kubernetes has evolved into different opinionated
distributions based on:
• The target size of the cluster: From small single-node clusters to large-scale clusters of
hundreds or thousands of nodes.
• The location of the nodes: Either locally on the developer workstation, on premises (such as a
private data center), in the cloud, or a hybrid of these.
The following table shows a classification for some of the most popular Kubernetes distributions:
Note
This course supports minikube (version 1.20.0) for local development and
Developer Sandbox for remote development. Instructions and exercises have
been tested in the following operating systems:
Visit the links in the References section for a comprehensive list of Kubernetes certified
distributions.
Kubernetes Extensions
Kubernetes is highly extensible, allowing more services to be added to the platform. Each
distribution provides different approaches (or none) for adding capabilities to Kubernetes:
DNS
DNS allows internal name resolution inside the cluster, so pods and services can refer to others by
using a fixed name.
Both minikube and OpenShift include a CoreDNS controller that provides this feature.
Dashboard
The dashboard provides a graphical user interface to Kubernetes.
minikube provides an add-on and utility commands for using the general-purpose Dashboard
open source application. OpenShift includes the Console, a dedicated application that integrates
most of the Kubernetes extensions provided by OpenShift.
Ingress
The ingress extension allows traffic to get into the cluster network, redirecting requests from
managed domains to services and pods. Ingress enables services and applications inside the
cluster to expose ports and features to the public.
Note
You must install the ingress add-on for minikube for some exercises. Refer to
Guided Exercise: Contrasting Kubernetes Distributions for instructions.
Storage
The storage extension allows pods to use persistent storage and nodes to distribute and share the
storage contents.
OpenShift bases its storage strategy on Red Hat OpenShift Data Foundation, a storage
provider supporting multiple storage strategies across nodes and hybrid clouds. minikube
provides out-of-the-box storage by using the underlying storage infrastructure (either the
local file system or the virtual machine's file system). This feature is provided by the
storage-provisioner add-on. minikube also provides a storage-provisioner-gluster add-on
that allows Kubernetes to use Gluster as shared persistent storage.
minikube provides the user with an administrator minikube account, so users have total control
over the cluster.
Different OpenShift implementations differ on authentication features, but all of them agree on
avoiding the use of administration accounts. Developer Sandbox provides limited access to the
user, restricting them to the username-dev and username-stage namespaces.
Operators
Operators are a core feature of most Kubernetes distributions. Operators allow automated
management of applications and Kubernetes services, by using a declarative approach.
minikube requires the olm add-on to be installed to enable operators in the cluster.
OpenShift distributions enable operators by default, although Kubernetes-as-a-Service
platforms usually restrict user-deployed operators. Developer Sandbox does not allow
users to install operators, but comes with the RHOAS-Operator and the Service Binding
Operator by default.
References
minikube documentation
https://minikube.sigs.k8s.io/docs/
Developer Sandbox
https://developers.redhat.com/developer-sandbox
Guided Exercise
Outcomes
You should be able to:
• Register for using a remote Kubernetes instance by using Developer Sandbox for
Red Hat OpenShift.
Instructions
Note
Installing a local Kubernetes cluster requires administrative privileges on your
development workstation. If you do not have administrative privileges, then jump
directly to Guided Exercise: Contrasting Kubernetes Distributions to use a remote
Kubernetes cluster.
Deploying a fully developed, multi-node Kubernetes cluster typically requires significant time and
compute resources. With minikube, you can quickly deploy a local Kubernetes cluster, allowing
you to focus on learning Kubernetes operations and application development.
minikube is an open source utility that allows you to quickly deploy a local Kubernetes cluster on
your personal computer. By using virtualization technologies, minikube creates a virtual machine
(VM) that contains a single-node Kubernetes cluster. VMs are virtual computers and each VM is
allocated its own system resources and operating system.
The latest minikube releases also allow you to create your cluster by using containers instead
of virtual machines. Nevertheless, this solution is still not mature, and it is not supported for this
course.
• An Internet connection
• At least 2 GB of free memory
• 2 CPUs or more
• At least 20 GB of free disk space
• A locally installed hypervisor (using a container runtime is not supported in this course)
Before installing minikube, a hypervisor technology must be installed or enabled on your local
system. A hypervisor is software that creates and manages virtual machines (VMs) on a shared
physical hardware system. The hypervisor pools and isolates hardware resources for VMs, allowing
many VMs to run on a shared physical hardware system, such as a server.
Note
Prefix the following commands with sudo if you are running as a user without
administrative privileges.
Use your system package manager to install the complete set of virtualization
libraries:
• If the repositories for your package manager do not include an appropriate version
for minikube, then go to https://github.com/kubernetes/minikube/releases and
download the latest release matching your operating system.
Note
To set the default driver, run the command minikube config set driver
DRIVER.
2. Open the downloaded dmg file and follow the onscreen instructions to complete
the installation.
Note
Network connectivity might be temporarily lost while VirtualBox installs virtual
network adapters. A system reboot can also be required after a successful
installation.
Alternatively, if the brew command is available in your system, then you can install
VirtualBox using the brew install command.
Your output can differ, but must show the available version and the commit it is based
on.
Note
To set the default driver, run the command minikube config set driver
DRIVER.
Warning
System driver conflicts might occur if more than one hypervisor is installed or
enabled. Do not install or enable more than one hypervisor on your system.
1. Download the latest version of VirtualBox for Windows Hosts from
https://virtualbox.org/wiki/Downloads
Note
Network connectivity might be temporarily lost while VirtualBox installs virtual
network adapters. A system reboot can also be required after a successful
installation.
• Via PowerShell
• Via Settings
– In the search box on the taskbar, type Programs and Features, and select it
from the search results.
– Select Turn Windows features on or off from the list of options under Control
Panel Home.
2. Determine the name of the network adapter, such as Wi-Fi or Ethernet, to use by
running Get-NetAdapter.
3. Create an external virtual switch named minikube that uses the selected
network adapter and allows the management operating system to share the
adapter:
Note
If you executed the minikube-installer.exe installer from a terminal window,
close the terminal and open a new one before you start using minikube.
Note
To set the default driver, run the command minikube config set driver
DRIVER.
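With the driver configured, starting the local cluster can be sketched as follows (the virtualbox driver name here is an assumption; substitute the hypervisor driver you installed):

```shell
# Start a single-node local Kubernetes cluster with the chosen driver
minikube start --driver=virtualbox

# Verify that the cluster is up and the node is Ready
minikube status
kubectl get nodes
```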
In case of errors, make sure you are using the appropriate driver during the installation, or
refer to minikube Get Started documentation [https://minikube.sigs.k8s.io/docs/start/] for
troubleshooting.
5. Adding extensions
minikube comes with the bare minimum set of features. To add more features, minikube
provides an add-on based extension system. Developers can add more features by
installing the needed add-ons.
Use the minikube addons list command for a comprehensive list of the add-ons
available and the installation status.
• Installing the Ingress Add-on. For this course you must install the ingress add-on.
With your cluster up and ready, use the following command to enable the add-on:
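The enable command could look like this (a sketch assuming the standard minikube add-on name):

```shell
# Enable the ingress add-on in the running minikube cluster
minikube addons enable ingress

# Verify that the ingress controller pods reach the Running state
# (the namespace may be kube-system in older minikube releases)
kubectl get pods -n ingress-nginx
```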
Versions and docker images can vary in your deployment, but make sure the final validation
is successful.
• Installing the Dashboard add-on. The dashboard add-on is not required for this course
but serves as a visual graphical interface if you are not comfortable with CLI commands.
Once the dashboard is enabled, you can reach it by using the minikube dashboard
command. This command opens the dashboard web application in your default browser.
Press Ctrl+C in the terminal to close the connection to the dashboard.
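The enable-and-open steps can be sketched as:

```shell
# Enable the dashboard add-on, then open it in the default browser
minikube addons enable dashboard
minikube dashboard
```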
6. Using a Developer Sandbox for Red Hat OpenShift as a Remote Kubernetes cluster
Developer Sandbox for Red Hat OpenShift is a free Kubernetes-as-a-Service
platform offered by Red Hat Developers, based on Red Hat OpenShift.
Developer Sandbox allows users access to a pre-created Kubernetes cluster. Access is
restricted to two namespaces (or projects if using OpenShift nomenclature). Developer
Sandbox deletes pods after eight consecutive hours of running, and limits resources to
7 GB of RAM and 15 GB of persistent storage.
You need a free Red Hat account to use Developer Sandbox. Log in to your Red Hat
account, or if you do not have one, then click Create one now. Fill in the form
choosing a Personal account type, and then click CREATE MY ACCOUNT. You
might need to accept Red Hat terms and conditions to use the Developer Program
services.
When the account is ready you will be redirected back to the Developer Sandbox
page. Click Launch your Developer Sandbox for Red Hat OpenShift to log in to
Developer Sandbox.
If you just created your account, then you might need to wait a few seconds for
account approval. You might need to verify your account via 2-factor authentication.
Once the account is approved and verified, click Start using your sandbox. You might
need to accept Red Hat terms and conditions to use the Developer Sandbox.
In the OpenShift log in form, click DevSandbox to select the authentication method.
Routing traffic from your local machine to your Minikube Kubernetes cluster requires
two steps.
First you must find the local IP assigned to your ingress add-on. The minikube ip
command is the easiest way to find the ingress IP:
<IP-ADDRESS> hello.example.com
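The two steps can be sketched as follows (hello.example.com is the example hostname used in this course):

```shell
# Step 1: find the IP of the minikube cluster
minikube ip

# Step 2: map the hostname to that IP in the hosts file
# (on Linux and macOS the file is /etc/hosts; on Windows it is
# C:\Windows\System32\drivers\etc\hosts, edited as Administrator)
echo "$(minikube ip) hello.example.com" | sudo tee -a /etc/hosts
```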
Note
If for any reason you need to delete and recreate your Minikube cluster, then review
the IP address assigned to the cluster and update the hosts file accordingly.
To access services in the cluster, you use the declared hostname and potentially
any path associated with the ingress. So, if using the hello.example.com
hostname, and assuming the application is mapped to the path /myapp, your
application will be available at the URL http://hello.example.com/myapp.
To get the wildcard domain from the Console URL, remove the https://
console-openshift-console. prefix. For example, the wildcard domain
for the Console URL https://console-
openshift-console.apps.sandbox.x8i5.p1.openshiftapps.com is
apps.sandbox.x8i5.p1.openshiftapps.com.
To get the wildcard domain, remove from the API URL the https://, the
:6443, and change api to apps. For example, the wildcard domain for the API
URL https://api.sandbox.x8i5.p1.openshiftapps.com:6443 is
apps.sandbox.x8i5.p1.openshiftapps.com.
To get the wildcard domain, remove the first part of the hostname, that is everything
before the first period. For example, the wildcard domain for the hostname
example-username-dev.apps.sandbox.x8i5.p1.openshiftapps.com is
apps.sandbox.x8i5.p1.openshiftapps.com.
Once you know the wildcard domain for your Developer Sandbox cluster, use it to
generate a sub-domain to be used by your services. Remember that sub-domains
must be unique for the shared Developer Sandbox cluster. One method for creating
a unique sub-domain is to compose it in the format of <DEPLOYMENT-NAME>-
<NAMESPACE-NAME>.<WILDCARD-DOMAIN>.
So, if using the apps.sandbox.x8i5.p1.openshiftapps.com wildcard
domain and assuming a deployment named hello in a namespace named
username-dev then you can compose your application hostname as hello-
username-dev.apps.sandbox.x8i5.p1.openshiftapps.com.
Assuming the application is mapped to the path /myapp, your
application will be available at the URL http://hello-username-
dev.apps.sandbox.x8i5.p1.openshiftapps.com/myapp.
Finish
Introducing Kubectl
Objectives
After completing this section, you should be able to review the basic usage of the kubectl
command and understand how to connect to your Kubernetes cluster by using the CLI.
Introducing kubectl
The kubectl tool is a Kubernetes command-line tool that allows you to interact with your
Kubernetes cluster. It provides an easy way to perform tasks such as creating resources or
redirecting cluster traffic. The kubectl tool is available for the three main operating systems
(Linux, Windows and macOS).
For example, the following command displays the kubectl and Kubernetes version.
For example, the following sample sets the KUBECONFIG environment variable to the file /tmp/config.
All the commands related to the kubectl configuration are of the form:
If you want to see what the configuration file contains, then you can use the following command.
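Sketches of the commands described above (restoring the elided examples; the exact output format depends on your kubectl version):

```shell
# Display the kubectl client version and, if connected, the cluster version
kubectl version

# Point kubectl at an alternative configuration file
export KUBECONFIG=/tmp/config

# Display the contents of the current configuration
kubectl config view
```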
• Cluster: the URL for the API of a Kubernetes cluster. This URL identifies the cluster itself.
• Context: puts together a cluster (the API URL) and a user (who is connecting to that cluster).
For example, you might have two contexts that are using different clusters but the same user.
Defining Clusters
It is often necessary to work with multiple clusters, so kubectl can hold the information of several
Kubernetes clusters. In relation to the configuration for kubectl, a cluster is just the URL of the
API of the Kubernetes cluster. The kubectl config set-cluster command allows you to
create a new cluster connection by using the API URL.
For example, the following command creates a new cluster connection named my-cluster with
server 127.0.0.1:8087.
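The command could look like this (a sketch using the names given above):

```shell
# Register a cluster connection named my-cluster pointing at the given API server
kubectl config set-cluster my-cluster --server=127.0.0.1:8087
```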
Defining Users
The cluster configuration tells kubectl where the Kubernetes cluster is. The user configuration
identifies who connects to the cluster. To connect to the cluster, it is necessary to provide an
authentication method. There are several options to authenticate with the cluster:
• Using a token
The following command creates a new user named my-user with the token Py93bt12mT.
• Using a username and password
The following command creates a new user named my-user with the username kubernetes-
username and password kubernetes-password.
• Using certificates
The following command creates a new user named my-user with a certificate redhat-
certificate.crt and a key redhat-key.key.
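Sketches of the three variants (using the sample names from the text):

```shell
# Token-based authentication
kubectl config set-credentials my-user --token=Py93bt12mT

# Basic (username and password) authentication
kubectl config set-credentials my-user \
  --username=kubernetes-username --password=kubernetes-password

# Certificate-based authentication
kubectl config set-credentials my-user \
  --client-certificate=redhat-certificate.crt --client-key=redhat-key.key
```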
Defining Contexts
A context puts together a cluster and a user. kubectl uses both to connect and authenticate
against a Kubernetes cluster.
For example, the following command creates a new context by using a cluster named my-cluster
and a user named my-user.
In a kubectl context, it is possible to set a namespace. If provided, then any command would
be executed in that namespace. The following command creates a context that points to the
redhat-dev namespace.
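These commands could look like the following (the context name my-context is a hypothetical choice, as the original examples do not name it):

```shell
# Create a context combining the cluster and the user
kubectl config set-context my-context --cluster=my-cluster --user=my-user

# Create a context that also selects the redhat-dev namespace
kubectl config set-context my-context \
  --cluster=my-cluster --user=my-user --namespace=redhat-dev
```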
Once a context has been created, you can select it by using the use-context command.
After executing the previous command, further kubectl commands will use the my-cluster
context and, therefore, the cluster and user associated with that context.
You can also list the contexts available in the configuration by using the get-contexts option.
The * in the CURRENT column indicates the context that you are currently using.
Another way of checking the current context is by using the current-context option.
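Sketches of the three commands (my-cluster is the context name the text refers to):

```shell
kubectl config use-context my-cluster   # select the context
kubectl config get-contexts             # list contexts; * marks the current one
kubectl config current-context          # print the current context name
```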
For example, kubectl get pods will display all pods in the current namespace.
If you want to display just the information for one pod, then add the pod's name to the previous
command.
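For example (example1 is a sample pod name):

```shell
# List all pods in the current namespace
kubectl get pods

# Display only the pod named example1
kubectl get pod example1
```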
You can use this command to display other resources (services, jobs, ingresses…).
Note
Use the command kubectl api-resources to display all resource types that
you can create.
For example, kubectl delete pod example1 deletes the pod named example1.
You can use this command to delete other resources (services, jobs, ingresses…).
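A sketch of the delete command described above:

```shell
# Delete the pod named example1 from the current namespace
kubectl delete pod example1
```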
The apply command allows you to create, update, or delete resources from a manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
If the snippet was in a file named deployment.yml, then you could use apply to create the
deployment. Note that the -f option is used to indicate the file.
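The apply invocation could look like this:

```shell
# Create or update the resources defined in the manifest file
kubectl apply -f deployment.yml
```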
References
Overview of Kubectl
https://kubernetes.io/docs/reference/kubectl/overview/
Kubectl Command Reference
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
Guided Exercise
Outcomes
You should be able to:
• Install kubectl
• Connect to the OpenShift Developer Sandbox (if you are using the OpenShift Developer
Sandbox)
Instructions
The installation procedure of kubectl depends on your operating system.
• Copy the binary to a directory in your PATH and make sure it has executable permissions.
Transaction Summary
================================================================================
Install 1 Package
...output omitted...
• Give the binary file executable permissions. Move the binary to a directory in your
PATH.
Note
If you have previously installed Minikube with Homebrew, kubectl should already be
installed on your computer. You can skip the installation step and directly verify that
kubectl has been installed correctly.
• Create a new folder, such as C:\kube, to use as the destination directory of the
kubectl binary download.
– In the search box on the taskbar, type env, and select Edit the system
environment variables from the search results.
– Under the System variables section, select the row containing Path and
click Edit. This will open the Edit environment variable screen.
– Click New and type the full path of the folder containing the kubectl.exe (for
example, C:\kube).
• Click Code and then click Download ZIP. A ZIP file with the repository content is
downloaded.
Note
If you want to recover full access over your cluster, then you can change the kubectl
context to the default Minikube context, minikube. Use the command kubectl
config use-context minikube.
If you run the OpenShift Developer Sandbox script, it will configure kubectl to run
commands against the OpenShift Developer Sandbox cluster. The script will ask you to
provide some information, such as the cluster URL, user name, and token.
In your command-line terminal, move to the DO100x-apps directory and run the
script located at ./setup/operating-system/setup.sh. Replace operating-system
with linux if you are using Linux, or with macos if you are using macOS. Make
sure the script has executable permissions.
• Windows
• Open a web browser and navigate to the OpenShift Developer Sandbox website.
Log in with your username and password.
• Click on your username in the upper right pane of the screen. A dropdown menu
opens.
• In the dropdown menu, click Copy login command. A new tab opens. If necessary, log in
again with your account by clicking DevSandbox.
• The token you must provide in the script shows in your web browser.
• Keep these values. You will be asked for them in the script.
Run the appropriate script. The following instructions depend on your operating
system.
In your command-line terminal, move to the DO100x-apps directory and run the
script located at ./setup/operating-system/setup-sandbox.sh. Replace
operating-system with linux if you are using Linux, or with macos if you are using
macOS. Make sure the script has executable permissions.
• Windows
Finish
Objectives
After completing this section, you should be able to execute a pre-built application in your
Kubernetes cluster and review the resources related to the process.
The following example command creates a new pod named myname that uses the container image
referenced by myimage.
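A minimal form of that command, using the placeholder names from the text:

```shell
# Create a single pod named myname from the image myimage.
kubectl run myname --image=myimage
```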
Recent versions of kubectl run can only create new pods. For example, older examples of
this command might include a --replicas option, which has been removed.
Important
Use kubectl run to create pods for quick tests and experimentation.
Creating Resources
The kubectl create command creates new resources within the Kubernetes cluster. You must
specify the name and type of the resource, along with any information required for that resource
type.
You can specify --dry-run=client to prevent the creation of the object within the cluster. By
combining this with the output type option, you can generate resource definitions.
For example, the following command outputs the YAML definition of a new deployment resource
named webserver, by using the nginx image.
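The command would be similar to the following:

```shell
# Generate the deployment definition without creating it in the cluster.
kubectl create deployment webserver --image=nginx --dry-run=client -o yaml
```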
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webserver
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webserver
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
You can save this output to a file to actually create the object later. Reference the file by
specifying it with the -f option.
For example, the following command creates a new resource using the definition found in
mydef.yml:
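```shell
# Create the resource described in the mydef.yml manifest file.
kubectl create -f mydef.yml
```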
If you are familiar with certain variants of SQL syntax, then kubectl create is comparable to
INSERT whereas kubectl apply is akin to UPSERT.
At a minimum, the kubectl exec command requires the name of the pod and the command to
execute. For example, the following command executes ls within the running pod named mypod.
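```shell
# Run ls inside the container of the pod named mypod.
kubectl exec mypod -- ls
```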
The -- separates the parts of the command intended for Kubernetes itself from the command
that should be passed to and executed within the container.
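To open an interactive shell instead, a sketch (assuming the container image provides /bin/bash):

```shell
# Attach an interactive terminal session to the pod.
kubectl exec --stdin --tty mypod -- /bin/bash
```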
Notice the addition of the --stdin and --tty options. These are necessary to ensure input and
output are forwarded correctly to the interactive shell within the container.
References
Remove kubectl run generators PR
https://github.com/kubernetes/kubernetes/pull/87077
Get a Shell to a Running Container | Kubernetes
https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
kubectl Cheat Sheet
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Guided Exercise
Outcomes
You should be able to:
Note
You do not need to understand Kubernetes namespaces to do this exercise,
as they are solely used as an example resource.
Ensure your kubectl context refers to the user-dev namespace. Use the kubectl
config set-context --current --namespace=user-dev command to switch to
the appropriate namespace.
Instructions
1. Use kubectl run and kubectl exec to create a new pod and attach a shell session to
it.
1.1. Create a new pod named webserver that uses the httpd container image.
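A sketch of the command, using the pod name and image stated in the step:

```shell
# Create a pod named webserver from the httpd image.
kubectl run webserver --image=httpd
```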
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
1.4. View the contents of the httpd configuration file within the pod.
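A possible session; the configuration file path /usr/local/apache2/conf/httpd.conf is the default in the official httpd image and should be treated as an assumption:

```shell
# Open an interactive shell in the webserver pod.
kubectl exec --stdin --tty webserver -- /bin/bash

# Inside the container, view the httpd configuration file (assumed default path):
cat /usr/local/apache2/conf/httpd.conf
```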
[root@webserver:/#] exit
2. Create a pod resource definition file and use that file to create another pod in your cluster.
2.1. Create a new file named probes-pod.yml. Add the following resource manifest to
the file.
apiVersion: v1
kind: Pod
metadata:
  name: probes
  labels:
    app: probes
  namespace: user-dev
spec:
  containers:
  - name: probes
    image: 'quay.io/redhattraining/do100-probes:external'
    ports:
    - containerPort: 8080
2.2. Use the kubectl create command to create a new pod from the resource
manifest file.
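Assuming the manifest file created in the previous step, the command would be:

```shell
# Create the probes pod from the resource manifest file.
kubectl create -f probes-pod.yml
```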
3. Modify the metadata.labels.app field of the pod manifest and apply the changes.
apiVersion: v1
kind: Pod
metadata:
  name: probes
  labels:
    app: do100-probes
...output omitted...
3.2. Attempt to update the pod by using the kubectl create command.
Notice the error. Because you have previously created the pod, you cannot use
kubectl create.
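To apply the change instead, use kubectl apply with the same manifest file (a sketch):

```shell
# Update the existing pod from the modified manifest.
kubectl apply -f probes-pod.yml
```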
Note
The preceding usage of kubectl apply produces a warning that the
kubectl.kubernetes.io/last-applied-configuration annotation is
missing. In most scenarios, this can be safely ignored.
Ideally, to use kubectl apply in this precise manner, you should use the --save-
config option with kubectl create.
3.4. Verify that the label has been updated by using the kubectl describe pod command.
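For example:

```shell
# Inspect the pod; the Labels field should now show app=do100-probes.
kubectl describe pod probes
```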
Finish
Delete the pod and namespace to clean your cluster.
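A sketch of the cleanup commands, assuming the resources created in this exercise (deleting the namespace also removes any pods remaining in it):

```shell
# Remove the pods created during the exercise.
kubectl delete pod probes webserver

# Remove the exercise namespace.
kubectl delete namespace user-dev
```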
Summary
In this chapter, you learned:
• Different application and organizational needs determine how you should run and configure your
Kubernetes cluster.
• Kubernetes includes extensions to provide additional cluster functionality, such as storage and
ingress.
• The command-line tool kubectl is the main way to interact with and configure a Kubernetes cluster.