CCS


Chapter 1: Security Principles

• 1: Understand the Security Concepts of Information Assurance

• 2: Understand the Risk Management Process

• 3: Understand Security Controls

• 4: Understand Governance Elements and Processes

• 5: Understand ISC2 Code of Ethics

Chapter 2: Incident Response, Business Continuity and Disaster Recovery Concepts

• 1: Understand Incident Response

• 2: Understand Business Continuity

• 3: Understand Disaster Recovery

Chapter 3: Access Controls Concepts

• 1: Understand Access Control Concepts

• 2: Understand Physical Access Controls

• 3: Understand Logical Access Controls

Chapter 4: Network Security

• 1: Understand Computer Networking

• 2: Understand Network (Cyber) Threats and Attacks

• 3: Understand Network Security Infrastructure

Chapter 5: Security Operations

• 1: Understand Data Security

• 2: Understand System Hardening

• 3: Understand Best Practice Security Policies

• 4: Understand Security Awareness Training

Technology Requirements:
The CC eTextbook uses the VitalSource eReader, which allows you to view materials on multiple devices and platforms, online and offline.

The following are among the system requirements for accessing your eTextbook:

• A stable and continuous internet connection

Hardware Specifications
• Processor: 2 GHz or faster

• RAM: 4 GB or more

• Monitor: minimum resolution of 1024 x 768

• Video card

• Keyboard and mouse, or other assistive technology


Chapter 1: Security Principles

Chapter 1 Overview
Learning Objectives

Domain 1: Security Principles

After completing this chapter, the participant will be able to:

Discuss the foundational concepts of cybersecurity principles.

Recognize foundational security concepts of information assurance.

Define risk management terminology and summarize the process.

Relate risk management to personal or professional practices.

Classify types of security controls.

Distinguish between policies, procedures, standards, regulations and laws.

Demonstrate the relationship among governance elements.

Analyze appropriate outcomes according to the canons of the ISC2 Code of Ethics when given examples.

Practice the terminology and review security principles.

• 1: Understand the Security Concepts of Information Assurance

The CIA Triad

To define security, it has become common to use Confidentiality, Integrity and Availability, also known as the

CIA triad. The purpose of these terms is to describe security using relevant and meaningful words that make

security more understandable to management and users and define its purpose.
CIA triad
Confidentiality
Confidentiality relates to permitting authorized access to information, while at the same time protecting information from improper disclosure.

Integrity
Integrity is the property of information whereby it is recorded, used and maintained in a way that ensures its completeness, accuracy, internal consistency and usefulness for a stated purpose.

Availability
Availability means that systems and data are accessible at the time users need them.

Confidentiality

Confidentiality is a difficult balance to achieve when many system users are guests or
customers, and it is not known if they are accessing the system from a
compromised machine or vulnerable mobile application. So, the security professional’s
obligation is to regulate access—protect the data that needs protection, yet permit access
to authorized individuals.

Personally Identifiable Information (PII) is a term related to the area of confidentiality.

It pertains to any data about an individual that could be used to identify them. Other

terms related to confidentiality are protected health information (PHI), which is

information regarding one’s health status, and classified or sensitive information,

which includes trade secrets, research, business plans and intellectual property.

Another useful definition is sensitivity, which is a measure of the importance assigned to information by its owner, for the purpose of denoting its need for protection. Sensitive information is information that, if improperly disclosed (confidentiality) or modified (integrity), would harm an organization or individual. In many cases, sensitivity is related to the harm to external stakeholders; that is, people or organizations that may not be a part of the organization that processes or uses the information.

Integrity

Integrity measures the degree to which something is whole and complete, internally consistent and correct. The concept of integrity applies to:

• information or data
• systems and processes for business operations
• organizations
• people and their actions
Data integrity is the assurance that data has not been altered in an unauthorized

manner. This requires the protection of the data in systems and during processing to

ensure that it is free from improper modification, errors or loss of information and

is recorded, used and maintained in a way that ensures its completeness. Data integrity

covers data in storage, during processing and while in transit.

Information must be accurate, internally consistent and useful for a stated purpose. The

internal consistency of information ensures that information is correct on all related

systems so that it is displayed and stored in the same way on all systems. Consistency, as

part of data integrity, requires that all instances of the data be identical in form, content

and meaning.
System integrity refers to the maintenance of a known good configuration and

expected operational function as the system processes the information. Ensuring

integrity begins with an awareness of state, which is the current condition of the system.

Specifically, this awareness concerns the ability to document and understand the state of

data or a system at a certain point, creating a baseline. For example, a baseline can refer

to the current state of the information—whether it is protected. Then, to preserve that

state, the information must always continue to be protected through a transaction.

Going forward from that baseline, the integrity of the data or the system can always be

ascertained by comparing the baseline with the current state. If the two match, then the

integrity of the data or the system is intact; if the two do not match, then the integrity of

the data or the system has been compromised. Integrity is a primary factor in the

reliability of information and systems.

The need to safeguard information and system integrity may be dictated by laws and

regulations. Often, it is dictated by the needs of the organization to access and use

reliable, accurate information.


Availability

Availability can be defined as (1) timely and reliable access to information and the ability to use it, and (2) for authorized users, timely and reliable access to data and information services.

The core concept of availability is that data is accessible to authorized users when and

where it is needed and in the form and format required. This does not mean that data or

systems are available 100% of the time. Instead, the systems and data meet the

requirements of the business for timely and reliable access.

Some systems and data are far more critical than others, so the

security professional must ensure that the appropriate levels of availability are provided.

This requires consultation with the involved business to ensure that critical systems are

identified and available. Availability is often associated with the term criticality, because

it represents the importance an organization gives to data or an information system in

performing its operations or achieving its mission.

Authentication

When users have stated their identity, it is necessary to validate that they are the rightful owners of that

identity. This process of verifying or proving the user’s identification is known as authentication. Simply put,

authentication is a process to prove the identity of the requestor.

There are three common methods of authentication:

• Something you know: Passwords or passphrases

• Something you have: Tokens, memory cards, smart cards
• Something you are: Biometrics, measurable characteristics
Methods of Authentication
There are two types of authentication. Using only one of the methods of authentication stated previously is known as single-factor authentication (SFA). Granting users access only after successfully demonstrating or displaying two or more of these methods is known as multi-factor authentication (MFA).

Common best practice is to implement at least two of the three common techniques for authentication:

• Knowledge-based
• Token-based
• Characteristic-based
Knowledge-based authentication uses a passphrase or secret code to differentiate between an authorized

and unauthorized user. If you have selected a personal identification number (PIN), created a password or some

other secret value that only you know, then you have experienced knowledge-based authentication. The

problem with using this type of authentication alone is that it is often vulnerable to a variety of attacks. For

example, the help desk might receive a call to reset a user’s password. The challenge is ensuring that the

password is reset only for the correct user and not someone else pretending to be that user. For better security,

a second or third form of authentication that is based on a token or characteristic would be required prior to

resetting the password. The combined use of a user ID and a password consists of two things that are known,

and because it does not meet the requirement of using two or more of the authentication methods stated, it is

not considered MFA.
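To make the factor-category requirement concrete, here is a minimal Python sketch (names and categories are illustrative only, not part of any standard) that checks whether a set of presented factors spans at least two categories:

# Minimal illustration: MFA requires factors from at least two *different*
# categories, so two knowledge factors (user ID + password) do not qualify.

FACTOR_CATEGORIES = {
    "password": "knowledge",
    "pin": "knowledge",
    "hardware_token": "possession",
    "smart_card": "possession",
    "fingerprint": "inherence",
}

def is_multi_factor(presented_factors):
    """Return True only if the factors span two or more categories."""
    categories = {FACTOR_CATEGORIES[f] for f in presented_factors}
    return len(categories) >= 2

print(is_multi_factor(["password", "pin"]))             # False: both are knowledge
print(is_multi_factor(["password", "hardware_token"]))  # True: knowledge + possession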

Non-repudiation
Non-repudiation is a legal term and is defined as the protection against an individual falsely denying having

performed a particular action. It provides the capability to determine whether a given individual took a

particular action, such as created information, approved information or sent or received a message.

In today’s world of e-commerce and electronic transactions, there are opportunities for the impersonation of

others or denial of an action, such as making a purchase online and later denying it. It is important that all

participants trust online transactions. Non-repudiation methodologies ensure that people are held responsible

for transactions they conducted.
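Digital signatures are one common technical mechanism that supports non-repudiation. The sketch below is only an illustration and assumes the third-party Python cryptography package is available; a real deployment would also need certificates, key management and trusted timekeeping:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The signer holds the private key; anyone with the public key can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Order #1234: purchase 10 units"   # hypothetical transaction record
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)     # raises if message or signature changed
    print("Signature valid: signer cannot plausibly deny creating this record")
except InvalidSignature:
    print("Signature invalid: record or signature was altered")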


Privacy
Privacy is the right of an individual to control the distribution of information about themselves. While security

and privacy both focus on the protection of personal and sensitive data, there is a difference between them.

With the increasing rate at which data is collected and digitally stored across all industries, the push for privacy

legislation and compliance with existing policies steadily grows. In today’s global economy, privacy legislation

and regulations on privacy and data protection can impact corporations and industries regardless of physical

location. Global privacy is an especially crucial issue when considering requirements regarding the collection

and security of personal information. There are several laws that define privacy and data protection, which

periodically change. Ensuring that protective security measures are in place is not enough to meet privacy
regulations or to protect a company from incurring penalties or fines from mishandling, misuse, or improper

protection of personal or private information. An example of a law with multinational implications is the

European Union’s General Data Protection Regulation (GDPR) which applies to all organizations, foreign or

domestic, doing business in the EU or any persons in the EU. Companies operating or doing business within the

United States may also fall under several state legislations that regulate the collection and use of consumer

data and privacy. Likewise, member nations of the EU enact laws to put GDPR into practice and sometimes add

more stringent requirements. These laws, including national- and state-level laws, dictate that any

entity anywhere in the world handling the private data of people in a particular legal jurisdiction must abide by

its privacy requirements. As a member of an organization's data protection team, you will not be required to

interpret these laws, but you will need an understanding of how they apply to your organization.

Understand the Risk Management Process


Domain D1.2.1, D1.2.2

Module Objectives
L1.2.1 Define risk management terminology and summarize the process.
L1.2.2 Relate risk management to personal or professional practices.
Risks and security-related issues represent an ongoing concern of businesses as well as the field of
cybersecurity, but far too often organizations fail to proactively manage risk. Assessing and analyzing risk
should be a continuous and comprehensive exercise in any organization. As a member of an organization’s
security team, you will work through risk assessment, analysis, mitigation, remediation and communication.
There are many frameworks and models used to facilitate the risk management process, and each
organization makes its own determination of what constitutes risk and the level of risk it is willing to accept.
However, there are commonalities among the terms, concepts and skills needed to measure and manage risk.
This module gets you started by presenting foundational terminology and introducing you to the risk
management process.
First, a definition of risk is a measure of the extent to which an entity is threatened by a potential
circumstance or event. It is often expressed as a combination of:
1. the adverse impacts that would arise if the circumstance or event occurs, and
2. the likelihood of occurrence.
Information security risk reflects the potential adverse impacts that result from the possibility of
unauthorized access, use, disclosure, disruption, modification or destruction of information and/or
information systems. This definition represents that risk is associated with threats, impact and likelihood, and
it also indicates that IT risk is a subset of business risk.

Introduction to Risk Management

Narrator:
Information assurance and cybersecurity are greatly involved with the risk management process. The
level of cybersecurity required depends on the level of risk the entity is willing to accept; that is, the
potential consequences of what's going on in our environment.
Once we evaluate this risk, then we will implement security controls to mitigate the risk to the level
that we find acceptable.

Risks can be from cyberattacks, such as malware, social engineering, or denial-of-service attacks,
(or) from other situations that affect our environment, such as fire, violent crime, or natural
disasters.
With well-designed risk management technologies, we can recognize vulnerabilities and threats, and
calculate the likelihood and the potential impact of each threat.

Importance of Risk Management

What do we mean when we say threats and vulnerabilities?


A vulnerability is a gap or weakness in an organization’s protection of its valuable assets, including
information. A threat is something or someone that aims to exploit a vulnerability to gain unauthorized
access. By exploiting a vulnerability, the threat can harm an asset.
For example, a natural disaster, such as a major storm, poses a threat to the utility power supply, which
is vulnerable to flooding. The IT environment where production takes place is an asset. If the utility
power supply is cut off by a storm, the asset might be made unavailable, because the IT components
won’t work without power. Our job is to evaluate how likely it is that an event will take place and take
appropriate actions to mitigate the risk.

Risk Management Terminology

Vulnerability is a gap or weakness in an organization's protection of its valuable assets, including information.
Threat is something or someone that aims to exploit a vulnerability to gain unauthorized access.
Risk Management Terminology
Security professionals use their knowledge and skills to examine operational risk management, determine how

to use risk data effectively, work cross-functionally and report actionable information and findings to the

stakeholders concerned. Terms such as threats, vulnerabilities and assets are familiar to most cybersecurity

professionals.

• An asset is something in need of protection.


• A vulnerability is a gap or weakness in those protection efforts.
• A threat is something or someone that aims to exploit a vulnerability to thwart protection efforts.
Risk is the intersection of these terms. Let's look at them more closely.
Threats

A threat is a person or thing that takes action to exploit (or make use of) a target organization's system vulnerabilities, as part of achieving or furthering its goals or objectives. To better understand threats, consider the following scenario:

Narrator: Tourists are popular targets for pickpockets. The existence of pickpockets in a crowded tourist spot is a threat to the people gathered there. That threat applies to everyone in the vicinity, even other pickpockets. If you are in the vicinity and the pickpocket has identified you as a target, you are facing a threat actor whether you know it or not. The approach and technique taken by the pickpocket is their threat vector.

In the context of cybersecurity, typical threat actors include the following:

Insiders (either deliberately, by simple human error, or by gross incompetence).

Outside individuals or informal groups (either planned or opportunistic, discovering vulnerability).

Formal entities that are nonpolitical (such as business competitors and cybercriminals).

Formal entities that are political (such as terrorists, nation-states, and hacktivists).

Intelligence or information gatherers (could be any of the above).

Technology (such as free-running bots and artificial intelligence , which could be part of any of the above).

*Threat Vector: The means by which a threat actor carries out their objectives.

Vulnerabilities
A vulnerability is an inherent weakness or flaw in a system or component, which, if triggered or acted upon, could cause a risk event to occur. Consider the pickpocket scenario continued below.
An organization’s security team strives to decrease its vulnerability. To do so, they view their
organization with the eyes of the threat actor, asking themselves, “Why would we be an attractive
target?” The answers might provide steps to take that will discourage threat actors, cause them to look
elsewhere or simply make it more difficult to launch an attack successfully. For example, to protect
yourself from the pickpocket, you could carry your wallet in an inside pocket instead of the back pant
pocket or behave alertly instead of ignoring your surroundings. Managing vulnerabilities starts with
one simple step: Learn what they are.

Narrator: Let's say the pickpocket chooses you as a target because they see that it will be easier or more profitable to steal from you. Maybe you are distracted, have jewelry that is easy to snatch, or appear weak and less likely to put up a struggle. In other words, you appear more vulnerable than the other tourists, and the pickpocket feels that they can exploit that vulnerability or weakness.

Likelihood
When determining an organization's vulnerabilities, the security team will consider the probability, or likelihood, of a potential vulnerability being exploited within the construct of the organization's threat

environment. Likelihood of occurrence is a weighted factor based on a subjective analysis of the probability

that a given threat or set of threats is capable of exploiting a given vulnerability or set of vulnerabilities.

Finally, the security team will consider the likely results if a threat is realized and an event occurs.

Impact

Impact is the magnitude of harm that can be expected to result from the consequences of unauthorized

disclosure of information, unauthorized modification of information, unauthorized destruction of information,

or loss of information or information system availability.

Think about the impact and the chain of reaction that can result when an event occurs by revisiting the

pickpocket scenario:

Narrator: How do the pickpocket’s actions affect your ability to continue your journey? If you appear
to be a weak target and the pickpocket chooses to take your money by brute force, will you be able to
obtain more cash to complete your vacation or even return home? The downstream impact must also
be considered. What if you are injured and require medical treatment or even hospitalization? Impact
does not often stop with the incident itself.
Risk Identification
How do you identify risks?

Do you walk down the street watching out for traffic and looking for puddles on the ground?

Maybe you’ve noticed loose wires at your desk or water on the office floor?

If you’re already on the lookout for risks, you’ll fit with other security professionals who know it’s necessary to

dig deeper to find possible problems.

In the world of cyber, identifying risks is not a one-and-done activity. It’s a recurring process of identifying

different possible risks, characterizing them and then estimating their potential for disrupting the organization.

It involves looking at your unique company and analyzing its unique situation. Security professionals know their

organization’s strategic, tactical and operational plans.

Takeaways to remember about risk identification:

• Identify risk to communicate it clearly.


• Employees at all levels of the organization are responsible for identifying risk.
• Identify risk to protect against it.
As a security professional, you are likely to assist in risk assessment at a system level, focusing on process,

control, monitoring or incident response and recovery activities. If you’re working with a smaller organization,

or one that lacks any kind of risk management and mitigation plan and program, you might have the

opportunity to help fill that planning void.

When risks have been identified, it is time to prioritize and analyze core risks
through qualitative risk analysis and/or quantitative risk analysis. This is necessary
to determine root cause and narrow down apparent risks and core risks. Security
professionals work with their teams to conduct both qualitative and
quantitative analysis.

Understanding the organization’s overall mission and the functions that support the

mission helps to place risks in context, determine the root causes and prioritize the
assessment and analysis of these items. In most cases, management will provide

direction for using the findings of the risk assessment to determine a prioritized set of

risk-response actions.

One effective method to prioritize risk is to use a risk matrix, which helps identify priority

as the intersection of likelihood of occurrence and impact. It also gives the team a

common language to use with management when determining the final priorities. For

example, a low likelihood and a low impact might result in a low priority, while an

incident with a high likelihood and high impact will result in a high priority. Assignment of

priority may relate to business priorities, the cost of mitigating a risk or the potential for

loss if an incident occurs.
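As a simple illustration of reading priority off a risk matrix, the sketch below uses a small qualitative 3x3 mapping; the ratings and resulting priorities are illustrative, not prescribed values:

# A minimal sketch of a 3x3 qualitative risk matrix: priority is read off the
# intersection of likelihood and impact.

PRIORITY = {
    ("low", "low"): "low",        ("low", "medium"): "low",         ("low", "high"): "medium",
    ("medium", "low"): "low",     ("medium", "medium"): "medium",   ("medium", "high"): "high",
    ("high", "low"): "medium",    ("high", "medium"): "high",       ("high", "high"): "high",
}

def prioritize(likelihood, impact):
    """Return the priority at the intersection of likelihood and impact."""
    return PRIORITY[(likelihood, impact)]

print(prioritize("low", "low"))    # low priority
print(prioritize("high", "high"))  # high priority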


Chapter 3: Access Control Concepts

Module 1: Understand Access Control Concepts (D3.1, D3.2)


Module 2: Understand Physical Access Controls (D3.1)
Module 3: Understand Logical Access Controls (D3.2)

Chapter 3 Overview

Let’s take a more detailed look at the types of access control that every information

security professional should be familiar with. We will discuss both physical and logical

controls and how they are combined to strengthen the overall security of an

organization. This is where we describe who gets access to what, why access is

necessary, and how that access is managed.


Learning Objectives

Domain 3: Access Control Concepts Objectives

After completing this chapter, the participant will be able to:

Select access controls that are appropriate in a given scenario.

Relate access control concepts and processes to given scenarios.

Compare various physical access controls.

Describe logical access controls.

Practice the terminology of access controls and review concepts of access controls.

Chapter at a Glance
While working through Chapter 3, Access Controls Concepts, make sure to:

• Complete the Knowledge Check: Roles and Permissions


• Complete the Knowledge Check: Privileged Access Management
• Complete the Knowledge Check: Physical Access Controls
• Complete the Knowledge Check: Reading Users’ Credentials
• View the Chapter 3 Summary
• Take the Chapter 3 Quiz
• View the Terms and Definitions

Access Control Concepts


Relate access control concepts and processes to given scenarios.

Manny: In the last module, we covered all the planning that goes into incident response and disaster
recovery. But how do security professionals protect information from falling into the wrong hands in
the first place?

Tasha: That's the topic of our next module. Information security professionals are like gatekeepers,
controlling who gets access to which systems and data, why they get certain permissions or not, and
how. Let's find out more about these access control concepts.
What is a Security Control?
A control is a safeguard or countermeasure designed to preserve Confidentiality,

Integrity and Availability of data. This, of course, is the CIA Triad.

Access control involves limiting what objects can be available to what subjects according

to what rules. We will further define objects, subjects and rules later in this chapter.

For now, remember these three words, as they are the foundation upon which we will

build.

One brief example of a control is a firewall, which is included in a system or network to

prevent something from the outside from coming in and disturbing or compromising the

environment. The firewall can also prevent information on the inside from going out into

the Web where it could be viewed or accessed by unauthorized individuals.

Controls Overview
It can be argued that access controls are the heart of an information security program.

Earlier in this course we looked at security principles through foundations of risk management, governance,

incident response, business continuity and disaster recovery.

But in the end, security all comes down to, “who can get access to organizational assets (buildings, data,

systems, etc.) and what can they do when they get access?”

Access controls are not just about restricting access to information systems and data, but also about allowing

access. It is about granting the appropriate level of access to authorized personnel and processes and denying

access to unauthorized functions or individuals.

Access is based on three elements:


• Subjects (Who)

• Objects (What)

• Rules (How and When)

Subject (Who)
A subject can be defined as any entity that requests access to our assets. The entity
requesting access may be a user, a client, a process or a program, for example. A subject
is the initiator of a request for service; therefore, a subject is referred to as “active.”

A subject:

• Is a user, a process, a procedure, a client (or a server), a program, a device such as


an endpoint, workstation, smartphone or removable storage device with onboard
firmware.

• Is active: It initiates a request for access to resources or services.

• Requests a service from an object.

• Should have a level of clearance (permissions) that relates to its ability to


successfully access services or resources.

Object (What)

By definition, anything that a subject attempts to access is referred to as an object. An


object is a device, process, person, user, program, server, client or other entity that
responds to a request for service. Whereas a subject is active in that it initiates a request
for a service, an object is passive in that it takes no action until called upon by a subject.
When requested, an object will respond to the request it receives, and if the request is
wrong, the response will probably not be what the subject really wanted either.
Note that by definition, objects do not contain their own access control logic. Objects are
passive, not active (in access control terms), and must be protected from unauthorized
access by some other layers of functionality in the system, such as the integrated identity
and access management system. An object has an owner, and the owner has the right to
determine who or what should be allowed access to their object. Quite often the rules of
access are recorded in a rule base or access control list.

An object:

• Is a building, a computer, a file, a database, a printer or scanner, a server, a communications resource, a block of memory, an input/output port, a person, a software task, thread or process.

• Is anything that provides service to a user.

• Is passive.

• Responds to a request.

• May have a classification.

Rules (How and When)

An access rule is an instruction developed to allow or deny access to an object by


comparing the validated identity of the subject to an access control list. One example of a
rule is a firewall access control list. By default, firewalls deny access from any address to
any address, on any port. For a firewall to be useful, however, it needs more rules. A rule
might be added to allow access from the inside network to the outside network. Here we
are describing a rule that allows access to the object “outside network” by the subject
having the address “inside network.” In another example, when a user (subject) attempts
to access a file (object), a rule validates the level of access, if any, the user should have to
that file. To do this, the rule will contain or reference a set of attributes that define what
level of access has been determined to be appropriate.

A rule can:
• Compare multiple attributes to determine appropriate access.

• Allow access to an object.

• Define how much access is allowed.

• Deny access to an object.

• Apply time-based access.

[Access control matrix: subjects (User 1 through User 4) form the rows, objects (File 1 through File 4) form the columns, and each intersection holds the rule, that is, the set of permissions (Own, Read, Write) that subject has for that object.]
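As a minimal illustration of subjects, objects and rules working together, the Python sketch below encodes a small access control matrix (entries are illustrative, not taken from the figure) and applies a default-deny rule at each subject/object intersection:

# Minimal sketch: an access control matrix expressed as a lookup, with a rule
# that compares the requested action against the permissions recorded at the
# subject/object intersection. Subjects, objects and entries are illustrative.

MATRIX = {
    ("user_1", "file_1"): {"own", "read", "write"},
    ("user_2", "file_2"): {"read", "write"},
    ("user_3", "file_3"): {"read"},
}

def is_allowed(subject, obj, action):
    """Allow the request only if the matrix grants that action; deny otherwise."""
    return action in MATRIX.get((subject, obj), set())

print(is_allowed("user_1", "file_1", "write"))  # True
print(is_allowed("user_3", "file_3", "write"))  # False: read-only
print(is_allowed("user_2", "file_1", "read"))   # False: no entry, default deny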
Controls Assessments
Risk reduction depends on the effectiveness of the control. It must apply to the current

situation and adapt to a changing environment.

Consider a scenario where part of an office building is being repurposed for use as a

secure storage facility. Due to the previous use of the area, there are 5 doors which must

be secured before confidential files can be stored there. When securing a physical

location, there are several things to consider. To keep the information the most secure, it

might be recommended to install biometric scanners on all doors. A site assessment will

determine if all five doors need biometric scanners, or if only one or two doors need

scanners. The remaining doors could be permanently secured, or if the budget permits,

the doors could be removed and replaced with a permanent wall. Most importantly, the

cost of implementing the controls must align with the value of what is being protected. If

multiple doors secured by biometric locks are not necessary, and the access to the area

does not need to be audited, perhaps a simple deadbolt lock on all of the doors will

provide the correct level of control.


Defense in Depth
As you can see, we are not just looking at system access. We are looking at all access
permissions including building access, access to server rooms, access to networks and
applications and utilities. These are all implementations of access control and are part of
a layered defense strategy, also known as defense in depth, developed by an organization.

Defense in depth describes an information security strategy that integrates people,


technology and operations capabilities to establish variable barriers across multiple layers
and missions of the organization. It applies multiple countermeasures in a layered fashion to
fulfill security objectives. Defense in depth should be implemented to prevent or deter a
cyberattack, but it cannot guarantee that an attack will not occur.

A technical example of defense in depth, in which multiple layers of technical controls are
implemented, is when a username and password are required for logging in to your account,
followed by a code sent to your phone to verify your identity. This is a form of multi-factor
authentication using methods on two layers, something you have and something you know.
The combination of the two layers is much more difficult for an adversary to obtain than
either of the authentication codes individually.

For a non-technical example, consider the multiple layers of access required to get to the actual
data in a data center. First, a lock on the door provides a physical barrier to access the data
storage devices. Second, a technical access rule prevents access to the data via the network.
Finally, a policy, or administrative control defines the rules that assign access to authorized
individuals.
Module 2: Understand Physical Access Controls
Domain D3.1, D3.1.1, D3.1.2

Module Objective

L3.2.1 Compare various physical access controls.

What Are Physical Security Controls?


Physical access controls are items you can physically touch. They include physical mechanisms deployed to

prevent, monitor, or detect direct contact with systems or areas within a facility.

Examples of physical access controls include security guards, fences, motion detectors, locked doors/gates,

sealed windows, lights, cable protection, laptop locks, badges, swipe cards, guard dogs, cameras,

mantraps/turnstiles, and alarms.

Physical access controls are necessary to protect the assets of a company, including its most important asset,

people. When considering physical access controls, the security of the personnel always comes first, followed

by securing other physical assets.

Physical controls are implemented through a tangible mechanism. Examples include walls, fences, guards, locks, etc. In organizations, many physical control systems are linked to technical/logical systems, such as badge readers connected to door locks.
Why Have Physical Security Controls?
Physical access controls include fences, barriers, turnstiles, locks and other features that

prevent unauthorized individuals from entering a physical site, such as a workplace. This

is to protect not only physical assets such as computers from being stolen, but also to

protect the health and safety of the personnel inside.

Types of Physical Access Controls

Many types of physical access control mechanisms can be deployed in an environment to


control, monitor and manage access to a facility.

These range from deterrents to detection mechanisms.

Each area requires unique and focused physical access controls, monitoring and
prevention mechanisms.

The following sections discuss many such mechanisms that may be used to control
access to various areas of a site, including perimeter and internal security.

➔ Badge Systems and Gate Entry

➔ Environmental Design

➔ Biometrics
Badge Systems and Gate Entry

Physical security controls for human traffic are often done with technologies such as
turnstiles, mantraps and remotely or system-controlled door locks. For the system to
identify an authorized employee, an access control system needs to have some form of
enrollment station used to assign and activate an access control device. Most often, a
badge is produced and issued with the employee’s identifiers, with the enrollment
station giving the employee specific areas that will be accessible. In high-security
environments, enrollment may also include biometric characteristics. In general, an
access control system compares an individual’s badge against a verified database. If
authenticated, the access control system sends output signals allowing authorized
personnel to pass through a gate or a door to a controlled area. The systems are typically
integrated with the organization’s logging systems to document access activity (authorized and unauthorized).
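The following simplified Python sketch illustrates that flow: the badge is compared against an enrollment database, entry is granted only for authorized areas, and every attempt is logged. All names and data are hypothetical:

from datetime import datetime, timezone

ENROLLMENT_DB = {
    "badge-1001": {"employee": "A. Rivera", "areas": {"lobby", "office-2f"}},
    "badge-1002": {"employee": "S. Chen",   "areas": {"lobby", "server-room"}},
}

access_log = []

def request_entry(badge_id, area):
    """Compare the badge against the enrollment database and log the attempt."""
    record = ENROLLMENT_DB.get(badge_id)
    granted = record is not None and area in record["areas"]
    access_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "badge": badge_id,
        "area": area,
        "granted": granted,   # both authorized and unauthorized attempts are logged
    })
    return granted

print(request_entry("badge-1002", "server-room"))  # True
print(request_entry("badge-1001", "server-room"))  # False, but still logged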

A range of card types allow the system to be used in a variety of environments. These
cards include:

• Bar code

• Magnetic stripe

• Proximity
• Smart

• Hybrid

Environmental Design
Crime Prevention through Environmental Design (CPTED) approaches the challenge

of creating safer workspaces through passive design elements. This has great

applicability for the information security community as security professionals design,

operate and assess the organizational security environment. Other practices, such as

standards for building construction and data centers, also affect how we implement

controls over our physical environment. Security professionals should be familiar with

these concepts so they can successfully advocate for functional and effective physical

spaces where information is going to be created, processed and stored.

CPTED provides direction to solve the challenges of crime with organizational (people),

mechanical (technology and hardware) and natural design (architectural and circulation

flow) methods. By directing the flow of people, using passive techniques to signal who

should and should not be in a space and providing visibility to otherwise hidden spaces,

the likelihood that someone will commit a crime in that area decreases.

Biometrics
To authenticate a user’s identity, biometrics uses characteristics unique to the individual

seeking access. A biometric authentication solution entails two processes.

• Enrollment—during the enrollment process, the user’s registered biometric code is either stored in a system or on a smart card that is kept by the user.
• Verification—during the verification process, the user presents their biometric data to the system so that the biometric data can be compared with the stored biometric code.
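A highly simplified sketch of that two-step flow appears below; real biometric matching is far more sophisticated, and the template format, similarity threshold and comparison shown here are purely illustrative:

# Simplified sketch of biometric enrollment and verification: a stored template
# is compared with a fresh sample and accepted only if it is "close enough".

enrolled_templates = {}

def enroll(user_id, template):
    """Enrollment: store the user's registered biometric template."""
    enrolled_templates[user_id] = template

def verify(user_id, sample, threshold=0.9):
    """Verification: compare a fresh sample against the stored template."""
    template = enrolled_templates.get(user_id)
    if template is None:
        return False
    matches = sum(1 for a, b in zip(template, sample) if a == b)
    similarity = matches / max(len(template), len(sample))
    return similarity >= threshold

enroll("aidan", [1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
print(verify("aidan", [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]))  # True: exact match
print(verify("aidan", [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]))  # False: too dissimilar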
Even though the biometric data may not be secret, it is personally identifiable

information, and the protocol should not reveal it without the user’s consent. Biometrics

takes two primary forms, physiological and behavioral.

Physiological systems measure the characteristics of a person such as a fingerprint, iris

scan (the colored portion around the outside of the pupil in the eye), retinal scan (the

pattern of blood vessels in the back of the eye), palm scan and venous scans that look for

the flow of blood through the veins in the palm. Some biometrics devices combine

processes together—such as checking for pulse and temperature on a fingerprint

scanner—to detect counterfeiting.

Behavioral systems measure how a person acts by measuring voiceprints, signature

dynamics and keystroke dynamics. As a person types, a keystroke dynamics system

measures behavior such as the delay rate (how long a person holds down a key) and

transfer rate (how rapidly a person moves between keys).
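The sketch below illustrates those two measurements from a hypothetical series of key press/release timestamps (all values are illustrative):

# Minimal sketch of two keystroke-dynamics measures: dwell time (how long a key
# is held down) and transfer time (how quickly the typist moves between keys).

# (key, press_time_ms, release_time_ms) for a short typing sample
events = [("p", 0, 95), ("a", 160, 240), ("s", 300, 410), ("s", 480, 560)]

dwell_times = [release - press for _, press, release in events]
transfer_times = [
    events[i + 1][1] - events[i][2]   # next press minus current release
    for i in range(len(events) - 1)
]

print("dwell (ms):", dwell_times)        # [95, 80, 110, 80]
print("transfer (ms):", transfer_times)  # [65, 60, 70]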

Biometric systems are considered highly accurate, but they can be expensive to

implement and maintain because of the cost of purchasing equipment and registering all

users. Users may also be uncomfortable with the use of biometrics, considering them to

be an invasion of privacy or presenting a risk of disclosure of medical information (since

retina scans can disclose medical conditions). A further drawback is the challenge of

sanitization of the devices.


Monitoring
The use of physical access controls and monitoring personnel and equipment entering and leaving as well as

auditing/logging all physical events are primary elements in maintaining overall organizational security.

Monitoring Examples


➔ Cameras

➔ Logs

➔ Alarm systems

➔ Security guards


Cameras

Cameras are normally integrated into the overall security program and centrally
monitored. Cameras provide a flexible method of surveillance and monitoring. They can be a deterrent to criminal activity, can detect activities if combined with other sensors and, if recorded, can provide evidence after the activity. They are often used in locations where access is difficult or there is a need for a forensic record.

While cameras provide one tool for monitoring the external perimeter of facilities, other

technologies augment their detection capabilities. A variety of motion sensor

technologies can be effective in exterior locations. These include infrared, microwave and

lasers trained on tuned receivers. Other sensors can be integrated into doors, gates and

turnstiles, and strain-sensitive cables and other vibration sensors can detect if someone

attempts to scale a fence. Proper integration of exterior or perimeter sensors will alert an

organization to any intruders attempting to gain access across open space or attempting

to breach the fence line.

Logs
In this section, we are concentrating on the use of physical logs, such as a sign-in sheet
maintained by a security guard, or even a log created by an electronic system that
manages physical access. Electronic systems that capture system and security logs within
software will be covered in another section.

A log is a record of events that have occurred. Physical security logs are essential to

support business requirements. They should capture and retain information as long as

necessary for legal or business reasons. Because logs may be needed to prove

compliance with regulations and assist in a forensic investigation, the logs must be

protected from manipulation. Logs may also contain sensitive data about customers or

users and should be protected from unauthorized disclosure.


The organization should have a policy to review logs regularly as part of their

organization’s security program. As part of the organization’s log processes, guidelines

for log retention must be established and followed. If the organizational policy states to

retain standard log files for only six months, that is all the organization should have.
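As a minimal illustration of enforcing such a retention policy, the sketch below purges entries older than a roughly six-month window; the retention period, dates and log entries are illustrative only:

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # roughly six months, per the example policy above

logs = [
    {"time": datetime(2024, 1, 5, tzinfo=timezone.utc), "event": "door forced"},
    {"time": datetime(2024, 9, 1, tzinfo=timezone.utc), "event": "badge denied"},
]

def prune(entries, now=None):
    """Keep only entries still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    return [e for e in entries if now - e["time"] <= RETENTION]

logs = prune(logs, now=datetime(2024, 10, 1, tzinfo=timezone.utc))
print(logs)  # only the September entry survives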

A log anomaly is anything out of the ordinary. Identifying log anomalies is often the first

step in identifying security-related issues, both during an audit and during routine

monitoring. Some anomalies will be glaringly obvious: for example, gaps in date/time

stamps or account lockouts. Others will be harder to detect, such as someone trying to

write data to a protected directory. Although it may seem that logging everything so you

would not miss any important data is the best approach, most organizations would soon

drown under the amount of data collected.
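One simple example of automated anomaly detection is flagging unusually large gaps between consecutive log timestamps, as in the illustrative sketch below (the threshold and data are hypothetical):

from datetime import datetime, timedelta

timestamps = [
    datetime(2024, 5, 1, 8, 0),
    datetime(2024, 5, 1, 8, 5),
    datetime(2024, 5, 1, 12, 40),   # suspicious multi-hour gap before this entry
    datetime(2024, 5, 1, 12, 45),
]

MAX_GAP = timedelta(minutes=30)

# Flag any pair of consecutive entries separated by more than the allowed gap.
for earlier, later in zip(timestamps, timestamps[1:]):
    gap = later - earlier
    if gap > MAX_GAP:
        print(f"Anomaly: {gap} gap between {earlier} and {later}")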

Business and legal requirements for log retention will vary among economies, countries

and industries. Some businesses will have no requirements for data retention. Others

are mandated by the nature of their business or by business partners to comply with

certain retention data. For example, the Payment Card Industry Data Security Standard

(PCI DSS) requires that businesses retain one year of log data in support of PCI. Some

federal regulations include requirements for data retention as well.

If a business has no business or legal requirements to retain log data, how long should

the organization keep it? The first people to ask should be the legal department. Most

legal departments have very specific guidelines for data retention, and those guidelines

may drive the log retention policy.


Alarm systems

Alarm systems are commonly found on doors and windows in homes and office
buildings. In their simplest form, they are designed to alert the appropriate personnel
when a door or window is opened unexpectedly.

For example, an employee may enter a code and/or swipe a badge to open a door, and

that action would not trigger an alarm. Alternatively, if that same door was opened by

brute force without someone entering the correct code or using an authorized badge, an

alarm would be activated.

Another alarm system is a fire alarm, which may be activated by heat or smoke at a

sensor and will likely sound an audible warning to protect human lives in the vicinity. It

will likely also contact local response personnel as well as the closest fire department.

Finally, another common type of alarm system is in the form of a panic button. Once

activated, a panic button will alert the appropriate police or security personnel.

Security guards

Security guards are an effective physical security control. No matter what form of physical access control is
used, a security guard or other monitoring system will discourage a person from masquerading as someone
else or following closely on the heels of another to gain access. This helps prevent theft and abuse of
equipment or information.

Module 3: Understand Logical Access Controls


Domain D3.2, D3.2.3, D3.2.4, D3.2.5

Module Objective

L3.3.1 Describe logical access controls.


What are Logical Access Controls?
Whereas physical access controls are tangible methods or mechanisms that limit someone from getting access

to an area or asset, logical access controls are electronic methods that limit someone from getting access to

systems, and sometimes even to tangible assets or areas. Types of logical access controls include:

• Passwords
• Biometrics (implemented on a system, such as a smartphone or laptop)
• Badge/token readers connected to a system
These types of electronic tools limit who can get logical access to an asset, even if the person already has

physical access.

➔ Discretionary access control (DAC)

➔ Mandatory access control (MAC)

➔ Role-based access control (RBAC)

➔ Rule-based access control (RuBAC)

Discretionary Access Control (DAC)

Discretionary access control (DAC) is a specific type of access control policy that is
enforced over all subjects and objects in an information system.

In DAC, the policy specifies that a subject who has been granted access to information
can do one or more of the following:

• Pass the information to other subjects or objects

• Grant its privileges to other subjects

• Change security attributes on subjects, objects, information systems or system components

• Choose the security attributes to be associated with newly created or revised objects; and/or

• Change the rules governing access control; mandatory access controls restrict this capability
Most information systems in the world are DAC systems. In a DAC system, a user who
has access to a file is usually able to share that file with or pass it to someone else. This
grants the user almost the same level of access as the original owner of the file. Rule-
based access control systems are usually a form of DAC.

DAC Example
Discretionary access control systems allow users to establish or change these

permissions on files they create or otherwise have ownership of.

Steve and Aidan, for example, are two users (subjects) in a UNIX environment operating

with DAC in place. Typically, systems will create and maintain a table that maps subjects

to objects, as shown in the image. At each intersection is the set of permissions that a

given subject has for a specific object. Many operating systems, such as Windows and the

whole Unix family tree (including Linux) and iOS, use this type of data structure to make

fast, accurate decisions about authorizing or denying an access request. Note that this

data can be viewed as either rows or columns:

• An object’s access control list shows the total set of subjects who have any permissions at all for that specific object.
• A subject’s capabilities list shows each object in the system that said subject has any permissions for.
                Excel File 1              Excel File 2
Aidan           Read | Write | eXecute    Read | eXecute
Steve           Read                      Read | Write

(The column under Excel File 1 is the access control list for Excel File 1; the row for Aidan is Aidan's capabilities list.)
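The row/column duality described above can be illustrated with a short sketch: the same permission data yields an object's access control list (a column) or a subject's capabilities list (a row). The data mirrors the table; the function names are illustrative:

# The same subject/object permission data viewed two ways.
permissions = {
    ("Aidan", "Excel File 1"): {"read", "write", "execute"},
    ("Aidan", "Excel File 2"): {"read", "execute"},
    ("Steve", "Excel File 1"): {"read"},
    ("Steve", "Excel File 2"): {"read", "write"},
}

def acl(obj):
    """Access control list: every subject with permissions for this object."""
    return {subj: perms for (subj, o), perms in permissions.items() if o == obj}

def capabilities(subject):
    """Capabilities list: every object this subject has permissions for."""
    return {o: perms for (subj, o), perms in permissions.items() if subj == subject}

print(acl("Excel File 1"))    # Aidan and Steve, with their permissions for File 1
print(capabilities("Aidan"))  # both Excel files, with Aidan's permissions for each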

This methodology relies on the discretion of the owner of the access control object to

determine the access control subject’s specific rights. Hence, security of the object is

literally up to the discretion of the object owner. DACs are not very scalable; they rely on

the access control decisions made by each individual object owner, and it can be difficult

to find the source of access control issues when problems occur.


DAC in the Workplace
Most information systems are DAC systems. In a DAC system, a user who has access to a

file is able to share that file with or pass it to someone else. It is at the discretion of the

asset owner whether to grant or revoke access for a user. For access to computer files,

this can be shared file or password protections. For example, if you create a file in an

online file sharing platform you can restrict who sees it. That is up to your discretion. Or

it may be something low-tech and temporary, such as a visitor’s badge provided at the discretion of the worker at the security desk.

Mandatory Access Control (MAC)


A mandatory access control (MAC) policy is one that is uniformly enforced across all subjects and objects

within the boundary of an information system. In simplest terms, this means that only properly designated

security administrators, as trusted subjects, can modify any of the security rules that are established for

subjects and objects within the system. This also means that for all subjects defined by the organization (that is,

known to its integrated identity management and access control system), the organization assigns a subset of

total privileges for a subset of objects, such that the subject is constrained from doing any of the following:

• Passing the information to unauthorized subjects or objects


• Granting its privileges to other subjects
• Changing one or more security attributes on subjects, objects, the information system or system
components
• Choosing the security attributes to be associated with newly created or modified objects
• Changing the rules governing access control
Although MAC sounds very similar to DAC, the primary difference is who can control access. With Mandatory

Access Control, it is mandatory for security administrators to assign access rights or permissions; with

Discretionary Access Control, it is up to the object owner’s discretion.
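A minimal sketch of that centrally administered model appears below: labels are assigned by administrators, and a subject may read an object only if its clearance is at least the object's classification. This is a simplified, Bell-LaPadula-style read check, not the full MAC model, and the levels and data are illustrative:

# Labels are set centrally; individual users cannot change them.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top secret": 3}

subject_clearance = {"analyst": "secret", "contractor": "confidential"}
object_classification = {"budget.xlsx": "confidential", "ops-plan.docx": "top secret"}

def can_read(subject, obj):
    """Allow reading only if the subject's clearance meets the object's classification."""
    return LEVELS[subject_clearance[subject]] >= LEVELS[object_classification[obj]]

print(can_read("analyst", "budget.xlsx"))      # True
print(can_read("analyst", "ops-plan.docx"))    # False: insufficient clearance
print(can_read("contractor", "budget.xlsx"))   # True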


MAC in the Workplace

Mandatory access control is also determined by the owner of the assets, but on a more

across-the-board basis, with little individual decision-making about who gets access.

For example, at certain government agencies, personnel must have a certain type of

security clearance to get access to certain areas. In general, this level of access is set by

government policy and not by an individual giving permission based on their own

judgment.

Often this is accompanied by separation of duties, where your scope of work is limited

and you do not have access to see information that does not concern you; someone else

handles that information. This separation of duties is also facilitated by role-based access

control, as we will discuss next.

Role-Based Access Control (RBAC)

Role-based access control (RBAC), as the name suggests, sets up user permissions based on roles. Each role

represents users with similar or identical permissions.


RBAC in the Workplace

Role-based access control provides each worker privileges based on what role they have

in the organization. Only Human Resources staff have access to personnel files, for

example; only Finance has access to bank accounts; each manager has access to their

own direct reports and their own department. Very high-level system administrators may

have access to everything; new employees would have very limited access, the minimum

required to do their jobs.


Monitoring these role-based permissions is important, because if you expand one

person’s permissions for a specific reason—say, a junior worker’s permissions might be

expanded so they can temporarily act as the department manager—but you forget to

change their permissions back when the new manager is hired, then the next person to

come in at that junior level might inherit those permissions when it is not appropriate for

them to have them. This is called privilege creep or permissions creep. We discussed this

before, when we were talking about provisioning new users.

Having multiple roles with different combinations of permissions can require close

monitoring to make sure everyone has the access they need to do their jobs and nothing

more. In this world where jobs are ever-changing, this can sometimes be a challenge to

keep track of, especially with extremely granular roles and permissions. Upon hiring or

changing roles, a best practice is to not copy user profiles to new users. It is

recommended that standard roles are established, and new users are created based on

those standards rather than an actual user. That way, new employees start with the

appropriate roles and permissions.
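The sketch below illustrates that practice: new users are provisioned from standard role templates rather than copied from an existing user, so they never inherit another person's accumulated permissions. The roles and permissions shown are illustrative:

# Standard role templates defined once and reused for every new user.
ROLE_TEMPLATES = {
    "hr_staff": {"read_personnel_files"},
    "finance":  {"read_bank_accounts", "approve_payments"},
    "junior":   {"read_team_docs"},
}

users = {}

def provision(user_id, roles):
    """Create a user whose permissions come only from standard role templates."""
    users[user_id] = {
        "roles": set(roles),
        "permissions": set().union(*(ROLE_TEMPLATES[r] for r in roles)),
    }

provision("new_hire", ["junior"])
print(users["new_hire"]["permissions"])  # {'read_team_docs'} only, no inherited extras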


Risk Assessment
Risk assessment is defined as the process of identifying, estimating and prioritizing risks

to an organization’s operations (including its mission, functions, image and reputation),

assets, individuals, other organizations and even the nation. Risk assessment should

result in aligning (or associating) each identified risk resulting from the operation of an

information system with the goals, objectives, assets or processes that the organization

uses, which in turn aligns with or directly supports achieving the organization’s goals and

objectives.

A common risk assessment activity identifies the risk of fire to a building. While there are

many ways to mitigate that risk, the primary goal of a risk assessment is to estimate and

prioritize. For example, fire alarms are the lowest cost and can alert personnel to

evacuate and reduce the risk of personal injury, but they won’t keep a fire from

spreading or causing more damage. Sprinkler systems won’t prevent a fire but can

minimize the amount of damage done. However, while sprinklers in a data center limit

the fire’s spread, it is likely they will destroy all the systems and data on them. A gas-

based system may be the best solution to protect the systems, but it might be cost-

prohibitive. A risk assessment can prioritize these items for management to determine

the method of mitigation that best suits the assets being protected.

The result of the risk assessment process is often documented as a report or

presentation given to management for their use in prioritizing the identified risk(s). This

report is provided to management for review and approval. In some cases, management

may indicate a need for a more in-depth or detailed risk assessment performed by

internal or external resources.


Risk Treatment
Risk treatment relates to making decisions about the best actions to take regarding the identified and

prioritized risk. The decisions made are dependent on the attitude of management toward risk and the

availability — and cost — of risk mitigation. The options commonly used to respond to risk are:


Avoidance
Acceptance
Mitigation

Avoidance
Risk avoidance is the decision to attempt to eliminate the risk entirely. This could
include ceasing operation for some or all of the activities of the organization that
are exposed to a particular risk. Organization leadership may choose risk
avoidance when the potential impact of a given risk is too high or if the likelihood
of the risk being realized is simply too great.

Acceptance
Risk acceptance is taking no action to reduce the likelihood of a risk occurring.
Management may opt for conducting the business function that
is associated with the risk without any further action on the part of the
organization, either because the impact or likelihood of occurrence is negligible, or
because the benefit is more than enough to offset that risk.

Mitigation
Risk mitigation is the most common type of risk management and includes taking actions to prevent or
reduce the possibility of a risk event or its impact. Mitigation can involve remediation measures, or
controls, such as security controls, establishing policies, procedures, and standards to minimize adverse
risk. Risk cannot always be mitigated, but mitigations such as safety measures should always be in
place.

Risk Management Processes

Narrator: As we mentioned before, an asset is something that we need to protect. It can be


information, or it can be an actual physical piece of equipment, such as a rack in the server room or a
computer or tablet or even a phone. A vulnerability is a weakness in the system. It can be due to lack
of knowledge, or possibly outdated software. For example, perhaps we don't have a current operating
system, or our awareness training is lacking. A threat is something or someone that could cause harm
once they learn that we have a weakness. For example, if we have a back door open, either logically,
in our website, or even physically in the back office, someone can exploit that weakness and take
advantage of that gap in our defenses to access information. The likelihood or the probability of that
happening depends on the overall environment. In an environment that's extremely secure, such as a
data center or a bank, the likelihood that someone can come in and rob the bank is very low. Whether
they are seeking access through a web browser, or physically into the actual bank, their likelihood of
success is not high because security is very strong. In other situations, where we have fewer levels of
security, the likelihood that the environment can be compromised is much higher. In our daily
accounts, we often only have one username and a password and that is the extent of our defenses.
Anyone who obtains that username and password can gain access; therefore, the likelihood that this
environment can be compromised is very high. As a first step in the risk management process,
organizations need to figure out how much risk they are willing to take. This is called a risk appetite
or risk tolerance. For a very trivial example, if you are a big fan of football or a particular TV
program, you will have a low tolerance for having a power outage during a big game or your favorite
program. You also need to have power when you are trying to access important documents or sites for
your business, so your risk appetite depends on how important that asset is. If your data is extremely
sensitive, you will naturally be extremely averse to having any risk of a breach. To mitigate the risk,
one option is to hire another company with the expertise to help you maintain the security of your
environment. This will help reduce the risk. You would also consider implementing some security
controls, which we will explore shortly. If we don't have the competence or the means to protect
sensitive information, sometimes we need to avoid the risk. This means removing ourselves from a
situation that can result in problems and refraining from initiating risky activities until we achieve a
certain level of comfort with our security. We can also share or transfer the risk by obtaining
cybersecurity insurance, so the insurance company assumes the risk. While it is nearly impossible to
remove all risk, once we have done enough to reduce or transfer the risk, and we are comfortable with
the situation, then we can accept the level of risk that remains.

Governance Elements and Processes

Governance Elements
Any business or organization exists to fulfill a purpose, whether it is to provide raw materials to an industry,

manufacture equipment to build computer hardware, develop software applications, construct buildings or

provide goods and services. To complete the objective requires that decisions are made, rules and practices are

defined, and policies and procedures are in place to guide the organization in its pursuit of achieving its goals

and mission.

When leaders and management implement the systems and structures that the organization will use to achieve

its goals, they are guided by laws and regulations created by governments to enact public policy. Laws and

regulations guide the development of standards, which cultivate policies, which result in procedures.

How are regulations, standards, policies and procedures related? It might help to look at the list in reverse.

• Procedures are the detailed steps to complete a task that support departmental or organizational
policies.
• Policies are put in place by organizational governance, such as executive management, to provide
guidance in all activities to ensure that the organization supports industry standards and regulations.
• Standards are often used by governance teams to provide a framework to introduce policies and
procedures in support of regulations.
• Regulations are commonly issued in the form of laws, usually from government (not to be confused
with governance) and typically carry financial penalties for noncompliance.
Now that we see how they are connected, we’ll look at some details and examples of each.

Regulations and Laws

Regulations and associated fines and penalties can be imposed by governments at the

national, regional or local level. Because regulations and laws can be imposed and

enforced differently in different parts of the world, here are a few examples to connect

the concepts to actual regulations.

Health Insurance Portability and Accountability Act (HIPAA) of 1996 is an example

of a law that governs the use of protected health information (PHI) in the United States.
Violation of the HIPAA rule carries the possibility of fines and/or imprisonment for both

individuals and companies.

General Data Protection Regulation (GDPR) was enacted by the European Union (EU)

to control use of Personally Identifiable Information (PII) of its citizens and those in the

EU. It includes provisions that apply financial penalties to companies who handle data of

EU citizens and those living in the EU even if the company does not have a physical

presence in the EU, giving this regulation an international reach.

Finally, it is common to be subject to regulation on several levels. Multinational

organizations are subject to regulations in more than one nation in addition to multiple

regions and municipalities. Organizations need to consider the regulations that apply to

their business at all levels—national, regional and local—and ensure they are compliant

with the most restrictive regulation.

Standards

Organizations use multiple standards as part of their information systems security

programs, both as compliance documents and as advisories or guidelines.


Standards cover a broad range of issues and ideas and may provide assurance that an

organization is operating with policies and procedures that support regulations and are

widely accepted best practices.

International Organization for Standardization (ISO) develops and publishes

international standards on a variety of technical subjects, including information systems

and information security, as well as encryption standards. ISO solicits input from the

international community of experts on its standards prior to publishing.

Documents outlining ISO standards may be purchased online.

National Institute of Standards and Technology (NIST) is a United States government

agency under the Department of Commerce and publishes a variety of technical

standards in addition to information technology and information security standards.

Many of the standards issued by NIST are requirements for U.S. government agencies

and are considered recommended standards by industries worldwide. NIST standards

solicit and integrate input from industry and are free to download from the NIST website.

Finally, think about how computers talk to other computers across the globe. People

speak different languages and do not always understand each other. How are computers

able to communicate? Through standards, of course!

Through the Internet Engineering Task Force (IETF), there are standards in communication

protocols that ensure all computers can connect with each other across borders, even

when the operators do not speak the same language.

Institute of Electrical and Electronics Engineers (IEEE) also sets standards for

telecommunications, computer engineering and similar disciplines.


Policies

Policy is informed by applicable law(s) and specifies which standards and guidelines the

organization will follow. Policy is broad, but not detailed; it establishes context and sets

out strategic direction and priorities. Governance policies are used to moderate and

control decision-making, to ensure compliance when necessary and to guide the creation

and implementation of other policies.

Policies are often written at many levels across the organization. High-level governance

policies are used by senior executives to shape and control decision-making processes.

Other high-level policies direct the behavior and activity of the entire organization as it

moves toward specific or general goals and objectives. Functional areas such as human

resources management, finance and accounting, and security and asset protection

usually have their own sets of policies. Whether imposed by laws and regulations or by

contracts, the need for compliance might also require the development of specific high-

level policies that are documented and assessed for their effective use by the

organization.

Policies are implemented, or carried out, by people; for that, someone must expand the

policies from statements of intent and direction into step-by-step instructions, or

procedures.
Procedures

Procedures define the explicit, repeatable activities necessary to accomplish a specific

task or set of tasks. They provide supporting data, decision criteria or other explicit

knowledge needed to perform each task. Procedures can address one-time or infrequent

actions or common, regular occurrences. In addition, procedures establish the

measurement criteria and methods to use to determine whether a task has been

successfully completed. Properly documenting procedures and training personnel on

how to locate and follow them is necessary for deriving the maximum organizational

benefits from procedures.


Chapter 4: Network Security

Chapter 4 Overview
Let’s take a more detailed look at computer networking and securing the network. In

today’s world, the internet connects nearly everyone and everything, and this is

accomplished through networking. While most see computer networking as a positive,

criminals routinely use the internet, and the networking protocols themselves, as

weapons and tools to exploit vulnerabilities and for this reason we must do our best to

secure the network. We will review the basic components of a network, threats and

attacks to the network, and learn how to protect them from attackers. Network security

itself can be a specialty career within cybersecurity; however, all information security

professionals need to understand how networks operate and are exploited to better

secure them.

Learning Objectives

Domain 4: Network Security Objectives

After completing this chapter, the participant will be able to:

L4 - Explain the concepts of network security.

L4.1.1 - Recognize common networking terms and models.

L4.1.2 - Identify common protocols and ports and their secure counterparts.

L4.2.1 - Identify types of network (cyber) threats and attacks.

L4.2.2 - Discuss common tools used to identify and prevent threats.

L4.3.1 - Identify common data center terminology.

L4.3.2 - Recognize common cloud service terminology.

L4.3.3 - Identify secure network design terminology.

L4.4.1 - Practice the terminology of and review network security concepts.

Module 1: Understand Computer Networking

What is Networking
A network is simply two or more computers linked together to share data, information or resources.
To properly establish secure data communications, it is important to explore all of the technologies involved in

computer communications. From hardware and software to protocols and encryption and beyond, there are

many details, standards and procedures to be familiar with.

Types of Networks
There are two basic types of networks:

•Local area network (LAN) - A local area network (LAN) is a network typically
spanning a single floor or building. This is commonly a limited geographical area.
•Wide area network (WAN) - Wide area network (WAN) is the term usually
assigned to the long-distance connections between geographically remote
networks.

Network Devices

Hub

Hubs are used to connect multiple devices in a network. They’re less likely to be seen in
business or corporate networks than in home networks. Hubs are wired devices and are not
as smart as switches or routers.

Switch

Rather than using a hub, you might consider using a switch, or what is also known as an
intelligent hub. Switches are wired devices that know the addresses of the devices
connected to them and route traffic to that port/device rather than retransmitting to all
devices.

Offering greater efficiency for traffic delivery and improving the overall throughput of

data, switches are smarter than hubs, but not as smart as routers. Switches can also

create separate broadcast domains when used to create VLANs, which will be discussed

later.

Router
Routers are used to control traffic flow on networks and are often used to connect similar
networks and control traffic flow between them. Routers can be wired or wireless and can
connect multiple switches. Smarter than hubs and switches, routers determine the most
efficient “route” for the traffic to flow across the network.

Firewall
Firewalls are essential tools in managing and controlling network traffic and protecting the
network. A firewall is a network device used to filter traffic. It is typically deployed between a
private network and the internet, but it can also be deployed between departments
(segmented networks) within an organization (overall network). Firewalls filter traffic based on
a defined set of rules, also called filters or access control lists.

Server

A server is a computer that provides information to other computers on a network. Some


common servers are web servers, email servers, print servers, database servers and file
servers. All of these are, by design, networked and accessed in some way by a client
computer. Servers are usually secured differently than workstations to protect the
information they contain.

Endpoint
Endpoints are the ends of a network communication link. One end is often at a server where
a resource resides, and the other end is often a client making a request to use a network
resource. An endpoint can be another server, desktop workstation, laptop, tablet, mobile
phone or any other end user device.

Other Networking Terms


Ethernet
Ethernet (IEEE 802.3) is a standard that defines wired connections of

networked devices. This standard defines the way data is formatted over

the wire to ensure disparate devices can communicate over the same cables.

Device Address

•Media Access Control (MAC) Address - Every network device is assigned a Media
Access Control (MAC) address. An example is 00-13-02-1F-58-F5. The first
3 bytes (24 bits) of the address denote the vendor or manufacturer of the physical
network interface. No two devices can have the same MAC address in the same
local network; otherwise an address conflict occurs.

•Internet Protocol (IP) Address - While MAC addresses are generally assigned in
the firmware of the interface, IP hosts associate that address with a unique logical
address. This logical IP address represents the network interface within the
network and can be useful to maintain communications when a physical device is
swapped with new hardware. Examples are 192.168.1.1 and 2001:db8::ffff:0:1.
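
As a small illustration of the address structure described in the bullets above, the following Python sketch (illustrative only; it performs no networking) splits the example MAC address from the text into its vendor and device-specific portions:

    mac = "00-13-02-1F-58-F5"            # the example MAC address used above
    octets = mac.split("-")
    oui = "-".join(octets[:3])           # first 3 bytes (24 bits) identify the vendor/manufacturer
    device_part = "-".join(octets[3:])   # remaining 3 bytes are assigned by that vendor
    print("Vendor prefix (OUI):", oui)
    print("Device-specific part:", device_part)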

Networking at a Glance
This diagram represents a small business network, which we will build upon during this

lesson. The lines depict wired connections. Notice how all devices behind the firewall
connect via the network switch, and the firewall lies between the network switch and the

internet.

[Diagram: small business network — the internet connects to a router and then a firewall; behind the firewall, a network switch connects a server, several workstations and a wireless access point serving laptops, a tablet and a phone.]

The network diagram below represents a typical home network. Notice the primary

difference between the home network and the business network is that the router,

firewall, and network switch are often combined into one device supplied by your

internet provider and shown here as the wireless access point.

Networking Models
Many different models, architectures and standards exist that provide ways to

interconnect different hardware and software systems with each other for the purposes

of sharing information, coordinating their activities and accomplishing joint or shared

tasks.

Computers and networks emerge from the integration of communication devices,

storage devices, processing devices, security devices, input devices, output devices,

operating systems, software, services, data and people.

Translating the organization’s security needs into safe, reliable and effective network

systems needs to start with a simple premise. The purpose of all communications is to

exchange information and ideas between people and organizations so that they can get

work done.

Those simple goals can be re-expressed in network (and security) terms such as:

• Provide reliable, managed communications between hosts (and users)


• Isolate functions in layers
• Use packets as the basis of communication
• Standardize routing, addressing and control
• Allow layers beyond internetworking to add functionality
• Be vendor-agnostic, scalable and resilient
In the most basic form, a network model has at least two layers:

• Upper Layer (Application) - corresponds to the OSI Application (7), Presentation (6) and Session (5) layers.
• Lower Layer (Data Transport) - corresponds to the OSI Transport (4), Network (3), Data Link (2) and Physical (1) layers.

Open Systems Interconnection (OSI) Model

The OSI Model was developed to establish a common way to describe the communication structure for

interconnected computer systems.

The OSI model serves as an abstract framework, or theoretical model, for how protocols should function in an

ideal world, on ideal hardware. Thus, the OSI model has become a common conceptual reference that is used

to understand the communication of various hierarchical components from software interfaces to physical

hardware.

The OSI model divides networking tasks into seven distinct layers. Each layer is responsible for performing

specific tasks or operations with the goal of supporting data exchange (in other words, network

communication) between two computers. The layers are interchangeably referenced by name or layer number.

For example, Layer 3 is also known as the Network Layer.

The layers are ordered specifically to indicate how information flows through the various levels of

communication. Each layer communicates directly with the layer above and the layer below it.

For example, Layer 3 communicates with both the Data Link (2) and Transport (4) layers.
The Application, Presentation, and Session Layers (5-7) are commonly referred to simply as data. However,

each layer has the potential to perform encapsulation. Encapsulation is the addition of header and possibly a

footer (trailer) data by a protocol used at that layer of the OSI model. Encapsulation is particularly important

when discussing Transport, Network and Data Link layers (2-4), which all generally include some form of

header. At the Physical Layer (1), the data unit is converted into binary, i.e., 01010111, and sent across physical

wires such as an ethernet cable.

It's worth mapping some common networking terminology to the OSI Model so you can see the value in the

conceptual model.

Consider the following examples:

• When someone references an image file like a JPEG or PNG, we are talking about the Presentation
Layer (6).
• When discussing logical ports such as NetBIOS, we are discussing the Session Layer (5).
• When discussing TCP/UDP, we are discussing the Transport Layer (4).
• When discussing routers sending packets, we are discussing the Network Layer (3).
• When discussing switches, bridges or WAPs sending frames, we are discussing the Data Link Layer (2).

Encapsulation occurs as the data moves down the OSI model from Application to Physical. As data is

encapsulated at each descending layer, the previous layer’s header, payload and footer are all treated as the

next layer’s payload. The data unit size increases as we move down the conceptual model and the contents

continue to encapsulate.

The inverse action occurs as data moves up the OSI model layers from Physical to Application. This process is

known as de-encapsulation (or decapsulation). The header and footer are used to properly interpret the data

payload and are then discarded. As we move up the OSI model, the data unit becomes smaller. The

encapsulation/de-encapsulation process is best depicted visually below:


[Figure: encapsulation — at the Application (7), Presentation (6) and Session (5) layers the unit is data; a header is added at the Transport (4) and Network (3) layers, a header and footer at the Data Link (2) layer, and the result is transmitted as bits at the Physical (1) layer.]
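
To make the encapsulation idea concrete, here is a deliberately simplified Python sketch. The bracketed header and footer strings are placeholders rather than real protocol headers; the point is only that each lower layer treats everything handed down from the layer above as its payload and wraps it:

    def encapsulate(payload: bytes, header: bytes, footer: bytes = b"") -> bytes:
        # Each layer prepends its header (and sometimes appends a footer) to the payload it receives.
        return header + payload + footer

    app_data = b"GET / HTTP/1.1\r\n\r\n"                      # Layers 5-7: "data"
    segment = encapsulate(app_data, b"[TCP HEADER]")           # Layer 4: segment
    packet = encapsulate(segment, b"[IP HEADER]")              # Layer 3: packet
    frame = encapsulate(packet, b"[ETH HEADER]", b"[FCS]")     # Layer 2: frame (header and footer)

    # The data unit grows as it moves down the stack; de-encapsulation strips the wrappers in reverse.
    print(len(app_data), len(segment), len(packet), len(frame))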

Transmission Control Protocol/Internet Protocol


(TCP/IP)
The OSI model wasn’t the first or only attempt to streamline networking protocols or establish a common
communications standard. In fact, the most widely used protocol today, TCP/IP, was developed in the early
1970s. The OSI model was not developed until the late 1970s. The TCP/IP protocol stack focuses on the core
functions of networking.

TCP/IP Protocol Architecture Layers

Application Layer - Defines the protocols for the transport layer.

Transport Layer - Permits data to move among devices.

Internet Layer - Creates/inserts packets.

Network Interface Layer - How data moves through the network.


Transmission Control Protocol/Internet Protocol (TCP/IP)
At the Application Layer, TCP/IP protocols include Telnet, File Transfer Protocol (FTP), Simple Mail
Transport Protocol (SMTP), and Domain Name Service (DNS).

The two primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-duplex
connection-oriented protocol, whereas UDP is a simplex connectionless protocol. In the Internet Layer,
Internet Control Message Protocol (ICMP) is used to determine the health of a network or a specific
link. ICMP is utilized by ping, traceroute and other network management tools. The ping utility
employs ICMP echo packets and bounces them off remote systems. Thus, you can use ping to
determine whether the remote system is online, whether the remote system is responding promptly,
whether the intermediary systems are supporting communications, and the level of performance
efficiency at which the intermediary systems are communicating.
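
As a hedged example of the ping behavior described above, this sketch simply wraps the operating system's own ping utility (which sends the ICMP echo requests). It assumes a ping command is available on the path and that ICMP is not being filtered:

    import platform
    import subprocess

    def ping(host: str, count: int = 1) -> bool:
        """Send ICMP echo requests via the system ping utility; True if a reply was received."""
        flag = "-n" if platform.system().lower() == "windows" else "-c"
        result = subprocess.run(
            ["ping", flag, str(count), host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    print(ping("127.0.0.1"))   # the loopback interface should normally answer
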
Internet Protocol (IPv4 and IPv6)
IP is currently deployed and used worldwide in two major versions. IPv4 provides a 32-bit address
space, which by the late 1980s was projected to be exhausted. IPv6 was introduced in December 1995
and provides a 128-bit address space along with several other important features.

IP hosts/devices associate the physical (MAC) address with a unique logical (IP) address. An IPv4 address is expressed as
four octets separated by a dot (.), for example, 216.12.146.140. Each octet may have a value between 0
and 255. However, 0 is the network itself (not a device on that network), and 255 is generally reserved
for broadcast purposes. Each address is subdivided into two parts: the network number and the host.
The network number assigned by an external organization, such as the Internet Corporation for
Assigned Names and Numbers (ICANN), represents the organization’s network. The host represents
the network interface within the network.

To ease network administration, networks are typically divided into subnets. Because subnets cannot be
distinguished with the addressing scheme discussed so far, a separate mechanism, the subnet mask, is
used to define the part of the address used for the subnet. The mask is usually converted to decimal
notation like 255.255.255.0.
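
For illustration, Python's standard ipaddress module can show how an address and a 255.255.255.0 mask split into a network number and a host portion; this minimal sketch reuses the example values quoted above:

    import ipaddress

    # 216.12.146.140 with a 255.255.255.0 mask (a /24), taken from the text above.
    iface = ipaddress.ip_interface("216.12.146.140/255.255.255.0")
    print(iface.network)                     # 216.12.146.0/24 -- the network number
    print(iface.ip)                          # 216.12.146.140  -- the host's address on that network
    print(iface.network.broadcast_address)   # 216.12.146.255  -- reserved for broadcast
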
With the ever-increasing number of computers and networked devices, it is clear that IPv4 does not
provide enough addresses for our needs. To overcome this shortcoming, IPv4 was sub-divided into
public and private address ranges. Public addresses are limited with IPv4, but this issue was addressed
in part with private addressing. Private addresses can be shared by anyone, and it is highly likely that
everyone on your street is using the same address scheme.

The nature of the addressing scheme established by IPv4 meant that network designers had to start
thinking in terms of IP address reuse. IPv4 facilitated this in several ways, such as its creation of the
private address groups; this allows every LAN in every SOHO (small office, home office) situation to
use addresses such as 192.168.2.xxx for its internal network addresses, without fear that some other
system can intercept traffic on their LAN.

This table shows the private addresses available for anyone to use:

Range

10.0.0.0 to 10.255.255.254

172.16.0.0 to 172.31.255.254

192.168.0.0 to 192.168.255.254

The first octet of 127 is reserved for a computer’s loopback address. Usually, the address 127.0.0.1 is
used. The loopback address is used to provide a mechanism for self-diagnosis and troubleshooting at
the machine level. This mechanism allows a network administrator to treat a local machine as if it were
a remote machine and ping the network interface to establish whether it is operational.
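
A quick, hedged way to check the private ranges and the loopback address discussed above is Python's ipaddress module; the addresses below are used purely for illustration:

    import ipaddress

    # Note: Python also reports the 127.0.0.0/8 loopback range as "private".
    for addr in ("10.4.2.9", "172.20.0.5", "192.168.2.100", "216.12.146.140", "127.0.0.1"):
        ip = ipaddress.ip_address(addr)
        print(f"{addr:>15}  private: {ip.is_private}  loopback: {ip.is_loopback}")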

IPv6 is a modernization of IPv4, which addressed a number of weaknesses in the IPv4 environment:

A much larger address field: IPv6 addresses are 128 bits, which supports
2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This ensures that we will not run
out of addresses.
Improved security: IPsec is an optional part of IPv4 networks, but a mandatory component
of IPv6 networks. This will help ensure the integrity and confidentiality of IP packets and allow
communicating partners to authenticate with each other.
Improved quality of service (QoS): This will help services obtain an appropriate share of a network’s
bandwidth.
An IPv6 address is shown as 8 groups of four digits. Instead of numeric (0-9) digits like IPv4, IPv6
addresses use the hexadecimal range (0000-ffff) and are separated by colons (:) rather than periods (.).
An example IPv6 address is 2001:0db8:0000:0000:0000:ffff:0000:0001. To make it easier for humans
to read and type, it can be shortened by removing the leading zeros at the beginning of each field and
substituting two colons (::) for the longest consecutive zero fields. All fields must retain at least one
digit. After shortening, the example address above is rendered as 2001:db8::ffff:0:1, which is much
easier to type. As in IPv4, there are some addresses and ranges that are reserved for special uses:

::1 is the local loopback address, used the same as 127.0.0.1 in IPv4.
The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved for documentation use, just like in
the examples above.
fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses reserved for internal network use and are not
routable on the internet.
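
The shortening rules described above can be checked with Python's ipaddress module; this small sketch uses the documentation address quoted in the text:

    import ipaddress

    full = "2001:0db8:0000:0000:0000:ffff:0000:0001"   # the long form used in the text
    ip = ipaddress.ip_address(full)

    print(ip.compressed)                                # 2001:db8::ffff:0:1 -- leading zeros dropped, '::' inserted
    print(ip.exploded)                                  # expands back to all eight 4-digit groups
    print(ipaddress.ip_address("::1").is_loopback)      # True -- the IPv6 loopback address
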
Wireless Networks

What is WiFi?
Wireless networking is a popular method of connecting corporate and home systems

because of the ease of deployment and relatively low cost. It has made networking more

versatile than ever before. Workstations and portable systems are no longer tied to a

cable but can roam freely within the signal range of the deployed wireless access points.

However, with this freedom comes additional vulnerabilities.

Wi-Fi range is generally wide enough for most homes or small offices, and range

extenders may be placed strategically to extend the signal for larger campuses or homes.

Over time the Wi-Fi standard has evolved, with each updated version faster than the last.

In a LAN, threat actors need to enter the physical space or immediate vicinity of the

physical media itself. For wired networks, this can be done by placing sniffer taps onto

cables, plugging in USB devices, or using other tools that require physical access to the

network. By contrast, wireless media intrusions can happen at a distance.

Security of the Network
TCP/IP’s vulnerabilities are numerous. Improperly implemented TCP/IP stacks in various operating systems are

vulnerable to various DoS/DDoS attacks, fragment attacks, oversized packet attacks, spoofing attacks,

and man-in-the-middle attacks.

TCP/IP (as well as most protocols) is also subject to passive attacks via monitoring or sniffing. Network

monitoring, or sniffing, is the act of monitoring traffic patterns to obtain information about a network.

Ports and Protocols (Applications/Services)


There are physical ports that you connect wires to and logical ports that determine where the data/traffic goes.


Physical ports

Physical ports are the ports on the routers, switches, servers, computers, etc. that you
connect the wires, e.g., fiber optic cables, Cat5 cables, etc., to create a network.

Logical ports

When a communication connection is established between two systems, it is done using


ports. A logical port (also called a socket) is little more than an address number that both
ends of the communication link agree to use when transferring data. Ports allow a single
IP address to be able to support multiple simultaneous communications, each using a
different port number. In the Application Layer of the TCP/IP model (which includes the
Session, Presentation, and Application Layers of the OSI model) reside numerous
application- or service-specific protocols. Data types are mapped using port numbers
associated with services. For example, web traffic (or HTTP) is port 80. Secure web traffic
(or HTTPS) is port 443. Table 5.4 highlights some of these protocols and their customary
or assigned ports. You’ll note that in several cases a service (or protocol) may have two
ports assigned, one secure and one insecure. When in doubt, systems should be
implemented using the most secure available version of a protocol and its services.

•Well-known ports (0–1023): These ports are related to the common protocols that
are at the core of the Transport Control Protocol/Internet Protocol (TCP/IP) model,
Domain Name Service (DNS), Simple Mail Transfer Protocol (SMTP), etc.
•Registered ports (1024–49151): These ports are often associated with proprietary
applications from vendors and developers. While they are officially approved by
the Internet Assigned Numbers Authority (IANA), in practice many vendors simply
implement a port of their choosing. Examples include Remote Authentication Dial-
In User Service (RADIUS) authentication (1812), Microsoft SQL Server (1433/1434)
and the Docker REST API (2375/2376).
•Dynamic or private ports (49152–65535): Whenever a service is requested that is
associated with well-known or registered ports, those services will respond with a
dynamic port that is used for that session and then released.
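
As a brief, hedged illustration of logical ports in practice, the sketch below looks up two well-known port numbers from the local services database and opens an ordinary TCP connection; example.com is only a placeholder destination and the lookup assumes the usual services file is present:

    import socket

    # Well-known services map to assigned ports (resolved from the local services database).
    print("http :", socket.getservbyname("http"))    # typically 80
    print("https:", socket.getservbyname("https"))   # typically 443

    # A TCP connection is addressed by host *and* logical port number.
    with socket.create_connection(("example.com", 443), timeout=5) as sock:
        print("Connected to", sock.getpeername())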

Secure Ports

Insecure Port: 21 (File Transfer Protocol, FTP)
Description: Port 21, File Transfer Protocol (FTP), sends the username and password using plaintext from the client to the server. This could be intercepted by an attacker and later used to retrieve confidential information from the server.
Secure Alternative Port: 22* (Secure File Transfer Protocol, SFTP)
Description: The secure alternative, SFTP, on port 22 uses encryption to protect the user credentials and packets of data being transferred.

SYN, SYN-ACK, ACK Handshake
The TCP three-way handshake establishes a connection before any data is exchanged: the client sends a SYN, the server answers with a SYN-ACK, and the client completes the handshake with an ACK.

Preventing Threats
While there is no single step you can take to protect against all threats, there are some basic

steps you can take that help reduce the risk of many types of threats.

• Keep systems and applications up to date. Vendors regularly release patches to correct
bugs and security flaws, but these only help when they are applied. Patch management
ensures that systems and applications are kept up to date with relevant patches.
• Remove or disable unneeded services and protocols. If a system doesn’t need a service
or protocol, it should not be running. Attackers cannot exploit a vulnerability in a service
or protocol that isn’t running on a system. As an extreme contrast, imagine a web server
is running every available service and protocol. It is vulnerable to potential attacks on
any of these services and protocols.
• Use intrusion detection and prevention systems. As discussed, intrusion detection and
prevention systems observe activity, attempt to detect threats and provide alerts. They
can often block or stop attacks.
• Use up-to-date anti-malware software. We have already covered the various types of
malicious code such as viruses and worms. A primary countermeasure is anti-malware
software.
• Use firewalls. Firewalls can prevent many different types of threats. Network-based
firewalls protect entire networks, and host-based firewalls protect individual systems.
This chapter included a section describing how firewalls can prevent attacks.

Antivirus
The use of antivirus products is strongly encouraged as a security best practice and is a

requirement for compliance with the Payment Card Industry Data Security Standard

(PCI DSS). There are several antivirus products available, and many can be deployed as

part of an enterprise solution that integrates with several other security products.

Antivirus systems try to identify malware based on the signature of known malware or by

detecting abnormal activity on a system. This identification is done with various types

of scanners, pattern recognition and advanced machine learning algorithms.
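
To illustrate only the signature idea (real antivirus products combine this with the pattern recognition and machine learning mentioned above), here is a toy Python sketch that flags a file whose hash appears in a hypothetical list of known-bad hashes:

    import hashlib

    # Hypothetical "signature database": SHA-256 hashes of files already known to be malicious.
    KNOWN_BAD_HASHES = {
        "d2c1a9e7..."  # placeholder value, not a real malware signature
    }

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def looks_malicious(path: str) -> bool:
        return sha256_of(path) in KNOWN_BAD_HASHES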


Anti-malware now goes beyond just virus protection as modern solutions try to provide a

more holistic approach detecting rootkits, ransomware and spyware. Many endpoint

solutions also include software firewalls and IDS or IPS systems.

Scans
Here is an example scan from Zenmap showing open ports on a host.

Regular vulnerability and port scans are a good way to evaluate the effectiveness of

security controls used within an organization. They may reveal areas where patches or

security settings are insufficient, where new vulnerabilities have developed or become

exposed, and where security policies are either ineffective or not being followed.

Attackers can exploit any of these vulnerabilities.
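
The sketch below is a minimal TCP connect check in Python, offered only to show the idea behind a port scan; it is not a substitute for a full scanner, and 192.0.2.10 is a reserved documentation address used as a placeholder target:

    import socket

    def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Attempt a TCP connection; 0 from connect_ex() means the port accepted the connection."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    for port in (22, 80, 443):
        state = "open" if is_port_open("192.0.2.10", port) else "closed or filtered"
        print(f"port {port}: {state}")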


Firewalls
In building construction or vehicle design, a firewall is a specially built physical barrier

that prevents the spread of fire from one area of the structure to another or from one

compartment of a vehicle to another. Early computer security engineers borrowed that

name for the devices and services that isolate network segments from each other, as a

security measure. As a result, firewalling refers to the process of designing, using or

operating different processes in ways that isolate high-risk activities from lower-risk

ones.

Firewalls enforce policies by filtering network traffic based on a set of rules. While a

firewall should always be placed at internet gateways, other internal network

considerations and conditions determine where a firewall would be employed, such as

network zoning or segregation of different levels of sensitivity. Firewalls have rapidly

evolved over time to provide enhanced security capabilities. This growth in capabilities

can be seen in Figure 5.37, which contrasts an oversimplified view of traditional and next-

generation firewalls. A next-generation firewall integrates a variety of threat management capabilities into a

single framework, including proxy services, intrusion prevention services (IPS) and tight

integration with the identity and access management (IAM) environment to ensure only

authorized users are permitted to pass traffic across the infrastructure. While firewalls

can manage traffic at Layers 2 (MAC addresses), 3 (IP ranges) and 7 (application

programming interface (API) and application firewalls), the traditional implementation

has been to control traffic at Layer 4.
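
As a simplified, hypothetical sketch of the rule-based filtering described above (a real firewall evaluates many more attributes and directions of traffic), the Python below walks an ordered rule list and falls back to an implicit default deny:

    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network

    @dataclass
    class Rule:
        action: str       # "allow" or "deny"
        src_net: str      # source network in CIDR notation
        dst_port: int     # destination (logical) port

    RULES = [
        Rule("allow", "192.168.1.0/24", 443),   # internal clients may reach HTTPS
        Rule("deny", "0.0.0.0/0", 23),          # block Telnet from anywhere
    ]

    def evaluate(src_ip: str, dst_port: int) -> str:
        for rule in RULES:                       # rules are evaluated in order, first match wins
            if ip_address(src_ip) in ip_network(rule.src_net) and dst_port == rule.dst_port:
                return rule.action
        return "deny"                            # implicit default-deny if nothing matched

    print(evaluate("192.168.1.25", 443))   # allow
    print(evaluate("203.0.113.9", 23))     # deny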


Intrusion Prevention System (IPS)
An intrusion prevention system (IPS) is a special type of active IDS that automatically

attempts to detect and block attacks before they reach target systems. A distinguishing

difference between an IDS and an IPS is that the IPS is placed in line with the traffic. In

other words, all traffic must pass through the IPS and the IPS can choose what traffic to

forward and what traffic to block after analyzing it. This allows the IPS to prevent an

attack from reaching a target. Since IPS systems are most effective at preventing

network-based attacks, it is common to see the IPS function integrated into firewalls. Just

like IDS, there are Network-based IPS (NIPS) and Host-based IPS (HIPS).

[Diagram: the internet connects to a router and then a firewall with an inline NIPS; behind it, a NIDS monitors the internal switch, which connects a server, workstations and a wireless access point serving laptops, a tablet and a phone.]
Memorandum of Understanding
(MOU)/Memorandum of Agreement
(MOA)
Some organizations seeking to minimize downtime and enhance BC (Business

Continuity) and DR (Disaster Recovery) capabilities will create agreements with other,

similar organizations. They agree that if one of the parties experiences an emergency

and cannot operate within their own facility, the other party will share its resources and

let them operate within theirs in order to maintain critical functions. These agreements

often even include competitors, because their facilities and resources meet the needs of

their particular industry.

For example, Hospital A and Hospital B are competitors in the same city. The hospitals

create an agreement with each other: if something bad happens to Hospital A (a fire,

flood, bomb threat, loss of power, etc.), that hospital can temporarily send personnel and

systems to work inside Hospital B in order to stay in business during the interruption

(and Hospital B can relocate to Hospital A, if Hospital B has a similar problem). The

hospitals have decided that they are not going to compete based on safety and security

—they are going to compete on service, price and customer loyalty. This way, they

protect themselves and the healthcare industry as a whole.


These agreements are called joint operating agreements (JOA) or memoranda of

understanding (MOU) or memoranda of agreement (MOA). Sometimes these agreements

are mandated by regulatory requirements, or they might just be part of the

administrative safeguards instituted by an entity within the guidelines of its industry.

The difference between an MOA or MOU and an SLA is that a Memorandum of

Understanding is more directly related to what can be done with a system or the

information.

The service level agreement goes down to the granular level. For example, if I'm

outsourcing the IT services, then I will need to have two full-time technicians readily

available, at least from Monday through Friday from eight to five. With cloud computing, I

need to have access to the information in my backup systems within 10 minutes. An SLA

specifies the more intricate aspects of the services.

We must be very cautious when outsourcing with cloud-based services, because we have

to make sure that we understand exactly what we are agreeing to. If the SLA promises

100 percent accessibility to information, is the access directly to you at the moment, or is

it access to their website or through their portal when they open on Monday? That's

where you'll rely on your legal team, who can supervise and review the conditions

carefully before you sign the dotted line at the bottom.


Network Security Infrastructure
On-Premises Data Centers

When it comes to data centers, there are two primary options: organizations can outsource the data center or

own the data center. If the data center is owned, it will likely be built on premises. A place, such as a building for the

data center, is needed, along with power, HVAC, fire suppression and redundancy.
Redundancy
The concept of redundancy is to design systems with duplicate components so that if a failure were to occur,

there would be a backup. This can apply to the data center as well. Risk assessments pertaining to the data

center should identify when multiple separate utility service entrances are necessary for redundant

communication channels and/or mechanisms.

If the organization requires full redundancy, devices should have two power supplies connected to diverse

power sources. Those power sources would be backed up by batteries and generators. In a high-availability

environment, even generators would be redundant and fed by different fuel types.
Cloud
Cloud computing is usually associated with an internet-based set of computing resources, and typically sold as
a service, provided by a cloud service provider (CSP).

Cloud computing is very similar to the electrical or power grid. It is provisioned in a geographic location and is

sourced using an electrical means that is not necessarily obvious to the consumer. But when you want

electricity, it’s available to you via a common standard interface and you pay only for what you use. In these

ways, cloud computing is very similar. It is a very scalable, elastic and easy-to-use “utility” for the provisioning

and deployment of Information Technology (IT) services.

There are various definitions of what cloud computing means according to the leading standards, including

NIST. This NIST definition is commonly used around the globe, cited by professionals and others alike to clarify

what the term “cloud” means:

“a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable
computing resources (such as networks, servers, storage, applications, and services) that can be rapidly

provisioned and released with minimal management effort or service provider interaction.” NIST SP 800-145

This image depicts cloud computing characteristics, service and deployment models, all of which will be

covered in this section and by your instructor.


Service Models
Some cloud-based services only provide data storage and access. When storing data in the cloud, organizations

must ensure that security controls are in place to prevent unauthorized access to the data.

There are varying levels of responsibility for assets depending on the service model. This includes maintaining

the assets, ensuring they remain functional, and keeping the systems and applications up to date with current

patches. In some cases, the cloud service provider is responsible for these steps. In other cases, the consumer

is responsible for these steps.

Types of cloud computing service models include Software as a Service (SaaS) , Platform as a Service

(PaaS) and Infrastructure as a Service (IaaS).

Software as a Service (SaaS)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

Deployment Models
There are four cloud deployment models. The cloud deployment model also affects the breakdown of

responsibilities of the cloud-based assets. The four cloud models available

are public, private, hybrid and community.


Managed Service Provider (MSP)
A managed service provider (MSP) is a company that manages information technology assets for another

company. Small- and medium-sized businesses commonly outsource part or all of their information technology

functions to an MSP to manage day-to-day operations or to provide expertise in areas the company does not

have. Organizations may also use an MSP to provide network and security monitoring and patching services.

Today, many MSPs offer cloud-based services augmenting SaaS solutions with active incident investigation and

response activities. One such example is a managed detection and response (MDR) service, where a vendor

monitors firewall and other security tools to provide expertise in triaging events.

Some other common MSP implementations are:

• Augment in-house staff for projects


• Utilize expertise for implementation of a product or service
• Provide payroll services
• Provide Help Desk service management
• Monitor and respond to security incidents
• Manage all in-house IT infrastructure
Service-Level Agreement (SLA)
The cloud computing service-level agreement (cloud SLA) is an agreement between a cloud service provider

and a cloud service customer based on a taxonomy of cloud computing–specific terms to set the quality of the

cloud services delivered. It characterizes quality of the cloud services delivered in terms of a set of measurable

properties specific to cloud computing (business and technical) and a given set of cloud computing roles (cloud

service customer, cloud service provider, and related sub-roles).

Think of a rule book and legal contract—that combination is what you have in a service-level agreement (SLA).

Let us not underestimate or downplay the importance of this document/agreement. In it, the minimum level of
service, availability, security, controls, processes, communications, support and many other crucial business

elements are stated and agreed to by both parties.

The purpose of an SLA is to document specific parameters, minimum service levels and remedies for any failure

to meet the specified requirements. It should also affirm data ownership and specify data return and

destruction details. Other important SLA points to consider include the following:

• Cloud system infrastructure details and security standards


• Customer right to audit legal and regulatory compliance by the CSP
• Rights and costs associated with continuing and discontinuing service use
• Service availability
• Service performance
• Data security and privacy
• Disaster recovery processes
• Data location
• Data access
• Data portability
• Problem identification and resolution expectations
• Change management processes
• Dispute mediation processes
• Exit strategy

Network Design
The objective of network design is to satisfy data communication requirements and result in efficient overall

performance.

Network segmentation involves controlling traffic among networked devices. Complete


or physical network segmentation occurs when a network is isolated from all outside
communications, so transactions can only occur between devices within the segmented
network.

A DMZ is a network area that is designed to be accessed by outside visitors but is still

isolated from the private network of the organization. The DMZ is often the host of public

web, email, file and other resource servers.


VLANs are created by switches to logically segment a network without altering its physical

topology.

A virtual private network (VPN) is a communication tunnel that provides point-to-point


transmission of both authentication and data traffic over an untrusted network.

Defense in depth uses multiple types of access controls in literal or theoretical layers to help
an organization avoid a monolithic security stance.

Network access control (NAC) is a concept of controlling access to an environment


through strict adherence to and implementation of security policy.

Defense in Depth
Defense in depth uses a layered approach when designing the security posture of an organization. Think about

a castle that holds the crown jewels. The jewels will be placed in a vaulted chamber in a central location

guarded by security guards. The castle is built around the vault with additional layers of security—soldiers,

walls, a moat. The same approach is true when designing the logical security of a facility or system. Using layers

of security will deter many attackers and encourage them to focus on other, easier targets.

Defense in depth provides more of a starting point for considering all types of controls—administrative,

technological, and physical—that empower insiders and operators to work together to protect their

organization and its systems.

Here are some examples that further explain the concept of defense in depth:

• Data: Controls that protect the actual data with technologies such as encryption, data leak prevention,
identity and access management and data controls.
• Application: Controls that protect the application itself with technologies such as data leak prevention,
application firewalls and database monitors.
• Host: Every control that is placed at the endpoint level, such as antivirus, endpoint firewall,
configuration and patch management.
• Internal network: Controls that are in place to protect uncontrolled data flow and user access across
the organizational network. Relevant technologies include intrusion detection systems, intrusion
prevention systems, internal firewalls and network access controls.
• Perimeter: Controls that protect against unauthorized access to the network. This level includes the use
of technologies such as gateway firewalls, honeypots, malware analysis and secure demilitarized zones
(DMZs).
• Physical: Controls that provide a physical barrier, such as locks, walls or access control.
• Policies, procedures and awareness: Administrative controls that reduce insider threats (intentional and
unintentional) and identify risks as soon as they appear.

Zero Trust
Zero trust networks are often microsegmented networks, with firewalls at nearly every

connecting point. Zero trust encapsulates information assets, the services that apply to

them and their security properties. This concept recognizes that once inside a trust-but-

verify environment, a user has perhaps unlimited capabilities to roam around, identify

assets and systems and potentially find exploitable vulnerabilities. Placing a greater

number of firewalls or other security boundary control devices throughout the network

increases the number of opportunities to detect a troublemaker before harm is done.


Many enterprise architectures are pushing this to the extreme of microsegmenting their

internal networks, which enforces frequent re-authentication of a user ID, as depicted in

this image.

Consider a rock music concert. By traditional perimeter controls, such as firewalls, you

would show your ticket at the gate and have free access to the venue, including

backstage where the real rock stars are. In a zero-trust environment, additional

checkpoints are added. Your identity (ticket) is validated to access the floor level seats,

and again to access the backstage area. Your credentials must be valid at all 3 levels to

meet the stars of the show.

Zero trust is an evolving design approach which recognizes that even the most robust

access control systems have their weaknesses. It adds defenses at the user, asset and

data level, rather than relying on perimeter defense. In the extreme, it insists that every

process or action a user attempts to take must be authenticated and authorized; the

window of trust becomes vanishingly small.

While microsegmentation adds internal perimeters, zero trust places the focus on the

assets, or data, rather than the perimeter. Zero trust builds more effective gates to

protect the assets directly rather than building additional or higher walls.
Network Access Control (NAC)
An organization’s network is perhaps one of its most critical assets. As such, it is vital that we both know and

control access to it, both from insiders (e.g., employees, contractors) and outsiders (e.g., customers, corporate

partners, vendors). We need to be able to see who and what is attempting to make a network connection.

At one time, network access was limited to internal devices. Gradually, that was extended to remote

connections, although initially those were the exceptions rather than the norm. This started to change with the

concepts of bring your own device (BYOD) and Internet of Things (IoT).

Considering just IoT for a moment, it is important to understand the range of devices that might be found

within an organization. They include heating, ventilation and air conditioning (HVAC) systems that monitor the

ambient temperature and adjust the heating or cooling levels automatically or air monitoring systems, through

security systems, sensors and cameras, right down to vending and coffee machines. Look around your own

environment and you will quickly see the scale of their use.

Having identified the need for a NAC solution, we need to identify what capabilities a solution may provide. As

we know, everything begins with a policy. The organization’s access control policies and associated security

policies should be enforced via the NAC device(s). Remember, of course, that an access control device only

enforces a policy and doesn’t create one.


The NAC device will provide the network visibility needed for access security and may later be used for incident

response. Aside from identifying connections, it should also be able to provide isolation for noncompliant

devices within a quarantined network and provide a mechanism to “fix” the noncompliant elements, such as

turning on endpoint protection. In short, the goal is to ensure that all devices wishing to join the network do so

only when they comply with the requirements laid out in the organization policies. This visibility will encompass

internal users as well as any temporary users such as guests or contractors, etc., and any devices they may

bring with them into the organization.

Let’s consider some possible use cases for NAC deployment:

• Medical devices
• IoT devices
• BYOD/mobile devices (laptops, tablets, smartphones)
• Guest users and contractors
As we have established, it is critically important that all mobile devices, regardless of their owner, go through an

onboarding process, ideally each time a network connection is made, and that the device is identified and

interrogated to ensure the organization’s policies are being met.

Network Segmentation (Demilitarized

Zone (DMZ))

Network segmentation is also an effective way to achieve defense in depth for

distributed or multi-tiered applications. The use of a demilitarized zone (DMZ), for

example, is a common practice in security architecture. With a DMZ, host systems that

are accessible through the firewall are physically separated from the internal network by

means of secured switches or by using an additional firewall to control traffic between

the web server and the internal network. Application DMZs (or semi-trusted networks)

are frequently used today to limit access to application servers to those networks or

systems that have a legitimate need to connect.


Segmentation for Embedded Systems and IoT
An embedded system is a computer implemented as part of a larger system. The embedded system is typically

designed around a limited set of specific functions in relation to the larger product of which it is a component.

Examples of embedded systems include network-attached printers, smart TVs, HVAC controls, smart

appliances, smart thermostats and medical devices.

Network-enabled devices are any type of portable or nonportable device that has native network capabilities.

This generally assumes the network in question is a wireless type of network, typically provided by a mobile

telecommunications company. Network-enabled devices include smartphones, mobile phones, tablets, smart

TVs or streaming media players (such as a Roku Player, Amazon Fire TV, or Google Android TV/Chromecast),

network-attached printers, game systems, and much more.

The Internet of Things (IoT) is the collection of devices that can communicate over the internet with one

another or with a control console in order to affect and monitor the real world. IoT devices might be labeled as

smart devices or smart-home equipment. Many of the ideas of industrial environmental control found in office

buildings are finding their way into more consumer-available solutions for small offices or personal homes.

Embedded systems and network-enabled devices that communicate with the internet are considered IoT

devices and need special attention to ensure that communication is not used in a malicious manner. Because

an embedded system is often in control of a mechanism in the physical world, a security breach could cause

harm to people and property. Since many of these devices have multiple access routes, such as ethernet,

wireless, Bluetooth, etc., special care should be taken to isolate them from other devices on the network. You

can impose logical network segmentation with switches using VLANs, or through other traffic-control means,
including MAC addresses, IP addresses, physical ports, protocols, or application filtering, routing, and access

control management. Network segmentation can be used to isolate IoT environments.

Microsegmentation
The toolsets of current adversaries are polymorphic in nature and allow threats to bypass static security

controls. Modern cyberattacks take advantage of traditional security models to move easily between systems

within a data center. Microsegmentation aids in protecting against these threats. A fundamental design

requirement of microsegmentation is to understand the protection requirements for traffic within a data center

and for traffic flows to and from the internet.


When organizations avoid infrastructure-centric design paradigms, they are more likely to become more

efficient at service delivery in the data center and more adept at detecting and preventing advanced persistent

threats.

Virtual Local Area Network (VLAN)


Virtual local area networks (VLANs) allow network administrators to use switches to create software-based

LAN segments, which can segregate or consolidate traffic across multiple switch ports. Devices that share a

VLAN communicate through switches as if they were on the same Layer 2 network. This image shows different

VLANs — red, green and blue — connecting separate sets of ports together, while sharing the same network

segment (consisting of the two switches and their connection). Since VLANs act as discrete networks,

communications between VLANs must be enabled. Broadcast traffic is limited to the VLAN, reducing congestion

and reducing the effectiveness of some attacks. Administration of the environment is simplified, as the VLANs

can be reconfigured when individuals change their physical location or need access to different services. VLANs

can be configured based on switch port, IP subnet, MAC address and protocols.

VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot be intercepted

because communication within a VLAN is restricted to member devices. However, there are attacks that allow a

malicious user to see traffic from other VLANs (so-called VLAN hopping). The VLAN technology is only one tool

that can improve the overall security of the network environment.
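The mapping of switch ports to VLANs can be pictured as a simple lookup: two devices exchange Layer 2 traffic directly only if their ports are assigned to the same VLAN, while traffic between VLANs must be routed (and can be filtered) at Layer 3. The sketch below is illustrative only; the port numbers and VLAN IDs are assumptions.

# Hypothetical port-to-VLAN assignment on a single switch (port numbers and VLAN IDs are illustrative)
PORT_VLAN = {
    1: 10,  # VLAN 10 ("red")
    2: 10,
    3: 20,  # VLAN 20 ("green")
    4: 20,
    5: 30,  # VLAN 30 ("blue")
}

def same_layer2_segment(port_a: int, port_b: int) -> bool:
    """Devices share a Layer 2 segment only when their ports are assigned to the same VLAN."""
    return port_a in PORT_VLAN and PORT_VLAN.get(port_a) == PORT_VLAN.get(port_b)

print(same_layer2_segment(1, 2))  # True  - both on VLAN 10
print(same_layer2_segment(1, 3))  # False - inter-VLAN traffic must be routed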


Virtual Private Network (VPN)
A virtual private network (VPN) is not necessarily an encrypted tunnel. It is simply a point-to-point connection

between two hosts that allows them to communicate. Secure communications can, of course, be provided by

the VPN, but only if the security protocols have been selected and correctly configured to provide a trusted

path over an untrusted network, such as the internet. Remote users employ VPNs to access their organization’s

network, and depending on the VPN’s implementation, they may have most of the same resources available to

them as if they were physically at the office. As an alternative to expensive dedicated point-to-point

connections, organizations use gateway-to-gateway VPNs to securely transmit information over the internet

between sites or even with business partners.
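To illustrate the idea of a trusted path over an untrusted network, the sketch below wraps an ordinary TCP connection in TLS using Python's standard ssl module. This is a simplified stand-in for a secure point-to-point channel, not a full VPN implementation (a real VPN adds tunneling, authentication and routing concerns), and the host name is a placeholder.

# Minimal sketch: protecting a point-to-point connection with TLS (host name is a placeholder)
import socket
import ssl

def open_secure_channel(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()          # validates the server certificate by default
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'

# open_secure_channel("vpn-gateway.example.org")   # hypothetical endpoint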


Chapter 5: Security Operations
Chapter 5 Agenda
Module 1: Understand Data Security (D5.1)
Module 2: Understand System Hardening (D5.2)
Module 3: Understand Best Practice Security Policies (D5.3)
Module 4: Understand Security Awareness Training (D5.3, D5.4)
Chapter 5 Overview

Let’s take a more detailed look at the day-to-day, moment-by-moment active use of the security controls and

risk mitigation strategies that an organization has in place. We will explore ways to secure the data and the

systems they reside on, and how to encourage secure practices among people who interact with the data and

systems during their daily duties.


Learning Objectives

Domain 5: Security Operations Objectives

After completing this chapter, the participant will be able to:

Explain concepts of security operations. L5.1.1

Discuss data handling best practices. L5.1.2

Identify important concepts of logging and monitoring. L5.1.3

Summarize the different types of encryption and their common uses. L5.2.1

Describe the concepts of configuration management.

Explain the application of common security policies. L5.4.1

Discuss the importance of security awareness training. L5.5.1

Practice the terminology of and review the concepts of security operations.

Module 1: Understand Data Security


Domain D5.0, D5.1.1, D5.1.2, D5.1.3

Module Objective

• L5.0 Explain concepts of security operations.
• L5.1.1 Discuss data handling best practices.
• L5.1.2 Identify key concepts of logging and monitoring.
• L5.1.3 Summarize the different types of encryption and their common uses.

Hardening is the process of applying secure configurations (to reduce the attack surface) and locking
down various hardware, communications systems and software, including the operating system, web
server, application server and applications, etc. In this module, we will introduce configuration
management practices that will ensure systems are installed and maintained according to industry and
organizational security standards.
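In practice, a configuration management or hardening check often boils down to comparing a system's current settings against a documented secure baseline and reporting every deviation. The sketch below is a minimal illustration; the setting names and values are assumptions, not an actual benchmark.

# Hypothetical baseline check: compare current settings against a secure configuration baseline
# (setting names and values are illustrative)
SECURE_BASELINE = {
    "password_min_length": 14,
    "ssh_root_login": "disabled",
    "auto_updates": "enabled",
}

def find_deviations(current: dict) -> dict:
    """Return every setting whose current value differs from the hardened baseline."""
    return {k: (current.get(k), expected)
            for k, expected in SECURE_BASELINE.items()
            if current.get(k) != expected}

print(find_deviations({"password_min_length": 8, "ssh_root_login": "disabled"}))
# {'password_min_length': (8, 14), 'auto_updates': (None, 'enabled')}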
Data Handling
Data itself goes through its own life cycle as users create, use, share and modify it. Many different models of

the life of a data item can be found, but they all have some basic operational steps in common. The data

security life cycle model is useful because it can align easily with the different roles that people and

organizations perform during the evolution of data from creation to destruction (or disposal). It also helps put

the different data states of in use, at rest and in motion, into context. Let’s take a closer look.

All ideas, data, information or knowledge can be thought of as going through six major sets of activities

throughout its lifetime. Conceptually, these involve:

1. Create
2. Store
3. Use
4. Share
5. Archive
6. Destroy
Create
Creating the knowledge, which is usually tacit knowledge at this point.

Store
Storing or recording it in some fashion (which makes it explicit).
Use
Using the knowledge, which may cause the information to be modified, supplemented or partially deleted.

Share
Sharing the data with other users, whether as a copy or by moving the data from one location to another.

Archive
Archiving the data when it is temporarily not needed.
Destroy
Destroying the data when it is no longer needed.
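One way to keep these six phases straight is to treat them as an ordered set of states that a data item passes through from creation to destruction (keeping in mind that use and share often repeat). The short sketch below is purely illustrative.

# Illustrative model of the six data life cycle phases
from enum import Enum

class DataLifecycle(Enum):
    CREATE = 1
    STORE = 2
    USE = 3
    SHARE = 4
    ARCHIVE = 5
    DESTROY = 6

# Example: walking a data item through its life, from creation to destruction
for phase in DataLifecycle:
    print(phase.name.title())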
Data Handling Practices
• Classification

• Labeling

• Retention

• Destruction

Classification
Businesses recognize that information has value and others might steal their advantage

if the information is not kept confidential, so they classify it. These classifications dictate

rules and restrictions about how that information can be used, stored or shared with

others. All of this is done to keep the temporary value and importance of that

information from leaking away. Classification of data, which asks the question “Is it

secret?” determines the labeling, handling and use of all data.

Before any labels can be attached to sets of data that indicate its sensitivity or handling

requirements, the potential impact or loss to the organization needs to be assessed. This

is our first definition: Classification is the process of recognizing the organizational

impacts if the information suffers any security compromises related to its characteristics

of confidentiality, integrity and availability. Information is then labeled and handled

accordingly.
Classifications are derived from laws, regulations, contract-specified standards or other

business expectations. One classification might indicate “minor, may disrupt some

processes” while a more extreme one might be “grave, could lead to loss of life or

threaten ongoing existence of the organization.” These descriptions should reflect the

ways in which the organization has chosen (or been mandated) to characterize and

manage risks.

The immediate benefit of classification is that it can lead to more efficient design and

implementation of security processes, if we can treat the protection needs for all

similarly classified information with the same controls strategy.

Labeling
Security labels are part of implementing controls to protect classified information. It is

reasonable to want a simple way of assigning a level of sensitivity to a data asset, such

that the higher the level, the greater the presumed harm to the organization, and thus

the greater security protection the data asset requires. This spectrum of needs is useful,

but it should not be taken to mean that clear and precise boundaries exist between the

use of “low sensitivity” and “moderate sensitivity” labeling, for example.

Data Sensitivity Levels and Labels

Unless otherwise mandated, organizations are free to create classification systems that

best meet their own needs. In professional practice, it is typically best if the organization

has enough classifications to distinguish between sets of assets with differing

sensitivity/value, but not so many classifications that the distinction between them is
confusing to individuals. Typically, two or three classifications are manageable, and more

than four tend to be difficult.

•Highly restricted: Compromise of data with this sensitivity label could possibly put
the organization’s future existence at risk. Compromise could lead to substantial
loss of life, injury or property damage, and the litigation and claims that would
follow.
•Moderately restricted: Compromise of data with this sensitivity label could lead to
loss of temporary competitive advantage, loss of revenue or disruption of planned
investments or activities.
•Low sensitivity (sometimes called “internal use only”): Compromise of data with
this sensitivity label could cause minor disruptions, delays or impacts.
•Unrestricted public data: As this data is already published, no harm can come
from further dissemination or disclosure.
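Classification decisions like these are often recorded as a simple mapping from assessed impact to a sensitivity label, which handling rules can then reference. The sketch below uses the four example labels above; the impact names, handling rules and fail-safe default are assumptions for illustration.

# Hypothetical mapping from assessed impact to a sensitivity label (labels follow the examples above)
IMPACT_TO_LABEL = {
    "grave":   "Highly restricted",
    "serious": "Moderately restricted",
    "minor":   "Low sensitivity",
    "none":    "Unrestricted public data",
}

HANDLING_RULES = {
    "Highly restricted":        "encrypt at rest and in transit; named individuals only",
    "Moderately restricted":    "encrypt in transit; need-to-know access",
    "Low sensitivity":          "internal use only",
    "Unrestricted public data": "no special handling required",
}

def label_for(impact: str) -> str:
    return IMPACT_TO_LABEL.get(impact, "Highly restricted")   # fail safe: default to the strictest label

print(label_for("minor"), "->", HANDLING_RULES[label_for("minor")])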

Retention
Information and data should be kept only for as long as it is beneficial, no more and no

less. For various types of data, certain industry standards, laws and regulations define

retention periods. When such external requirements are not set, it is an organization’s

responsibility to define and implement its own data retention policy. Data retention

policies are applicable both for hard copies and for electronic data, and no data should

be kept beyond its required or useful life. Security professionals should ensure that data

destruction is being performed when an asset has reached its retention limit. For the

security professional to succeed in this assignment, an accurate inventory must be

maintained, including the asset location, retention period requirement, and destruction

requirements. Organizations should conduct a periodic review of retained records in

order to reduce the volume of information stored and to ensure that only necessary

information is preserved.
Records retention policies indicate how long an organization is required to maintain

information and assets. Policies should guarantee that:

•Personnel understand the various retention requirements for data of different


types throughout the organization.
•The organization appropriately documents the retention requirements for each
type of information.
•The systems, processes and individuals of the organization retain information in
accordance with the required schedule but no longer.
A common mistake in records retention is applying the longest retention period to all

types of information in an organization. This not only wastes storage but also increases

risk of data exposure and adds unnecessary “noise” when searching or processing

information in search of relevant records. It may also be in violation of externally

mandated requirements such as legislation, regulations or contracts (which may result in

fines or other judgments). Records and information no longer mandated to be retained

should be destroyed in accordance with the policies of the enterprise and any

appropriate legal requirements that may need to be considered.
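A retention schedule is often implemented as a simple table of record types and retention periods that an automated job consults when deciding what is overdue for destruction. The sketch below is illustrative only; the record types and periods are assumptions, not legal guidance.

# Hypothetical retention schedule check (record types and periods are illustrative, not legal advice)
from datetime import date, timedelta
from typing import Optional

RETENTION_DAYS = {
    "invoice":       7 * 365,   # assumed seven-year financial retention
    "access_log":    365,
    "job_applicant": 2 * 365,
}

def due_for_destruction(record_type: str, created: date, today: Optional[date] = None) -> bool:
    """A record is due for destruction once its type-specific retention period has elapsed."""
    today = today or date.today()
    period = RETENTION_DAYS.get(record_type)
    if period is None:
        return False   # unknown record types are flagged for review rather than destroyed
    return today - created > timedelta(days=period)

print(due_for_destruction("access_log", date(2020, 1, 1), today=date(2022, 1, 1)))  # True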

Destruction

Data that might be left on media after deleting is known as remanence and may be a

significant security concern. Steps must be taken to reduce the risk that data remanence

could compromise sensitive information to an acceptable level. This can be done by one

of several means:

•Clearing the device or system, which usually involves writing multiple patterns of
random values throughout all storage media (such as main memory, registers and
fixed disks). This is sometimes called “overwriting” or “zeroizing” the system,
although writing zeros has the risk that a missed block or storage extent may still
contain recoverable, sensitive information after the process is completed.
•Purging the device or system, which eliminates (or greatly reduces) the chance
that residual physical effects from the writing of the original data values may still
be recovered, even after the system is cleared. Some magnetic disk storage
technologies, for example, can still have residual “ghosts” of data on their surfaces
even after being overwritten multiple times. Magnetic media, for example, can
often be altered sufficiently (for instance by degaussing) to meet security
requirements; in the most stringent cases, however, even degaussing may not be sufficient.
•Physical destruction of the device or system is the ultimate remedy to data
remanence. Magnetic or optical disks and some flash drive technologies may
require being mechanically shredded, chopped or broken up, etched in acid or
burned; their remains may be buried in protected landfills, in some cases.
In many routine operational environments, security considerations may accept that

clearing a system is sufficient. But when systems elements are to be removed and

replaced, either as part of maintenance upgrades or for disposal, purging or destruction

may be required to protect sensitive information from being compromised by an

attacker.
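As a rough illustration of clearing by overwriting, the sketch below overwrites a single file with random bytes before deleting it. This is a teaching example only: on flash media and copy-on-write or journaling file systems, overwriting in place does not guarantee removal of all remanence, which is why purging or physical destruction may still be required.

# Illustrative clearing of a single file by overwriting it with random data before deletion.
# NOTE: not sufficient for SSDs/flash or copy-on-write file systems, where purging or
# physical destruction may be required.
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))   # random pattern rather than all zeros
            f.flush()
            os.fsync(f.fileno())                 # push the overwrite to the storage device
    os.remove(path)

# overwrite_and_delete("/tmp/draft-report.txt")   # hypothetical file path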

Logging and Monitoring Security Events
Logging is the primary form of instrumentation that attempts to capture signals generated by events. Events

are any actions that take place within the systems environment and cause measurable or observable change in

one or more elements or resources within the system. Logging imposes a computational cost but is invaluable

when determining accountability. Proper design of logging environments and regular log reviews remain best

practices regardless of the type of computer system.

Major controls frameworks emphasize the importance of organizational logging practices. Information that

may be relevant to record and review includes (but is not limited to):

• user IDs
• system activities
• dates/times of key events (e.g., logon and logoff)
• device and location identity
• successful and rejected system and resource access attempts
• system configuration changes and system protection activation and deactivation events
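As a minimal sketch of how these fields might appear in a structured log record, the example below uses Python's standard logging module to emit JSON-formatted entries; the field names and sample values are illustrative.

# Illustrative structured log entry capturing the kinds of fields listed above
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(user_id: str, activity: str, device: str, outcome: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "activity": activity,
        "device": device,
        "outcome": outcome,           # e.g. successful vs. rejected access attempt
    }
    logging.info(json.dumps(record))

log_event("jdoe", "logon", "workstation-42", "success")
log_event("jdoe", "open:payroll.xlsx", "workstation-42", "rejected")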
Logging and monitoring the health of the information environment is essential to identifying inefficient or

improperly performing systems, detecting compromises and providing a record of how systems are used.

Robust logging practices provide tools to effectively correlate information from diverse systems to fully

understand the relationship between one activity and another.

Log reviews are an essential function not only for security assessment and testing but also for identifying

security incidents, policy violations, fraudulent activities and operational problems near the time of occurrence.

Log reviews support audits – forensic analysis related to internal and external investigations – and provide

support for organizational security baselines. Review of historic audit logs can determine if a vulnerability

identified in a system has been previously exploited.

It is helpful for an organization to create components of a log management infrastructure and determine how

these components interact. This aids in preserving the integrity of log data from accidental or intentional

modification or deletion and in maintaining the confidentiality of log data.

Controls are implemented to protect against unauthorized changes to log information. Operational problems

with the logging facility are often related to alterations to the messages that are recorded, log files being edited

or deleted, and storage capacity of log file media being exceeded. Organizations must maintain adherence to

retention policy for logs as prescribed by law, regulations and corporate governance. Since attackers want to
hide the evidence of their attack, the organization’s policies and procedures should also address the

preservation of original logs. Additionally, the logs contain valuable and sensitive information about the

organization. Appropriate measures must be taken to protect the log data from malicious use.
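One common way to detect unauthorized changes to stored log data is to chain a cryptographic hash from one entry to the next, so that editing or deleting an earlier entry breaks every hash that follows. A minimal sketch using SHA-256 from Python's standard library is shown below; it is illustrative and not a complete log-protection design.

# Illustrative hash chain over log entries to detect tampering (not a complete log-protection design)
import hashlib

def chain_hashes(entries: list) -> list:
    digests, previous = [], "0" * 64          # fixed, well-known starting value
    for entry in entries:
        previous = hashlib.sha256((previous + entry).encode("utf-8")).hexdigest()
        digests.append(previous)
    return digests

original = ["logon jdoe", "open payroll.xlsx", "logoff jdoe"]
tampered = ["logon jdoe", "open nothing-to-see.txt", "logoff jdoe"]

print(chain_hashes(original)[-1] == chain_hashes(tampered)[-1])   # False - the chain reveals the edit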

Event list, event detail and raw log views (the raw log also appears at the bottom of the event detail).

Data Security Event Example

Here is a data security event example. It’s a raw log, and it is one way to see whether someone
tried to break into a secure file and hijack the server. There are other systems now that are
more user-friendly, but security engineers become very familiar with these codes and can figure
out exactly who was trying to log in, and whether they were using a secure port or a questionable
port to try to penetrate the site.

Information security is not something you just plug in as needed. You can patch or update a

system that already exists, but if the system is not secure to begin with, you can’t simply plug

in something to protect it. Security needs to be planned from the very beginning, even before

the data is introduced into the network.

Event Logging Best Practices
Different tools are used depending on whether the risk from the attack is from traffic coming into or leaving the

infrastructure. Ingress monitoring refers to surveillance and assessment of all inbound communications traffic

and access attempts. Devices and tools that offer logging and alerting opportunities for ingress monitoring

include:

• Firewalls
• Gateways
• Remote authentication servers
• IDS/IPS tools
• SIEM solutions
• Anti-malware solutions
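As a simple illustration of the kind of alerting these ingress tools perform, the sketch below counts rejected logon attempts per source address and flags any address that exceeds a threshold. The log format, field names and threshold are assumptions for illustration.

# Illustrative ingress alert: flag source addresses with repeated failed logons
# (log format and threshold are assumptions)
from collections import Counter

FAILED_LOGON_THRESHOLD = 5

def suspicious_sources(events: list) -> list:
    failures = Counter(e["src_ip"] for e in events if e["event"] == "logon_failed")
    return [ip for ip, count in failures.items() if count >= FAILED_LOGON_THRESHOLD]

sample = [{"src_ip": "203.0.113.9", "event": "logon_failed"}] * 6 + \
         [{"src_ip": "198.51.100.4", "event": "logon_failed"}]
print(suspicious_sources(sample))   # ['203.0.113.9']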
Egress monitoring is used to regulate data leaving the organization’s IT environment. The term currently used

in conjunction with this effort is data loss prevention (DLP) or data leak protection. The DLP solution should

be deployed so that it can inspect all forms of data leaving the organization, including:

• Email (content and attachments)


• Copy to portable media
• File Transfer Protocol (FTP)
• Posting to web pages/websites
• Applications/application programming interfaces (APIs)
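A heavily simplified sketch of the pattern-matching step a DLP tool might apply to outbound content is shown below. The regular expression only looks for text resembling a 16-digit card number and is for illustration; real DLP solutions use far more sophisticated detection and cover many more data types and channels.

# Heavily simplified egress content check (real DLP detection is far more sophisticated)
import re

CARD_LIKE = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # roughly matches 16-digit card-like numbers

def block_outbound(message: str) -> bool:
    """Return True if the outbound message appears to contain card-like data and should be held."""
    return bool(CARD_LIKE.search(message))

print(block_outbound("Quarterly report attached."))               # False
print(block_outbound("Card: 4111 1111 1111 1111, exp 12/29"))     # True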


Module 3: Understand Best Practice Security Policies

L5.3.1 Explain the application of common security policies.


An organization’s security policies define what “security” means to that organization, which in almost all
cases reflects the tradeoff between security, operability, affordability and potential risk impacts. Security
policies express or impose behavioral or other constraints on the system and its use. Well-designed systems
operating within these constraints should reduce the potential of security breaches to an acceptable level.
Security governance that does not align properly with organizational goals can lead to implementation of
security policies and decisions that unnecessarily inhibit productivity, impose undue costs and hinder
strategic intent.
