
CCSK Official Guide

Tuesday, March 9, 2021 7:42 PM

Chapter 1

• In fact, taking an existing application and simply moving it to a cloud provider without any changes will often reduce agility,
resiliency, and even security, all while increasing costs.

• The cloud service provider (CSP) creates and manages an offering that consists of pools of resources (Compute, Network,
Storage) that are accessed via controllers that communicate using application programming interfaces (APIs). It is through
these pools and API access that several essential characteristics of the cloud (such as self-service, elasticity, and resource
pooling) are possible.

Cloud definition

The key techniques to create a cloud are


a. Abstraction
b. Orchestration
We abstract the resources from the underlying physical infrastructure to create our pools, and use orchestration (and
automation) to coordinate carving out and delivering a set of resources from the pools to the consumers.

This is the difference between cloud computing and traditional virtualization; virtualization abstracts resources, but it typically
lacks the orchestration to pool them together and deliver them to customers on demand, instead relying on manual processes.

Multitenant: Clouds are multitenant by nature. Multiple different consumer constituencies share the same pool of resources but
are segregated and isolated from each other.
Segregation : Allows the cloud provider to divvy up resources to the different groups
Isolation : Ensures they can't see or modify each other's assets.

The Five Essential Characteristics of Clouds


1) Broad Network Access : means that all resources are available over a network, without any need for direct physical
access: the network is not necessarily part of the service.
2) Rapid Elasticity
3) Measured service
4) On-Demand Self Service
5) Resource Pooling

6) Multitenancy : The only addition by ISO/IEC 17788

Three Service Models


1) SaaS : Software as a Service
2) PaaS : Platform as a Service
3) IaaS : Infrastructure as a Service
We sometimes call this the SPI tier.

Four Deployment Models


1) Public
2) Private
3) Hybrid : The Cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain
unique entities but are bound together by standardized or proprietary technology that enables data and application
portability (e.g., cloud bursting for load balancing between clouds).
Hybrid is also commonly used to describe a non-cloud data center bridged directly to a cloud provider.
4) Community

• The key difference between a private and a community cloud is that the financial risk is shared across multiple
contractually trusted organizations in the community cloud.

The most important capabilities that are associated with the hybrid deployment model are portability and cloud bursting.
a. Portability is the ability to shift where a workload is executed—for example, creating a P2V image of a physical server
and moving that to a cloud environment.

b. Cloud bursting means leveraging a cloud provider to supply additional resources to meet additional load. In this
scenario, you could have a load balancer that directs incoming web traffic either to internal or to cloud-based
systems depending on current load, as sketched below.
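A minimal sketch of that cloud-bursting routing decision; the capacity threshold and pool names are illustrative assumptions, not any provider's API:

# Minimal cloud-bursting sketch: route to the internal pool until its
# capacity is reached, then overflow ("burst") to cloud-based systems.
INTERNAL_CAPACITY = 1000  # concurrent connections the data center can serve

def choose_pool(current_connections: int) -> str:
    """Send traffic internally until capacity is reached, then burst."""
    if current_connections < INTERNAL_CAPACITY:
        return "internal-pool"
    return "cloud-pool"  # overflow load is served by cloud-based systems

print(choose_pool(850))   # internal-pool
print(choose_pool(1200))  # cloud-pool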

• APIs are typically the underlying communications method for components within a cloud, some of which are exposed to the
cloud user to manage their resources and configurations. Most cloud APIs these days are REST (Representational State
Transfer) APIs, which run over the HTTP protocol, making them extremely well suited for internet services.
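As a hedged illustration of such a REST management API call: the endpoint, resource name, and token header below are assumptions for the sketch, not a specific provider's API.

import requests

# Hypothetical example: listing virtual machines through a cloud
# provider's REST management API over HTTPS.
API_BASE = "https://api.cloud.example.com/v1"  # illustrative endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"

def list_instances():
    # REST APIs run over HTTP(S); a GET request reads resource state.
    resp = requests.get(
        f"{API_BASE}/instances",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # responses are typically JSON documents

if __name__ == "__main__":
    for inst in list_instances().get("instances", []):
        print(inst["id"], inst["state"])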

• If an attacker gets into the management plane, they potentially have full remote access to your entire cloud deployment.

• The cloud is really what you make of it. You can simply migrate a server in a “like-for-like” fashion by performing a physical-
to-virtual (P2V) migration to create that very same system running in a cloud environment. You might think this would be
the easiest way to adopt cloud services, but the thing is, all you’ve really done is take a server that you have physical
control over and make a virtual image of it, so that now you are running a single point of failure in someone else’s physical
environment over which you have no control. The real power of the cloud is manifested when you understand and adopt
cloud-native models and adjust your architectures and controls to align with the features and capabilities of cloud
platforms. If architected properly, servers can be treated as cattle, not pets. (This expression is often used to describe a
pattern where servers are viewed as easily replaceable commodities, not “long-standing” resources, as they are in a
traditional environment.) The focus of workload management moves from “we need X servers” to “we need this
application to serve X concurrent connections.”

The 4 Logical Model Layers:


Different security focuses map to the different logical layers:


Application security maps to applistructure
Data security maps to infostructure
Infrastructure security maps to infrastructure

The key difference between cloud and traditional computing is the metastructure.
Metastructure This layer is the game-changing aspect of cloud. In this layer, you both configure and manage a cloud deployment
of any type. The single biggest thing you need to understand immediately about the difference between the cloud and traditional
IT is that there is no metastructure in your data center. It is within the metastructure logical layer that you build the virtual tools
required for a virtual world (the cloud).

• The most important security consideration is knowing exactly who is responsible for what in any given cloud project.

This shared responsibility model directly correlates to two recommendations:


1) Cloud providers should clearly document their internal security controls and customer security features.
i. Tool : Consensus Assessments Initiative Questionnaire (CAIQ)
2) Cloud users should, for any given cloud project, build a responsibilities matrix.
i. Tool : Cloud Controls Matrix (CCM)

• Simple Cloud security process model


○ Identify Requirements
○ Select provider service and deployment models
○ Define Architecture
○ Assess Security Controls
○ Identify control Gaps
○ Design and implement Controls
○ Manage Changes

• There are two types of hypervisors of note: Type 1 hypervisors are installed directly onto the physical server (such as
VMware ESXi, Xen, or KVM), and Type 2 hypervisors are installed on top of the operating system already running on a
server (such as VMware Workstation, VMware Workstation Player, or Oracle VM VirtualBox). I can’t imagine any cloud
service provider using anything other than a Type 1 hypervisor.

STAR Registry
You should be aware of a couple of things about the whole STAR program. The CAIQ entries are considered "self-assessments."
Each self-assessment is referred to as a "Level 1" STAR entry.
a. STAR Level 1: Self Assessment There is no oversight or third-party inspection regarding what is listed here and what is
actually the truth. That said, I like to think that no vendor would be careless enough to list “mistruths” in their STAR entry,
because this would eventually be discovered and the vendor would likely suffer tremendous reputational damage.

b. STAR Level 2: Third-Party Certification There are actually two ways providers can be listed as having achieved STAR Level 2:
STAR Certification or STAR Attestation. The STAR Certification requires that ISO 27001 requirements are followed in a cloud
environment being consumed. The STAR Attestation requires that Service Organization Control 2 (SOC 2) criteria are
followed. Both require an independent third party having assessed the provider’s environment.
c. STAR Level 3: Continuous Auditing This level is a placeholder for future continuous audit. At this time, it is unavailable to
any providers because the standard is still being developed.

d. EXAM TIP Remember that the STAR Registry contains CAIQ entries that are filled out by vendors and uploaded to the Cloud
Security Alliance without any third-party review or assessment.

______________________________________________________________________________________________

Chapter 2
Governance and Enterprise Risk Management

The Primary issue to remember when governing cloud computing is that an organization can never outsource responsibility for
governance, even when using external providers.

There are really four areas, or roles, that play a part in a strong governance and risk management program.
The roles involved are
a. governance,
b. enterprise risk management,
c. information risk management, and
d. information security.

Tools for Cloud Governance:


1) Contracts : The Primary tool of Governance is the contract between a cloud provider and a cloud customer.
2) Supplier (Cloud provider) Assessments
3) Compliance reporting : Includes all the documentation on a provider's internal and external compliance
assessments.

The legally binding contract agreement is your only "guarantee" of any level of service or commitment. Simply put, if it's not in
the contract, it doesn't exist. Notice the quotes around the term "guarantee."

Terms and Conditions This is the main document that describes aspects of the service, how customer data will be used,
termination clauses, warranties, applicable laws, and other fascinating items written by the provider's lawyers to protect them as
much as the law will allow.

The Cloud Security Alliance STAR Registry is an assurance program and documentation registry for cloud provider assessments
based on the CSA Cloud Controls Matrix and Consensus Assessments Initiative Questionnaire.

Cloud Risk Management tools:


The following processes help form the foundation of managing risk in cloud computing deployments.
Supplier Assessment
• Request Documentation
• Review Security program
• Review legal Regulatory, industry, Contract obligations
• Evaluate Service based on Context of information assets involved
• Evaluate provider (finances, reputation, insurers, outsourcers)

You can choose the third-party provider to which you will outsource the building and operational responsibilities of a cloud
environment, but you can never outsource accountability. This is why proper governance is critical to your firm at the highest
levels.



All of the standards have one issue in common, however—the scope of the engagement. Take International Organization for
Standardization/International Electrotechnical Commission (ISO/IEC) certification, for example. The scope of the ISO/IEC audit could
be only the IT department. Where does that leave you if you're looking to use a cloud provider's ISO/IEC certification to make
your provider selection decisions? It leads you to understand that merely being "certified" doesn't mean anything if the service
you are consuming is not within the scope of the audit.

A provider being "PCI compliant" does not mean your applications are automagically "PCI compliant." This is a perfect example of
the shared responsibility of all cloud models, and you will need to assess your applications if they are part of the PCI cardholder
data environment.

SOC 1 This SOC report is used for Internal Control over Financial Reporting (ICFR) and is used for entities that audit financial
statements.

SOC 2 This SOC report is titled "Report on Controls at a Service Organization Relevant to Security, Availability, Processing
Integrity, Confidentiality, or Privacy." It deals with ensuring that controls at an organization are relevant for security, availability,
and processing integrity of systems.

SOC 3 This publicly available high-level SOC report contains a statement from an independent CPA that a SOC engagement was
performed, plus the high-level result of the assessment (for example, it could indicate that the vendor statement of security
controls in place is accurate).

The AICPA thought it would be a great idea to have different report types as well:
• Type 1 A point-in-time look at the design of the controls
• Type 2 An inspection of the operating effectiveness of the controls
As you can see, you'll probably want to receive and review the Type 2 report because it actually tests the controls in question.
The bottom line here is that as a security professional, you want to get access to a provider's SOC 2, Type 2, report. It will offer
great detail into the security controls in place, tests performed, and test results.

TIP Perhaps you find yourself in a situation with identified risk, and you want to transfer that risk to another party.
Cyberinsurance is often used in this case. Here's the trick with insurance, though: it covers only primary damages, not secondary
damages such as loss of reputation.

Remember that moving to the cloud doesn’t change your risk tolerance; it just changes how risk is managed.

According to the CSA Guidance, the likelihood of a fully negotiated contract is lower with PaaS than with either of the other
service models. That's because the core driver for most PaaS providers is to deliver a single capability with very high efficiency.

NOTE Inflexible contracts are a natural property of multitenancy.

Private Cloud Governance in a private cloud boils down to one very simple question: which party owns and manages the private
cloud? You could call a provider and have them spin up and manage a private cloud for you, or you could have your people install
the private cloud automation and orchestration software to turn your data center into a "private cloud." In the event that your
company owns and operates the private cloud, nothing changes. If, on the other hand, you have outsourced the build and
management of your private cloud, you have a hosted private cloud, and you have to treat the relationship as you would any
other third-party relationship.

EXAM TIP If you are asked a question about governance in a private cloud, pay attention to who owns and manages the
infrastructure. An outsourced private cloud can incur much more change than an insourced one.

Hybrid and Community Cloud With the hybrid and community cloud deployment models, you have two areas of focus for
governance activities: internal and the cloud. With hybrid clouds, the governance strategy must consider the minimum common
set of controls that make up the cloud service provider's contract and the organization's internal governance agreements. In
both hybrid and community models, the cloud user is either connecting two cloud environments or a cloud environment and a
data center. For community clouds specifically, governance extends to the relationships with those organizations that are part of
the community that shares the cloud, not just the provider and the customer. This includes community membership relations
and financial relationships, as well as how to respond when a member leaves the community.

Assessing Cloud Service Providers You are dealing with two types of risk management when you assess the cloud: the risk
associated with use of third-party cloud service providers and the risk associated with the implementation of your systems in the
cloud.

_______________________________________________________________________________________________

Chapter 3

Legal Issues, Contracts, and Electronic Discovery

Australia : Two key Laws


1) Privacy Act of 1988 (Privacy Act)
2) Australian Consumer Law (ACL)

Japan :
1) Act on the Protection of Personal Information (APPI)

EU :
1) GDPR : General Data Protection Regulation (2016)
GDPR became enforceable as of May 25, 2018.

GDPR requires companies to report that they have suffered a breach of security; breaches must be reported within 72 hours of
the company becoming aware of the incident.

U.S. Federal Laws


1) GLBA : Gramm-Leach-Bliley Act
2) HIPAA : Health Insurance Portability and Accountability Act of 1996
3) COPPA : Children's Online Privacy Protection Act of 1998

Sometimes data processed by the company might be so sensitive or confidential that it should not be transferred to a cloud
service, or the transfer might require significant precautions. This might be the case, for example, for files that pertain to high-
stakes projects such as R&D road maps, or plans for an upcoming IPO (initial public offering), merger, or acquisition.

Usually, a contract that cannot be negotiated is likely to lack some of the protections that the typical customer would need. In
this case, the customer should weigh the risks of forgoing these protections against the potential benefits.

Forensics: Bit-by-bit imaging of a cloud data source is generally difficult or impossible. For obvious security reasons, providers are
reluctant to allow access to their hardware, particularly in a multitenant environment where a client could gain access to other
clients' data. Even in a private cloud, forensics may be extremely difficult, and clients may need to notify opposing counsel or the
courts of these limitations.

EXAM TIP Of the three models, you should get your head around the role of the controller/custodian and remember that
jurisdiction is very important to determine applicable laws.

The naming of this role depends on the location you're in. In the United States, it's called the "data custodian"; in Europe, it's
called the "data controller." Either way, this entity is legally accountable for properly securing end-user data. As an example of a
data custodian/controller, if your company uses an Infrastructure as a Service (IaaS) provider to store your customer data, you
are the data custodian/controller of that end-user data. The custodian/controller must operate in accordance with the laws of
the jurisdiction in which the company operates.

After all, only legal counsel has any authority to advise executives on the legal risks involved with anything your company d oes,
right? The bottom line is this: the location where an entity operates is critical knowledge that plays an important role in
determining due diligence requirements and legal obligations.

A treaty is an agreement between two political authorities.

You may have heard of the International Safe Harbor Privacy Principles, otherwise known as the Safe Harbor agreement, between
the United States and the European Union. This treaty basically allowed companies to commit voluntarily to protecting EU
citizens’ data stored in the United States the same way that it would protect the data if it were held in the European Union. This
agreement was terminated in 2015, however, and was replaced shortly afterward with a new agreement, the EU-US Privacy
Shield. Privacy Shield operates in much the same way as Safe Harbor, in that Privacy Shield allows for personal data transfer
and storage between the European Union and the United States.

Australia amended its 1988 Privacy Act in February 2017 to require companies to notify affected Australian residents and the
Australian Information Commissioner in the event of a security breach.
A breach of security must be reported under two conditions:
if there is unauthorized access or disclosure of personal information that would be likely to result in serious harm, or
if personal information is lost in circumstances where unauthorized access or disclosure is likely to occur, and if it
did occur, it would be likely to result in serious harm to any of the individuals to whom the information relates.

Cross-border data transfer restrictions Personal data cannot be transferred outside the EU/EEA to a processor or
custodian/controller located in a country that does not ensure similar protection of personal data and privacy rights. A company
can prove that it will be offering the "adequate level of protection" required by executing Standard Contractual Clauses (SCC),
signing up to the EU-US Privacy Shield, obtaining certification of Binding Corporate Rules (BCRs), or complying with an approved
industry code of conduct or approved certification mechanism. In rare cases, the transfer may be allowed with the explicit,
informed consent of the data subject, or if other exceptions apply.

For a state breach disclosure law, I like to point out Washington State’s Breach Notification Law (enacted in 2015). This law
states that any breach that is reasonably expected to impact more than 500 Washington State residents must be reported to
the Washington State attorney general within 45 days following discovery.

Without periodic testing of both cloud services and your use of cloud services, you may be taking on unacceptable risk without
even knowing it.

If a judge deems data was purposefully deleted or otherwise destroyed, he or she may issue an instruction of "adverse inference"
to the jury. This means the jury must consider the data as having been purposefully deleted and will assume it contained worst-case
damaging evidence.

In order for evidence to be considered admissible in a court of law, it must be considered accurate and authenticated. This is true
regardless of where such evidence is held.

If data cannot be authenticated, it cannot be considered admissible evidence in a court of law (barring any extenuating
circumstances). The cloud does change how chain of custody is ensured. Take an example of a cloud provider that may allow you
to export data, but any metadata is stripped as part of the process. The metadata may be required to validate that the data is
indeed genuine and therefore admissible in a court of law.

Direct access may be impossible for both the customer and the SaaS provider (for example) you have a contract with if the
provider, in turn, is using a third-party IaaS to store and process data. After all, the SaaS provider in this example is just another
customer of the IaaS provider and may not have any access to the hardware or facilities. As such, in this example, a requesting
party may need to negotiate directly with the IaaS provider for any access.

Native Production When digital evidence is requested, it is expected to be produced in standard formats such as PDF or CSV. If a
cloud provider can export data from their highly proprietary system in a proprietary format only, this data may be of little use to
the requesting party.

FRCP clause 26(b)(2)(B) permits data not being presented as evidence when it is not reasonably accessible. This may be
applicable, for instance, when a bit-level copy of a drive is requested but the data is stored in a cloud environment.

________________________________________________________________________________________________

Chapter 4
Compliance and Audit Management

Both the cloud provider and customer have responsibilities, but the customer is always ultimately responsible for their own
compliance. These responsibilities are defined through contracts, audits/assessments and specifics of the compliance
requirements.


Cloud customers, particularly in public cloud, must rely more on third-party attestations of the provider to understand their
compliance alignment and gaps.

Pass-through Audits : A pass-through audit is a form of compliance inheritance. In this model all or some of the cloud provider's
infrastructure and services undergo an audit to a compliance standard.

Organizations have the same responsibility in traditional computing, but the cloud dramatically reduces the friction of these
potentially international deployments; e.g., a developer can potentially deploy regulated data in a non-compliant country without
having to request an international data center and sign off on multiple levels of contracts, should the proper controls not be
enabled to prevent this.

It is important to remember that attestations and certifications are point-in-time activities. An attestation is a statement about an
assessment over a period of time and may not be valid at any future point. Compliance, audit, and assurance should be continuous.
They should not be seen as merely point-in-time activities, and many standards and regulations are moving more toward this
model. This is especially true in cloud computing, where both the provider and customer tend to be in more-constant flux and are
rarely ever in a static state.

EXAM TIP Remember that audits are a key tool to prove or disprove compliance.

Compliance is both expensive and mandatory, but the costs associated with noncompliance could be devastating to a company,
and executives in a company charged with noncompliance could face jail time as a result.

Ownership of data Believe it or not, some cloud providers have clauses in their contracts that transfer ownership of any data
uploaded by a customer to the provider. In turn, the customer gets unlimited access to this data, but the provider is allowed to
do whatever they please with said data, including retaining it upon contract termination and/or selling it to others. This is more
common with “free” versions of SaaS products.

Continuous Compliance I want to take a moment to make a distinction here between continuous monitoring and continuous
auditing. Although they are often considered joint activities (some refer to it as CM/CA), they really aren't the same thing. One
commonality they share is that "continuous" doesn't necessarily mean real-time analysis. NIST defines Information Security
Continuous Monitoring (ISCM) thusly: "Security controls and organizational risks are assessed and analyzed at a frequency
sufficient to support risk-based security decisions to adequately protect organization."

Techniques to perform continuous auditing are referred to as computer-assisted audit techniques. The bottom line is that
“continuous” does not mean real-time 24×7×365. It addresses performing something at an appropriate frequency, and that is up
to the system owner.

The goal of STAR Continuous is to address the security posture of cloud providers between point-in-time audits.

How do providers address this continuous compliance initiative? Automation, that's how! Remember that cloud environments
are automated environments to begin with, so why can't testing of the environment be automated as well and be executed by
the provider at an expected testing frequency between the point-in-time audits they currently provide to customers? Of course,
not everything can be automated, so additional manual activities will have to be performed as well, but through the use of this
STAR Continuous approach, customers will be able to have greater confidence in the security controls in the provider's
environment.

ISO 19011:2018 defines audit as a "systematic, independent and documented process for obtaining audit evidence [records,
statements of fact, or other information that is relevant and verifiable] and evaluating it objectively to determine the extent to
which the audit criteria [a set of policies, procedures, or requirements] are fulfilled."

First are the terms “systematic” and “repeatable.”


Second, a key term regarding audits is “independent.”


Audits generally include some form of testing. The two types of testing are compliance testing and substantive testing.

a. Compliance testing is used to determine whether controls have been properly designed and implemented. This type of
testing would also include tests to determine whether the controls are operating properly.
b. Substantive testing, on the other hand, looks at the accuracy and integrity of transactions that go through processes and
information systems.

Most CSPs will use two primary audit standards to demonstrate that appropriate security is in place for their cloud services:
Service Organization Control (SOC) and International Organization for Standardization (ISO) standards.
a. SOC 1 is for financial reporting,
b. SOC 2 is for security controls,
c. Type 1 is a point-in-time look at control design, and
d. Type 2 looks at operating effectiveness of controls over a period of time.
You should understand a few more items before reviewing the SOC 2 reports supplied by your providers.

An attestation is a declaration that something exists or is true. Certification is an official document attesting to a status or level
of achievement.

SOC 2 report is a primary example of an attestation that's widely used by CSPs. Primary examples of certifications used by CSPs
are ISO/IEC 27001 and ISO/IEC 27017 (among others). Auditing changes dramatically in a cloud environment, because you will
most likely be consuming these aforementioned third-party attestations and certifications rather than performing your own
audits (remember these will be viewed as a security issue by the provider). In many instances, these attestation reports will be
available only under a nondisclosure agreement (NDA). This is the case with SOC 2 reports, for example. This is a condition of the
AICPA itself, not the provider.

Without a right-to-audit clause, you will be reliant on published reports such as SOC and ISO/IEC (aka third-party audit).

The concept of immutable servers states that you don't patch servers; rather, you update an image and replace the
vulnerable running server instance with the newly patched server image.

Attestations are legal statements from a third party. They are used as a key tool when customers evaluate and work with cloud
providers, because customers often are not allowed to perform their own assessments. Attestations differ from audits in that
audits are generally performed to collect data and information, whereas an attestation checks the validity of this data and
information to an agreed-upon procedure engagement (such as SOC).

_______________________________________________________________________________________________________

Chapter 5
Information Governance

Information governance means ensuring that the use of data and information complies with organizational policies, standards,
and strategy, including regulatory, contractual, and business objectives.

Aspects of having data stored in the cloud that have an impact on information and data governance requirements:
1) Multitenancy
2) Shared Security Responsibility
a. Ownership
b. Custodianship
3) Jurisdictional boundaries and Data Sovereignty.
4) Compliance, Regulations, and privacy policies.
5) Destruction and removal of data.

Cloud computing affects most data governance domains:


1) Information Classification
2) Information Management policies.
3) Location and jurisdiction policies.
4) Authorization
5) Ownership
6) Custodianship
7) Privacy
8) Contractual controls
9) Security Controls

Data Security Life Cycle


Six Phases:
Create --> Store --> Use --> Share --> Archive --> Destroy

Due to all the potential regulatory, contractual, and other jurisdictional issues, it is extremely important to understand both the
logical and physical locations of data.

Entitlements: Once you know where the data lives and how it moves, you need to know who is accessing it and how.
There are two factors:

a. Who accesses the data?

b. How can they access it? (Device and channel.)

Functions performed with the data: there are three things that can be done with given data (a minimal entitlement-check sketch
follows this list):
a. Read
b. Process
c. Store
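A minimal sketch tying the two factors above (who accesses, and via what device/channel) to the three functions; the actors, devices, and rules here are illustrative assumptions:

# Sketch of an entitlement check: (actor, device) pairs map to the
# set of functions (read, process, store) they are permitted to perform.
ENTITLEMENTS = {
    ("analyst", "managed-laptop"): {"read", "process"},
    ("backup-service", "server"): {"read", "store"},
}

def is_permitted(actor: str, device: str, function: str) -> bool:
    return function in ENTITLEMENTS.get((actor, device), set())

print(is_permitted("analyst", "managed-laptop", "read"))  # True
print(is_permitted("analyst", "personal-phone", "read"))  # False: unknown channel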

FIPS 199 calls for use of a high-water mark implementation: if the impact to confidentiality is high but the impact to integrity and
availability is low, the data classification is considered to be high.
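A tiny illustration of the high-water-mark rule, with the three impact ratings as plain strings:

# FIPS 199 high-water mark: the overall classification is the highest
# impact rating across confidentiality, integrity, and availability.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(confidentiality: str, integrity: str, availability: str) -> str:
    return max((confidentiality, integrity, availability), key=LEVELS.get)

# High confidentiality impact with low integrity/availability -> "high"
print(high_water_mark("high", "low", "low"))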

Data classification will often rely on metadata (data about the data) such as tags and labels that define the classification level of
the data and how it should be handled and controlled.
There are three types of data classification approaches out there today:
• User-based Data owners are expected to select the appropriate classification for a particular data set. For example,
a user sending an e-mail would be required to select a classification level in a pull-down menu in Microsoft Outlook
prior to it being sent.
• Content-based This classification approach inspects and interprets data looking for known sensitive data. In this
scenario, the classification program will scan a document, looking for keywords. Classification is automatically applied
based on the content of the document.
• Context-based This classification approach looks at application, storage location, or the creator of the data as an
indicator of sensitive information.
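As a hedged illustration of the content-based approach: the patterns and labels below are assumptions for the sketch; real classification products use far richer detection.

import re

# Illustrative content-based classifier: scan text for keywords and
# patterns that suggest sensitive data, and apply a label automatically.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "Confidential"),  # SSN-like pattern
    (re.compile(r"(?i)\b(salary|merger|acquisition)\b"), "Internal"),
]

def classify(document_text: str) -> str:
    for pattern, label in RULES:
        if pattern.search(document_text):
            return label
    return "Public"

print(classify("Employee SSN: 123-45-6789"))  # Confidential
print(classify("Lunch menu for Friday"))      # Public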

Regarding HIPAA specifically, companies that work with protected health information (PHI) such as healthcare providers and
insurance companies (referred to as covered entities) can use only providers that are "business associates" with which they have
a written business associate agreement. What are the potential damages for a covered entity storing data with a provider
without having a signed business associate agreement in place? Here's an example: In 2016, North Memorial Health Care in
Minnesota was slapped with a $1.55 million fine for not having a business associate agreement in place with a provider.

Remember that extending information governance to include cloud services requires both contractual and security controls.

Security controls are considered a tool to implement data governance. Policies themselves don't do anything to implement data
governance. Yes, they are absolutely needed, but they are statements (directive controls), not actual preventative controls to
stop someone from doing something. Classification is also required for strong governance, but again, classification itself isn't
going to stop an actor from performing a function.

A device is not considered an actor.


_______________________________________________________________________________________________________

Chapter 6
Management Plane and Business Continuity

Management plane :
a. The Management plane refers to the interfaces for managing your assets in the cloud.
b. API and web consoles are the way the management plane is delivered.
c. Cloud providers and platforms will also often offer software Development kits (SDKs) and command line interfaces to
make integration with their APIs easier.
d. The Management plane is the single most significant security difference between traditional infrastructure and cloud
computing. It is the interface to connect with the metastructure and configure much of the cloud.
e. The management plane is also where you implement most disaster recovery (DR) and business continuity planning
(BCP) options.
f. Proper implementation and management of restricting access to the management plane is job number one in
securing any cloud implementation.
g. The cloud provider is responsible for ensuring the management plane is secure and that the necessary security
features are exposed to the cloud user, such as granular entitlements to control what someone can do even if they
have management plane access.
h. The cloud user is responsible for properly configuring their use of the management plane, as well as for securing and
managing their credentials.

Cloud platforms can be incredibly resilient, but single cloud assets are typically less resilient than in the case of traditional
infrastructure. This is due to the inherently greater fragility of virtualized resources running in highly complex environments.

No matter the platform or provider there is always an account owner with super-admin privileges to manage the entire
configuration. This should be enterprise-owned (not personal), tightly locked down, and nearly never used.

All privileged user accounts should use multi-factor authentication (MFA). If possible, all cloud accounts should use MFA. It's one
of the single most effective security controls to defend against a wide range of attacks. MFA is just as important for SaaS as it is
for IaaS.

At a high level, there are five major facets to building and managing a secure management plane:
1) Perimeter Security
2) Customer authentication
3) Internal communication and credential passing
4) Authorization and entitlements
5) Logging, monitoring and alerting

BC/DR must account for the entire logical stack:


1) Metastructure : Cloud configurations should be backed up in a restorable format.
2) Software-Defined infrastructure : Allows you to create templates to configure all aspects of a cloud deployment.
3) Infrastructure : Re-architect your deployment; lift and shift doesn't enable you to utilize full cloud features.
4) Infostructure : Data synchronization over multiple locations.
5) Applistructure : When real-time switching isn't possible, design your application to gracefully fail in case of a service
outage.

Chaos Engineering is often used to help build resilient cloud deployments. Chaos Engineering uses tools to selectively degrade
portions of the cloud to continuously test business continuity.
This is often done in production, not just in test environments, and it forces engineers to assume failure instead of viewing it as
only a possible event. A minimal sketch of the idea follows.
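A minimal Chaos Monkey-style sketch using the AWS SDK (boto3), assuming opted-in instances carry a "chaos=enabled" tag; this illustrates selectively degrading a deployment, not a hardened tool.

import random
import boto3

ec2 = boto3.client("ec2")

def terminate_random_tagged_instance():
    # Only instances explicitly opted in to chaos testing are candidates.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos", "Values": ["enabled"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instances:
        return None
    victim = random.choice(instances)
    # Resilience test: the application stack should survive this loss.
    ec2.terminate_instances(InstanceIds=[victim])
    return victim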

SaaS may often be the biggest provider outage concern, due to total reliance on the provider.

A Security Key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows
the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key
works without the need for any special software drivers.



Once a device is enrolled for a specific Web site that supports Security Keys, the user no longer needs to enter their
password at that site (unless they try to access the same account from a different device, in which case it will ask the user
to insert their key).

NOTE Recovery of a system doesn’t necessarily mean 100 percent recovery. It could mean getting the system back up to 60
percent capacity, or it could involve creating a brand new system (or a new normal, if you will).

BCP is about the people and operations, while DR is about technology.

_______________________________________________________________________________________________________

Chapter 7
Infrastructure Security

There are two major categories of network virtualization commonly seen in cloud computing today:
1) Virtual Local Area Networks (VLANs): VLANs leverage existing network technology implemented in most network
hardware.
2) Software Defined Networking (SDN) : A more complete abstraction layer on top of networking hardware, SDNs
decouple the network control plane from the data plane. This allows us to abstract networking from the traditional
limitations of a LAN.
There are multiple implementations, including standards-based and proprietary options. Depending on the
implementation, SDN can offer much higher flexibility and isolation; for example, multiple segregated, overlapping IP
ranges for a virtual network on top of the same physical network. Implemented properly, and unlike standard VLANs,
SDNs provide effective security isolation boundaries. SDNs also typically offer software definition of arbitrary IP
ranges, allowing customers to better extend their existing networks into the cloud. If the customer needs the
10.0.0.0/16 CIDR (Classless Inter-Domain Routing) range, an SDN can support it, regardless of the underlying network
addressing. It can typically even support multiple customers using the same internal networking IP address blocks.

An SDN may use packet encapsulation so that virtual machines and other "standard" assets don't need any changes to
their underlying network stack. The virtualization stack takes packets from standard operating systems (OS)
connecting through a virtual network interface, and then encapsulates the packets to move them around the actual
network. The virtual machine doesn't need to have any knowledge of the SDN beyond a compatible virtual network
interface, which is provided by the hypervisor. A conceptual sketch of this encapsulation is below.
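A conceptual sketch using the VXLAN header layout from RFC 7348: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI) is prepended to the original (inner) frame, and the result travels inside an ordinary UDP packet on the physical network. This illustrates the wrapping only, not a full SDN datapath.

import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    # First word: I-flag set (0x08) plus reserved bits; second word: VNI
    # in the top 24 bits plus a reserved byte (per RFC 7348).
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame  # the guest OS never sees this outer wrapping

wrapped = vxlan_encapsulate(b"...original ethernet frame...", vni=5001)
print(len(wrapped))  # 8-byte header + inner frame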

Traditional network intrusion detection systems, where communications between hosts are mirrored and inspected by a
virtual or physical intrusion detection system, will not be supported in cloud environments; customer security tools need
to rely on an in-line virtual appliance or a software agent installed in instances. This creates either a chokepoint or
increased processor overhead, so be sure you really need that level of monitoring before implementing.

In a cloud environment, IP addresses will change far more quickly than on a traditional network, which security tools must
account for. Ideally, they should identify assets on the network by a unique ID, not an IP address or network name.

Every networking appliance (such as a router or switch) contains three planes—management, control, and data—that perform
different functions.

Management plane This plane is used to access, configure, and manage the networking device itself (for example, via SSH or
SNMP).

Control plane This plane establishes how network traffic is controlled, and it deals with initial configuration and is
essentially the “brains” of the network device. This is where you configure routing protocols (such as Routing Information
Protocol [RIP], Open Shortest Path First [OSPF], and so on), spanning tree algorithms, and other signaling processes.

Data plane This is where the magic happens! The data plane carries user traffic and is responsible for forwarding packets
from one interface to another based on the configuration created at the control plane.



On the positive side, software-defined networks enable new types of security controls, often making it an overall gain for
network security:
• Isolation is easier. It becomes possible to build out as many isolated networks as you need without constraints of physical
hardware. For example, if you run multiple networks with the same CIDR address blocks, there is no logical way they can
directly communicate, due to addressing conflicts. This is an excellent way to segregate applications and services of
different security contexts.

SDN Firewalls : SDN firewalls are typically policy sets that define ingress and egress rules that can apply to single assets or
groups of assets, regardless of network location (within a given virtual network). For example, you can create a set of
firewall rules that apply to any asset with a particular tag.
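A minimal sketch of how such tag-based rule evaluation might look; the tags, ports, and rule structure are illustrative assumptions, not a particular SDN product's policy format.

# SDN-style firewall policy: rules attach to assets by tag, not by
# network location, and evaluation defaults to deny.
POLICIES = [
    {"tag": "web", "direction": "ingress", "port": 443, "action": "allow"},
    {"tag": "db", "direction": "ingress", "port": 5432, "source_tag": "web", "action": "allow"},
]

def is_allowed(asset_tags, direction, port, source_tags=()):
    for rule in POLICIES:
        if (rule["tag"] in asset_tags
                and rule["direction"] == direction
                and rule["port"] == port
                and rule.get("source_tag") in (None, *source_tags)):
            return rule["action"] == "allow"
    return False  # default deny

print(is_allowed({"web"}, "ingress", 443))                       # True
print(is_allowed({"db"}, "ingress", 5432, source_tags={"web"}))  # True
print(is_allowed({"db"}, "ingress", 5432))                       # False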

Many network attacks are eliminated by default, such as ARP spoofing and other lower-level exploits, beyond merely
eliminating sniffing. This is due to the inherent nature of the SDN and the application of more software-based rules and
analysis in moving packets.

The terms "segmentation" and "isolation" are often used interchangeably but are very different things. In a networking context,
segmentation refers to partitioning a network into smaller networks (a broadcast domain). In a broadcast domain, all systems
that are assigned to that VLAN will receive all broadcast traffic. Isolation, on the other hand, restricts communication to the
intended destination machine only, as is the case in software-defined networking (SDN).

MicroSegmentation
Microsegmentation (also sometimes referred to as hypersegregation) leverages virtual network topologies to run more, smaller,
and more isolated networks without incurring the additional hardware costs that historically made such models prohibitive. Since
these entire networks are defined in software, without many of the traditional addressing issues, it is far more feasible to run
these multiple software-defined environments.

Remember that VLANs have a maximum of 4,096 network IDs (a 12-bit VLAN ID) versus roughly 16.7 million for VXLAN (a 24-bit
VXLAN Network Identifier).

Immutable Workloads
Auto-scaling and containers, by nature, work best when you run instances launched dynamically based on an image; those
instances can be shut down when no longer needed for capacity without breaking an application stack. This is core to the
elasticity of compute in the cloud. Thus, you no longer patch or make other changes to a running workload, since that wouldn't
change the image, and thus new instances would be out of sync with whatever manual changes you made on whatever is
running. We call these virtual machines immutable.

Immutable workload security benefits:


a. You no longer patch running systems or worry about dependencies and broken patch processes.
b. You can disable remote logins to running workloads.
c. It is much faster to roll out updated versions.
d. It is easier to disable services and whitelist applications/processes, since the instance should never change.
e. Most security testing can be managed during image creation, reducing the need for vulnerability assessment.

The biggest "gotcha" in using an immutable approach? Manual changes made to instances. When you go immutable, you cannot
allow any changes to be made directly to the running instances, because they will be blown away with the instances. Any and all
changes must be made to the image, and then you deploy the image.

Administrative access to the servers in an immutable environment should be restricted for everyone. Any required changes
should be made to an image, and that image is then used to build a new instance, as sketched below.
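A hedged boto3 sketch of that replace-don't-patch flow, assuming the new image was already built and security-tested; a real rollout would add load balancer health checks or an auto-scaling instance refresh.

import boto3

ec2 = boto3.client("ec2")

def replace_instance(old_instance_id: str, new_image_id: str,
                     instance_type: str = "t3.micro") -> str:
    # Launch a fresh instance from the newly built, already-patched image.
    new = ec2.run_instances(
        ImageId=new_image_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[new])
    # No SSH, no in-place patching: the old instance is simply discarded.
    ec2.terminate_instances(InstanceIds=[old_instance_id])
    return new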

Multitenancy has the greatest impact on cloud security, and for this reason, cloud providers need to make sure they have
very tight controls over the isolation capabilities within the environment.

Changes to Workload Security Monitoring and Logging


Security logging/monitoring is more complex in cloud computing:

IP addresses in logs won't necessarily reflect a particular workload, since multiple virtual machines may share the same IP
address over a period of time, and some workloads, like containers and serverless, may not have a recognizable IP address at
all. Thus, you need to collect some other unique identifiers in the logs to be assured you know what the log entries actually
refer to. These unique identifiers need to account for ephemeral systems, which may only be active for a short period of
time. Logs need to be offloaded and collected externally more quickly due to the higher velocity of change in the cloud. You
can easily lose logs in an auto-scale group if they aren't collected before the cloud controller shuts down an unneeded
instance.

Logging architectures need to account for cloud storage and networking costs. For example, sending all logs from instances
in a public cloud to on-premises Security Information and Event Management (SIEM) may be cost prohibitive, due to the
additional internal storage and extra Internet networking fees.
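One way to attach a durable unique identifier to every log entry is sketched below; the metadata URL is the AWS-style instance metadata endpoint (other clouds expose similar services, and newer AWS setups require IMDSv2 token headers).

import json
import logging
import requests

logging.basicConfig(level=logging.INFO)

def get_instance_id() -> str:
    # AWS-style instance metadata endpoint; returns this VM's unique ID.
    try:
        return requests.get(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=1
        ).text
    except requests.RequestException:
        return "unknown-instance"

INSTANCE_ID = get_instance_id()

def log_event(message: str, **fields):
    # Structured entry keyed by instance ID rather than IP address; ship
    # these to external collection quickly (e.g., via a log agent).
    entry = {"instance_id": INSTANCE_ID, "message": message, **fields}
    logging.getLogger("app").info(json.dumps(entry))

log_event("login_failure", source_ip="203.0.113.7", user="alice")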

_______________________________________________________________________________________________________

Chapter 8
Virtualization and Containers

Virtualization security in cloud computing still follows the shared responsibility model. The cloud provider will always be
responsible for securing the physical infrastructure and the virtualization platform itself. Meanwhile, the cloud customer is
responsible for properly implementing the available virtualized security controls and understanding the underlying risks, based on
what is implemented and managed by the cloud provider. For example, deciding when to encrypt virtualized storage, properly
configuring the virtual network and firewalls, or deciding when to use dedicated hosting vs. a shared host.

The primary security responsibilities of the cloud provider in compute virtualization are to enforce isolation and maintain a secure
virtualization infrastructure.

The cloud provider is also responsible for securing the underlying infrastructure and the virtualization technology from external
attack or internal misuse. This means using patched and up-to-date hypervisors that are properly configured and supported with
processes to keep them up to date and secure over time. The inability to patch hypervisors across a cloud deployment could
create a fundamentally insecure cloud when a new vulnerability in the technology is discovered.

In addition, cloud providers should assure customers that volatile memory is safe from unapproved monitoring, since important
data could be exposed if another tenant, a malicious employee, or even an attacker is able to access running memory.

Cloud user Responsibilities

The primary responsibility of the cloud user is to properly implement the security of whatever it deploys within the virtualized
environment.

First, the cloud user should take advantage of the security controls for managing their virtual infrastructure, including:
a) Security settings, such as identity management for the virtual resources. [This is identity management of who is allowed
to access the cloud management of the resource.]
b) Monitoring and logging.
c) Image asset management.
d) Use of dedicated hosting. [In some situations you can specify that your assets run on hardware dedicated only to you,
even on a multitenant cloud.]

Most cloud computing today uses SDN for virtualizing networks. (VLANs are often not suitable for cloud deployments since they
lack important isolation capabilities for multitenancy.)

SDN abstracts the network management plane from the underlying physical infrastructure, removing many typical networking
constraints. For example, you can overlay multiple virtual networks, even ones that completely overlap their address ranges, over
the same physical hardware, with all traffic properly segregated and isolated.

A SAN uses its own protocols to do its job. These protocols are Fibre Channel, Fibre Channel over Ethernet (FCoE), Internet SCSI
(iSCSI), and InfiniBand. All of these are purpose-built to transfer blocks of data at high speeds and have higher throughput than
Transmission Control Protocol (TCP) networks do.



The absolute top security priority is segregation and isolation of network traffic to prevent tenants from viewing another tenant's
traffic. At no point should one tenant ever be able to see traffic from another tenant unless this is explicitly allowed by both
parties (via cross-account permissions, for example). This is the most foundational security control for any multitenant network.

Cloud provider responsibilities.

a. The absolute top security priority of Cloud provider is segregation and isolation of network traffic to prevent tenants from
viewing another's traffic. This is the most foundational security control for any multitenant network.

b. The provider should disable packet sniffing or other metadata "leaks" that could expose data or configuration between
tenants.

c. Packet sniffing, even within a tenant's own virtual networks, should also be disabled to reduce the ability of an attacker to
compromise a single node and use it to monitor the network, as is common on non-virtualized networks.

d. Tagging or other SDN-level metadata should also not be exposed outside the management plane or a compromised host
could be used to span into the SDN itself.

e. All virtual networks should enable built-in firewall capabilities for cloud users without the need for host firewalls or external
products.

f. The provider is also responsible for detecting and preventing attacks on the underlying physical network and virtualization
platform. This includes perimeter security of the cloud itself.

Cloud User Responsibilities

a. Cloud users are primarily responsible for properly configuring their deployment of the virtual network, especially any virtual
firewalls.

For containers:
• Understand the security isolation capabilities of both the chosen container platform and underlying operating
system, then choose the appropriate configuration.
• Use physical or virtual machines to provide container isolation and group containers of the same security
contexts on the same physical and/or virtual hosts.
• Ensure that only approved, known, and secure container images or code can be deployed.
• Appropriately secure the container orchestration/management and scheduler software stack(s).
• Implement appropriate role-based access controls and strong authentication for all container and repository
management.

Containers perform isolation only at the application layer. This is unlike a virtual machine, which can offer isolation for all layers.
Repositories require appropriate controls to be put in place to restrict unauthorized access to the code and configuration files
held within; a minimal allowlist sketch follows.
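A minimal allowlist sketch for the "only approved, known images" control above; the registry paths and digests are illustrative placeholders.

# Pin container images to content-addressed digests on an allowlist;
# tags are mutable, so deployment checks should compare digests.
APPROVED_DIGESTS = {
    "registry.example.com/web": "sha256:1f2a...",     # placeholder digest
    "registry.example.com/worker": "sha256:9bc4...",  # placeholder digest
}

def is_deploy_allowed(image: str, digest: str) -> bool:
    return APPROVED_DIGESTS.get(image) == digest

print(is_deploy_allowed("registry.example.com/web", "sha256:1f2a..."))  # True
print(is_deploy_allowed("registry.example.com/web", "sha256:dead..."))  # False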

_______________________________________________________________________________________________________

Chapter 9
Incident Response

An incident is defined as “an unplanned interruption to an IT service, or a reduction in the quality of service.” Failure of a
configuration item that has not yet affected service is also an incident.



Incident response life cycle as per NIST
a. Preparation
b. Detection and Analysis
c. Containment, Eradication and Recovery
d. Post-mortem

Cloud Jump Kit : These are the tools needed to investigate in a remote location. For example, do you have tools to collect logs and
metadata from the cloud platform? Do you have the ability to interpret the information?

For PaaS and serverless application architectures, you will likely need to add custom application-level logging.

Be aware that there are potential challenges when the information that is provided by a CSP faces chain of custody questions.
There are no reliable precedents established at this point.

Always factor in what the CSP can provide and whether it meets chain of custody requirements.

There is a greater need to automate many of the forensic/investigation processes in cloud environments, because of their
dynamic and higher-velocity nature. For example, evidence could be lost due to a normal auto-scaling activity or if an
administrator decides to terminate a virtual machine involved in an investigation. Some examples of tasks you can automate
include:
• Snapshotting the storage of the virtual machine.
• Capturing any metadata at the time of alert, so that the analysis can happen based on what the infrastructure looked like
at that time.
• If your provider supports it, "pausing" the virtual machine, which will save the volatile memory state.
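A hedged boto3 sketch of automating the first two tasks above (storage snapshot plus metadata capture at time of alert); a memory-preserving "pause" depends on provider support and is omitted here.

import boto3

ec2 = boto3.client("ec2")

def capture_evidence(instance_id: str, case_id: str):
    # Record the instance's metadata as it looked at the time of alert.
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    instance = desc["Reservations"][0]["Instances"][0]
    snapshots = []
    # Snapshot every attached EBS volume before auto-scaling can reap it.
    for mapping in instance.get("BlockDeviceMappings", []):
        vol = mapping["Ebs"]["VolumeId"]
        snap = ec2.create_snapshot(
            VolumeId=vol,
            Description=f"IR case {case_id}: {instance_id}",
        )
        snapshots.append(snap["SnapshotId"])
    return {"metadata": instance, "snapshots": snapshots}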

You can't contain an attack if the attacker is still in the management plane. Attacks on cloud assets, such as virtual machines, may
sometimes reveal management plane credentials that are then used to bridge into a wider, more serious attack.

Your first action when responding to an incident is to ensure that the attacker is no longer in the cloud management plane.
Complete visibility requires accessing the management plane with the master (root) credentials. (Remember that this account is
locked away and only to be used in an emergency. This is the time to use it!) Using the master account may unmask activities
that are hidden from view when you’re using a limited-privilege administrator account.

_______________________________________________________________________________________________________

Chapter 10
Application Security

Threat Modeling Backgrounder


The goal of threat modeling as part of application design is to identify any potential threats to an application that may be
successfully exploited by an attacker to compromise the application. This should be done during the design phase, before a
single line of code is written.

Elasticity : Elasticity enables greater use of immutable infrastructure. When using elastic tools like auto -scale groups, each
production system is launched dynamically based on a baseline image, and may be automatically deprovisioned without human
interaction.

Due to the range of frameworks and differences in terminology, the Cloud Security Alliance breaks these into larger
"meta-phases" to help describe the relatively standard set of activities seen across the frameworks.

•Secure Design and Development: From training and developing organization-wide standards to actually writing and testing
code.
•Secure Deployment: The security and testing activities when moving code from an isolated development environment into
production.
•Secure Operations: Securing and maintaining production applications, including external defenses such as Web
Application Firewalls (WAF) and ongoing vulnerability assessments.

In the cloud there are large changes in visibility and control: when running in IaaS it might just be a lack of network logs, but as
you move into PaaS it may mean a loss of server and service logs. It will all vary based on the provider and technology.

The serverless platforms may run on the provider's network with communications to the consumer's components through API or
HTTPS traffic. This removes direct network attack paths, even if an attacker compromises a server or container. The attacker is
limited to attempting API calls or HTTPS traffic and can't port scan, identify other servers, or use other common techniques.

Always remember that change management isn’t just about application changes. Any infrastructure and cloud management
plane changes should also be approved and tracked.

For now, here are two major concepts that serverless computing can deliver to increase the security of our cloud environments:

• Software-defined security: This concept involves automating security operations and could include automating cloud
incident response, changes to entitlements (permissions), and the remediation of unapproved infrastructure changes.

• Event-driven security: This puts the concept of software-defined security into action. You can have a system monitoring
for changes that will call a script to perform an automatic response in the event of a change being discovered. For example,
if a security group is changed, a serverless script can be kicked off to undo the change (see the sketch below). This interaction
is usually performed through some form of notification messaging. Security can define the events to monitor and use
event-driven capabilities to trigger automated notification and response.
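A minimal event-driven sketch, assuming an AWS Lambda function with boto3. A real CloudTrail/EventBridge payload is more deeply nested; the event fields and SNS topic ARN here are hypothetical placeholders:

```python
# Sketch: undo an unapproved security group change, then notify.
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

def handler(event, context):
    group_id = event["groupId"]           # assumed pre-parsed fields
    added_rules = event["ipPermissions"]  # the ingress rules just added

    # Revert the change that bypassed change management.
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=added_rules,
    )
    # Automated notification back to the security team.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:security-alerts",
        Message=f"Reverted unapproved rule change on {group_id}",
    )
```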

EXAM TIP If you’re asked about the difference between software-defined security and event-driven security, remember
that software-defined security is a concept, whereas event-driven security puts that concept into action.

There is additional overhead from a security perspective with microservices, however, because communications between the
various functions and components need to be tightly secured. This includes securing any service discovery, scheduling, and
routing services.

_______________________________________________________________________________________________________

Chapter 11
Data Security Controls

Data security controls tend to fall into three buckets. We cover all of these in this section:
•Controlling what data goes into the cloud (and where).
•Protecting and managing the data in the cloud. The key controls and processes are:
    • Access controls
    • Encryption
    • Architecture
    • Monitoring/alerting (of usage, configuration, lifecycle state, etc.)
    • Additional controls, including those related to the specific product/service/platform of your cloud provider, data loss
      prevention, and enterprise rights management.
•Enforcing information lifecycle management security:
    • Managing data location/residency.
    • Ensuring compliance, including audit artifacts (logs, configurations).
    • Backups and business continuity.

Cloud Data Storage Types

Object Storage: similar to a file system. "Objects" are typically files, which are then stored using a cloud-platform-specific
mechanism. Most access is through APIs, not standard file-sharing protocols, although cloud providers may also
offer front-end interfaces to support those protocols.


Volume Storage: This is essentially a virtual hard drive for instances/virtual machines.

Database: cloud providers support a variety of different kinds of databases.

Most cloud platforms also use redundant, durable storage mechanisms that often utilize data dispersion (sometimes also known
as data fragmentation or bit splitting). This process takes chunks of data, breaks them up, and then stores multiple copies on
different physical storage to provide high durability. Data stored in this way is thus physically dispersed. A single file, for example,
would not be located on a single hard drive.

Ensure that you are protecting your data as it moves to the cloud. This necessitates understanding your provider's data migration
mechanisms, as leveraging provider mechanisms is often more secure and cost-effective than "manual" data transfer methods
such as Secure File Transfer Protocol (SFTP). For example, sending data to a provider's object storage over an API is likely much
more reliable and secure than setting up your own SFTP server on a virtual machine in the same provider.

Access controls should be implemented with a minimum of three layers:


•Management plane: These are your controls for managing the access of users that directly access the cloud platform's
management plane. For example, logging in to the web console of an IaaS service will allow that user to access data in
object storage. Fortunately, most cloud platforms and providers start with default deny access control policies.

•Public and internal sharing controls: If data is shared externally to the public or partners that don't have direct access to
the cloud platform, there will be a second layer of controls for this access.

•Application level controls: As you build your own applications on the cloud platform you will design and implement your
own controls to manage access.

Frequently (ideally, continuously) validate that your controls meet your requirements, paying particular attention to any public
shares. Consider setting up alerts for all new public shares or for changes in permissions that allow public access.

Encryption and tokenization are two separate technologies.

Encryption protects data by applying a mathematical algorithm that "scrambles" the data, which then can only be recovered by
running it through an unscrambling (decryption) process with a corresponding key. The result is a blob of ciphertext.

Tokenization, on the other hand, takes the data and replaces it with a random value. It then stores the original and the
randomized version in a secure database for later recovery.
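
The difference is easy to see in a toy sketch. Here the token "vault" is an in-memory dict standing in for the secure database a real tokenization system would use:

```python
# Encryption vs. tokenization, side by side (illustrative only).
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

# Encryption: recoverable only by decrypting with the key.
engine = Fernet(Fernet.generate_key())
ciphertext = engine.encrypt(b"4111 1111 1111 1111")
assert engine.decrypt(ciphertext) == b"4111 1111 1111 1111"

# Tokenization: replace the value with a random surrogate and store
# the mapping for later recovery.
vault = {}  # stand-in for a secured database

def tokenize(value: str) -> str:
    token = "".join(secrets.choice("0123456789") for _ in range(16))
    vault[token] = value  # original lives only in the secure store
    return token

token = tokenize("4111 1111 1111 1111")
original = vault[token]  # detokenization requires vault access
```

Note that the token keeps a card-number-like shape, which is why tokenization suits format-sensitive systems.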

"Tokenization is often used when the format of the data is important (e.g. replacing credit card numbers in an existing syste m
that requires the same format text string). Format Preserving Encryption encrypts data with a key but also keeps the same
structural format as tokenization, but it may not be as cryptographically secure due to the compromises.

There are three components of an encryption system: data, the encryption engine, and key management. The data is, of course,
the information that you’re encrypting. The engine is what performs the mathematical process of encryption. Finally, the key
manager handles the keys for the encryption. The overall design of the system focuses on where to put each of these components.

When designing an encryption system, you should start with a threat model. For example, do you trust a cloud provider to
manage your keys? How could the keys be exposed? Where should you locate the encryption engine to manage the threats you
are concerned with?

There are four potential options for handling key management:


•HSM/appliance: Use a traditional hardware security module (HSM) or appliance-based key manager, which will typically
need to be on-premises, and deliver the keys to the cloud over a dedicated connection.
•Virtual appliance/software: Deploy a virtual appliance or software-based key manager in the cloud.
•Cloud provider service: This is a key management service offered by the cloud provider. Before selecting this option, make
sure you understand the security model and SLAs, and whether your keys could be exposed.
•Hybrid: You can also use a combination, such as using an HSM as the root of trust for keys but then delivering application-
specific keys to a virtual appliance that's located in the cloud and only manages keys for its particular context.
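
As an illustration of combining these options, the envelope-encryption sketch below assumes AWS KMS via boto3 as the key manager (the key alias is a hypothetical placeholder). The encryption engine runs locally; only the wrapped data key is stored alongside the data:

```python
# Sketch: provider key service as root of trust, local encryption engine.
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# Ask the key manager for a data key: Plaintext is used locally,
# CiphertextBlob (the wrapped key) is stored next to the data.
data_key = kms.generate_data_key(KeyId="alias/app-data-key",
                                 KeySpec="AES_256")
engine = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
blob = engine.encrypt(b"sensitive record")

# Later: unwrap the stored key with the key manager, then decrypt.
plain = kms.decrypt(CiphertextBlob=data_key["CiphertextBlob"])["Plaintext"]
record = Fernet(base64.urlsafe_b64encode(plain)).decrypt(blob)
```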


IaaS Encryption
Volume Storage Encryption
a. Instance-managed encryption
b. Externally managed encryption
Object and File Storage
a. Client-side encryption [encrypt the data using an encryption engine embedded in the application or client; see the sketch after this list]
b. Server-side encryption
c. Proxy encryption
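
A minimal client-side encryption sketch for object storage, assuming boto3 and the `cryptography` package (bucket and object names are hypothetical). The provider only ever receives ciphertext:

```python
# Sketch: encrypt in the client before the object reaches the provider.
import boto3
from cryptography.fernet import Fernet

engine = Fernet(Fernet.generate_key())  # the key never leaves the client
boto3.client("s3").put_object(
    Bucket="example-bucket",
    Key="records/customer.bin",
    Body=engine.encrypt(b"plaintext record"),
)
```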

PaaS Encryption
Application-layer encryption: data is encrypted in the PaaS application
Database encryption
Other

SaaS Encryption
Provider-managed encryption
Proxy encryption

_______________________________________________________________________________________________________

Chapter 12
Identity and Access Management

Gartner Defines IAM as "the security discipline that enables the right individuals to access the right resources at the right times
for the right reasons."

Authoritative source: the root source of an identity, such as the directory server that manages employee identities.

Identity provider: the source of the identity in federation. The identity provider isn't always the authoritative source, but it can
sometimes rely on the authoritative source, especially if it is a broker for the process.

Relying party: the system that relies on an identity assertion from an identity provider.

IAM standards
1) SAML [Security Assertion Markup Language] 2.0 is an OASIS standard for federated identity.
Security Assertion Markup Language Backgrounder SAML is the open standard protocol to use when you want to
enable users to access cloud services using a web browser. Currently at version 2.0, SAML does both authorization
and authentication. SAML has built-in security, but this doesn’t mean it’s a failsafe protocol (again, nothing is failsafe
when it comes to security). There are three components involved in SAML: The identity provider (your organization),
aka IdP; the service provider (cloud provider), aka SP; and the principal (usually a human). SAML can be used in either
an SP-initiated or an IdP-initiated scenario.

SAML is primarily the standard that is used for users accessing a cloud provider with a web browser. So how do
systems and mobile apps perform federation and SSO? That’s usually where OAuth and OpenID come in.

a. It uses XML to make assertions between an identity provider and a relying party.
b. Assertions can contain authentication statements, attribute statements, and authorization decision statements.

2) OAuth: an IETF standard for authorization.

a. Designed to work over HTTP and currently on version 2.0.
b. Version 2.0 is not compatible with 1.0.

3) OpenID: a standard for federated authentication that is widely supported by web services.
OpenID Connect builds authentication on top of the authorization capability of OAuth.
The latest version of the OpenID standard (version 3) is OpenID Connect (OIDC).


a. Based on HTTP with URLs used to identify the identity provider and user identity.

b. the problem that OIDC solves is that it “lets app and site developers authenticate users without taking on the
responsibility of storing and managing passwords in the face of an Internet that is well-populated with people
trying to compromise your users’ accounts for their own gain.”

c. OIDC is an authentication layer built on top of OAuth 2.0. This enables OIDC to address authentication and
authorization requirements. OIDC is restricted to HTTP and uses JSON as a data format (OpenID 2.0 used XML).
The addition of OIDC’s authentication layer on top of OAuth’s authorization capabilities allows for verification
of the identity of an end user based on the authentication performed by an authorization server.
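
As a sketch of what the relying party does with an OIDC ID token (a signed JWT), assuming the PyJWT library; the issuer URL, client ID, and signing key are hypothetical:

```python
# Sketch: relying-party validation of an OIDC ID token.
import jwt  # pip install pyjwt

def verify_id_token(id_token: str, signing_key, client_id: str) -> dict:
    claims = jwt.decode(
        id_token,
        key=signing_key,
        algorithms=["RS256"],
        audience=client_id,                # token must be for this app
        issuer="https://idp.example.com",  # and from the expected IdP
    )
    return claims  # e.g., "sub" identifies the authenticated end user
```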

4) eXtensible Access Control Markup Language (XACML)

a. A standard for defining attribute-based access controls/authorizations.
b. A policy language for defining access controls at a policy decision point.
c. It can be used with both SAML and OAuth, since it solves a different part of the problem.

5) System for Cross-domain Identity Management (SCIM)

a. A standard for exchanging identity information between domains.
b. It can be used for provisioning and deprovisioning accounts in external systems and for exchanging attribute
information.
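
A hedged provisioning sketch over the SCIM 2.0 protocol (RFC 7644). The endpoint URL and bearer token are hypothetical placeholders; the schema URN and the POST /Users pattern come from the standard:

```python
# Sketch: provision a user account in an external system via SCIM 2.0.
import requests

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@example.com",
    "active": True,
    "emails": [{"value": "alice@example.com", "primary": True}],
}
resp = requests.post(
    "https://cloudservice.example.com/scim/v2/Users",
    json=user,
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()  # expect 201 Created per the SCIM spec
```

Deprovisioning is the mirror image: a DELETE (or an update setting "active" to false) against the same resource.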

There is no “one-size-fits-all” standard when it comes to federation protocols. You have to consider the use case you’re trying to
solve. Are you looking at users logging in via a web browser? You might want to look at SAML. Are you looking at delegated
authorization? Then you might want to consider OAuth instead.

The main reason why you would perform federated identity with a cloud service provider is to perform SSO.

As a consumer, the goal of federation is simply to authenticate your users locally and authorize them remotely.

Identity providers don't need to be located only on-premises: many cloud providers now support cloud-based directory servers
that support federation internally and with other cloud services.

Multifactor authentication (MFA) offers one of the strongest options for reducing account takeovers. It isn’t a panacea, but
relying on a single factor for cloud services is very high risk. When using MFA with federation, the identity provider can and
should pass the MFA status as an attribute to the relying party.

Hard tokens are physical devices that generate one-time passwords for human entry or need to be plugged into a reader.
These are the best option when the highest level of security is required.

Soft tokens: applications that run on a phone or computer.

Out-of-band passwords: SMS [message interception threats should be considered]

Biometrics

For customers, FIDO is one standard that may streamline stronger authentication for consumers while minimizing friction.

Fast Identity Online (FIDO) standards are authentication protocols where security and user experience meet.
FIDO standards, developed by an open industry association (the FIDO Alliance), offer more security than passwords alone or one-
time passcodes and allow for fast, secure, and stronger authentication.

FIDO encompasses several authentication techniques, such as biometric scans, iris scans, voice recognition, and facial recognition.
It also facilitates existing authentication solutions such as security tokens, smart card authentication, near-field communication
(NFC), and more.


An entitlement maps identities to authorizations and any required attributes (e.g., user x is allowed access to resource y when z
attributes have designated values). We commonly refer to a map of these entitlements as an entitlement matrix.

Entitlements are often encoded as technical policies for distribution and enforcement.

Cloud platforms tend to have greater support for the attribute-based access control (ABAC) model for IAM, which offers greater
flexibility and security than the role-based access control (RBAC) model. RBAC is the traditional model for enforcing authorizations
and relies on what is often a single attribute (a defined role).

ABAC allows more granular and context-aware decisions by incorporating multiple attributes, such as role, location,
authentication method, and more.

ABAC is the preferred model for cloud-based access management.
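
A toy sketch of the difference: an ABAC decision combines several attributes where RBAC would check only the role. All attribute names here are illustrative:

```python
# Sketch: an ABAC-style decision incorporating role, MFA status,
# location, and the resource's classification.
def is_access_allowed(subject: dict, resource: dict) -> bool:
    return (
        subject["role"] == "engineer"                   # RBAC stops here
        and subject["mfa_verified"]                     # auth method
        and subject["location"] in {"office", "vpn"}    # context
        and resource["classification"] != "restricted"  # object attribute
    )

print(is_access_allowed(
    {"role": "engineer", "mfa_verified": True, "location": "vpn"},
    {"classification": "internal"},
))  # True
```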

XACML uses the concepts of policy decision and policy enforcement points. XACML is used for more fine -grained access control
decisions and can work with SAML or OAuth. XACML implementations are rare.

SAML is XML-based and handles both authentication and authorization. OAuth only deals with “AuthOrization” (memory trick),
and OpenID only deals with authentication.

Remember that a major benefit of SecaaS is the ability to enforce your policy using someone else’s infrastructure

Remember that encryption breaks SaaS: if the customer encrypts data before it reaches the provider, many SaaS features that
need to process that data will stop working.

Hadoop is fairly synonymous with big data. In fact, it is estimated that roughly half of Fortune 500 companies use Hadoop for big
data, so it merits its own backgrounder. Believe it or not, what we now know as big data started off with Google trying to create a
system they could use to index the Internet (called Google File System). They released the inner workings of their invention to the
world as a whitepaper in 2003. In 2005, Doug Cutting and Mike Cafarella leveraged this knowledge to create the open source big
data framework called Hadoop. Hadoop is now maintained by the Apache Software Foundation.

MapReduce: This is the processing part of Hadoop. MapReduce is a distributed computation algorithm; its name is actually a
combination of mapping and reducing. The map part filters and sorts data, while the reduce part performs summary operations.
Consider, for example, trying to determine the number of pennies in a jar. You could either count these out by yourself, or you
could work with a team of four by dividing up the jar into four sets (map function) and having each person count their own and
write down their findings (reduce).
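
The penny-jar analogy as a minimal map/reduce sketch in Python (purely illustrative; real MapReduce distributes these steps across cluster nodes):

```python
# Sketch: "map" divides the work, "reduce" summarizes the results.
from functools import reduce

jar = [1] * 1037                            # 1,037 pennies
piles = [jar[i::4] for i in range(4)]       # divide four ways (map)
counts = [len(pile) for pile in piles]      # each worker counts a pile
total = reduce(lambda a, b: a + b, counts)  # summary operation (reduce)
print(total)  # 1037
```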

Serverless computing can be considered an environment in which the customer is not responsible for managing the server. In this
model, the provider takes care of the servers upon which customers run workloads. The CSA defines serverless computing as “the
extensive use of certain PaaS capabilities to such a degree that all or some of an application stack runs in a cloud provider’s
environment without any customer-managed operating systems, or even containers.”

Serverless computing is a bit of a misnomer, since there is always a server running the workload someplace, but those servers and
their configuration and security are completely hidden from the cloud user. The consumer only manages settings for the service,
and not any of the underlying hardware and software stacks.

• Serverless places a much higher security burden on the cloud provider. Choosing your provider and understanding security
SLAs and capabilities is absolutely critical.
• Using serverless, the cloud user will not have access to commonly used monitoring and logging levels, such as server or
network logs. Applications will need to integrate more logging, and cloud providers should provide the necessary logging to
meet core security and compliance requirements.

_______________________________________________________________________________________________________

Chapter 13
Security as a Service (SecaaS)


Potential benefits of SecaaS:

a. Cloud computing benefits
b. Staffing and expertise
c. Intelligence-sharing
d. Deployment flexibility
e. Insulation of clients
f. Scaling and cost

Potential Concerns
a. Lack of Visibility
b. Regulation differences
c. Handling of regulated data
d. Data leakage
e. Changing providers
f. Migration to SecaaS

Quiz note: the correct answer is True. Due to compliance with recent legislative and administrative requirements around the world,
tighter integration between legal and IT departments is required when adopting cloud services. Although Domain 3 addresses this
requirement in multiple sections, page 36 (Introduction) clearly states the following: "To address your specific issues, you should
consult with legal counsel in the jurisdiction(s) in which you intend to operate and/or in which your customers reside."

From <https://intrinsecsecurity.com/quiz/ccsk-public-quiz/>

Both the CCM and GDPR are listed on page 35 of the guidance as standards that can be leveraged to create a cloud governance
framework.

From <https://intrinsecsecurity.com/quiz/ccsk-public-quiz/>

Third-party attestation: legal statements used to communicate the results of an assessment or audit, as referenced on page 29 of
the guidance.

Attestations and certifications are valid for what duration?

Ans: No validity. Certifications are point-in-time activities; there is no assurance these will be valid at any future point. Page 58
includes the following statement: "It's important to remember that attestations and certifications are point-in-time activities. An
attestation is a statement of an 'over a period of time' assessment and may not be valid at any future point."

From <https://intrinsecsecurity.com/quiz/ccsk-public-quiz/>

Privacy laws either cover everyone in a country/region, such as the GDPR (omnibus laws), or address particular
industries/occupations, such as HIPAA.

SaaS requires well-negotiated contracts.

" Reciprocity" suggests both countries honor each other laws.

International law creates a legal obligation, not a contractual obligation.

"seizure" us typically only performed by law enforcement entities, government bodies or court, not by opposing counsel during
litigation.

CSA recommends information from the Sedona Conference for those interested in evidentiary matters related to electronic data.

The virtualized and distributed aspects of cloud computing make traditional audit approaches difficult.


Logs are considered compliance artifacts.

SaaS cloud customers typically access the management plane through an admin tab on the user panel in the SaaS model.

Web consoles for accessing the cloud management plane are managed by the cloud provider.

Subpoena is defined as the process by which an opposing party may obtain private documents for use in litigation.

Storage as a Service is considered a sub-offering of IaaS.

VM hopping is an isolation failure in which an attacker moves from one virtual machine (VM) to another that they are not
intended to access. This would likely be related to a failure or compromise of the underlying hypervisor that should be providing
VM isolation.

If you are presented with any questions on OVF on the CCSK exam, remember that portability is the most important element of
OVF.

Controls: essentially, a control is what we use to restrict a list of possible actions down to allowed actions. For example,
encryption can be used to restrict access to data, application controls to restrict processing via authorization, and
DRM storage to prevent unauthorized copies/accesses.

In the management infrastructure, the primary responsibilities are:

a. Cloud provider: The cloud provider is primarily responsible for building a secure network infrastructure and configuring it
properly. The absolute top security priority is segregation and isolation of network traffic to prevent tenants from viewing one
another's traffic.

b. Cloud user: Cloud users are primarily responsible for properly configuring their deployment of the virtual network, especially
any virtual firewalls.
