CCSP Ebook
CCSP® Maintenance and Endorsement
Candidates' Exam Profile Information
Certifying Body
(ISC)²: Founded in 1989, a nonprofit organization that issues certifications.
(ISC)² Responsibilities: Certifications, Common Body of Knowledge (CBK), Exams, and Accreditation.
CCSP® Certification: Offered by (ISC)² in partnership with the Cloud Security Alliance (CSA).
Benefits of CCSP® Certificate
• Adheres to the Common Body of Knowledge
• Is accredited by ANSI
Registration
Examination Information
Language: English
Number of questions: 125
Examination Weights
Domains Weight
Cloud Concepts, Architecture, and Design 17%
Cloud Data Security 19%
Cloud Platform and Infrastructure Security 17%
Cloud Application Security 17%
Cloud Security Operations 17%
Legal, Risk, and Compliance 13%
Total 100%
NOTE:
CCSP Maintenance:
AMF rate for CCSP: US$125
The revised AMF rate is effective from July 1, 2019.
Endorsement Process
01 Recertification (3 years)
02 Fee ($125)
03 Audit Notice*
*Candidates who pass are randomly selected and audited by (ISC)² Member Services prior to the issuance of any certificate. Multiple certifications may result in a candidate being audited more than once.
Course Objectives
Domain 1: Cloud Concepts, Architecture, and Design
Domain 2: Cloud Data Security
Domain 3: Cloud Platform and Infrastructure Security
Domain 4: Cloud Application Security
Domain 5: Cloud Security Operations
Domain 6: Legal, Risk, and Compliance
Course Highlights
6 Domains
Revised curriculum
Real-world scenarios
7 Case studies
Knowledge check
Course-end assessment
Certified Cloud Security Professional (CCSP®)
Security Concepts
Confidentiality: Information is accessible only to authorized people, processes, and systems.
Integrity: The Integrity principle asserts that information and functions can be added, altered, or removed only by authorized people and means.
Availability: Information and systems are accessible and usable when authorized users require them.
Security Concepts
▪ Job rotation
▪ Mandatory vacations
▪ Dual control
▪ Split knowledge
▪ Need-to-know principle
Security Concepts
Defense in Depth
▪ Administrative controls
▪ Logical/Technical controls
▪ Physical controls
These controls are layered across the environment, from the outermost layer to the innermost: Physical, Perimeter, Internal Networks, Host, Application, and Data.
Security controls are the countermeasures taken to safeguard an information system from attacks against confidentiality, integrity, and availability.
▪ Technical/Logical Controls: Implemented by using software, hardware, or firmware that restricts access to information systems.
Examples: Firewalls, routers, and encryption.
▪ Physical Controls: Implemented by installing fences and locks and by hiring security personnel.
Security Control Functionalities
Types of security control:
NIST SP 800-145
Business Drivers for Cloud Computing
• Mobility
Scalability is based on need and investment: the customer can dynamically increase or decrease computing resources such as storage, computing power, and network bandwidth. Scaling can be either vertical (scaling up) or horizontal (scaling out).
Vertical Scaling: Adding resources, such as CPU, RAM, or storage, to an existing machine (scaling up).
Horizontal Scaling: Adding more machines or instances to share the workload (scaling out).
Cloud Computing Concepts
Elasticity refers to the ability of a service to scale in and out depending on demand.
For example, a website might be hosted on a single virtual machine, and as more users connect
to the website, one or more virtual machines can be automatically brought online to handle the
load.
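The elasticity decision itself can be sketched as simple threshold logic. This is a toy sketch; the thresholds, VM pool representation, and instance limits are hypothetical stand-ins for a real provider's auto-scaling service:

```python
# Toy autoscaler: scale out when average CPU is high, scale in when low.
# Thresholds and limits are illustrative, not provider defaults.
def autoscale(vm_pool: list, avg_cpu_percent: float) -> None:
    if avg_cpu_percent > 75 and len(vm_pool) < 10:
        vm_pool.append(f"vm-{len(vm_pool) + 1}")   # scale out: bring a VM online
    elif avg_cpu_percent < 25 and len(vm_pool) > 1:
        vm_pool.pop()                              # scale in: release a VM

pool = ["vm-1"]
for load in [80, 85, 30, 10]:                      # simulated load samples
    autoscale(pool, load)
    print(f"load={load}% pool={pool}")
```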
Cloud Computing Concepts
Vendor lock-in occurs when a customer is unable to leave, migrate, or transfer to an alternate
provider due to technical or non-technical constraints.
Vendor lock-out occurs when a customer is unable to recover or access their own data due to
the cloud provider going into bankruptcy or otherwise leaving the market.
Cloud Computing Concepts
• Cost-effective
• Easy to utilize
• Reliable
• Easy to outsource
(Diagram: NIST cloud computing reference architecture. The cloud provider encompasses service orchestration, consisting of the resource abstraction and control layer and the physical resource layer (hardware and facility); cloud service management, including provisioning/configuration and portability/interoperability; and cross-cutting security and privacy concerns. The cloud auditor performs security audits, privacy impact audits, and performance audits. The cloud broker offers services such as service arbitrage. The cloud carrier provides connectivity.)
Cloud Computing Roles
Cloud Consumer A person or organization that uses service from Cloud Providers.
Cloud Provider A person, organization, or entity responsible for making a service available
to interested parties.
Cloud Auditor A party that conducts independent assessment of cloud services,
information system operations, and performance and security of the cloud
implementation.
Cloud Broker An entity that manages the use, performance and delivery of cloud
services, and negotiates relationships between Cloud Providers and Cloud
Consumers.
Cloud Carrier An intermediary that provides connectivity and transport of cloud services
from Cloud Providers to Cloud Consumers.
Regulators The entities that ensure organizations are in compliance with the
regulatory framework. These can be government agencies, certification
bodies, or parties to a contract.
Infrastructure as a Service (IaaS)
“The capability provided to the consumer is to provision processing, storage, networks, and
other fundamental computing resources where the consumer is able to deploy and run arbitrary
software, which can include OSs and applications. The consumer does not manage or control
the underlying cloud infrastructure but has control over OSs, storage, and deployed
applications; and possibly limited control of select networking components (e.g., host firewalls).”
- NIST SP 800-145
Infrastructure as a Service
• Management plane: Provides GUI and API access to infrastructure configuration and reporting; implemented as a stand-alone application that integrates with underlying cloud components; must robustly control access through strong authentication and authorization.
• Compute: Encapsulates CPU processing time and RAM working space; implemented by hypervisors, containers, and bare metal; must isolate different users' workloads.
• Networking: Provides intra-, inter-, and extra-cloud communications; may be virtualized within a hypervisor or carefully configured as with bare metal; must isolate different workloads' communications.
Storage Database
• High availability
• Metered usage
• Multitenancy
• Co-location
• Hypervisor security and attacks
• Network security
• Virtual machine attacks
• Virtual switch attacks
• Denial-of-Service attacks (DoS)
Platform as a Service (PaaS)
“The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or
acquired applications created using programming languages, libraries, services, and tools supported by
the provider. The consumer does not manage or control the underlying cloud infrastructure, including
network, servers, OSs, or storage, but has control over the deployed applications and possibly
configuration settings for the application-hosting environment.”
- NIST SP 800-145
Platform as a Service (PaaS)
• Performs auto-scaling
• Cost-effective
• Easy to access
PaaS security concerns: system isolation, user permissions, user access, and malware, Trojans, and backdoors.
Software as a Service (SaaS)
“The capability provided to the consumer is to use the provider’s applications running on a cloud
infrastructure. The applications are accessible from various client devices through either a thin client
interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not
manage or control the underlying cloud infrastructure including networks, servers, operating systems,
storage, or even individual application capabilities, with the possible exception of limited user-specific
application configuration settings.”
- NIST SP 800-145
Software as a Service (SaaS)
• Licensing
• Standardization
SaaS security concerns: web application security, user permissions, user access, and malware, Trojans, and backdoors.
Cloud Service Categories
(Diagram: the service stack, consisting of Applications, Security, Databases, Virtualization, Servers, Storage, Networking, and Data Centres, compared across deployment options; responsibility shifts from customer managed to provider managed as you move from on-premises through IaaS and PaaS to SaaS.)
Cloud Service Categories and Their Applications
(Diagram: the cloud reference stack mapped to example offerings. SaaS sits at the top. PaaS provides integration and middleware, along with database, messaging, queuing, and IAM/auth services; examples include Google AppEngine and Force.com. IaaS provides APIs (e.g., the Amazon API and the GoGrid CloudCentre API), core connectivity and delivery (IPAM/DNS, load balancing, transport), abstraction (VMM, grid/cluster/utility, images), and the underlying hardware, compute, network, storage, and facilities; examples include Amazon EC2, GoGrid, and FlexiScale.)
Public Cloud
“The cloud infrastructure is provisioned for open use by the general public. It may be owned,
managed, and operated by a business, academic, or government organization, or some
combination of them. It exists on the premises of the cloud provider.”
- NIST SP 800-145
Cloud Deployment Models
Private Cloud
“The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple
consumers (e.g., business units). It may be owned, managed, and operated by the organization, a
third party, or some combination of them, and it may exist on or off premises.”
- NIST SP 800-145
Cloud Deployment Models
Hybrid Cloud
“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private,
community, or public) that remain unique entities, but are bound together by standardized or proprietary
technology that enables data and application portability (e.g., cloud bursting for load balancing between
clouds).”
- NIST SP 800-145
Cloud Deployment Models
Community Cloud
“The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from
organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance
considerations). It may be owned, managed, and operated by one or more of the organizations in the
community, a third party, or some combination of them, and it may exist on or off premises.”
- NIST SP 800-145
Cloud Deployment Models
Service Models: IaaS, PaaS, and SaaS
Deployment Models: Public, Private, Hybrid, and Community
Comparison of Cloud Deployment Models
Security: The private cloud is typically the most secure option; community and hybrid clouds are very secure; the public cloud is moderately secure.
Business problem: A few weeks later, employees still face latency and downtime issues.
Business problem: Offloading HR workloads requires a new connection to the company's private data center. The new connection requires an increase in bandwidth from the WAN provider, which charges more for it.
Solution: The IT department decides to move its assets to an AWS facility. This allows a direct connection to AWS rather than going through a network loop.
Outcome: The enterprise saves 40% on bandwidth costs for EC2 and finds a network provider within the same facility, meeting its requirements at low cost for the overall corporate WAN.
Cloud Security Vulnerabilities
Vulnerabilities exist at every layer of the stack:
• Applications: TOCTTOU (time-of-check to time-of-use) race conditions
• OS: rootkits and Trojans
• Hardware: hypervisor vulnerabilities
Cloud Technology Roadmap
• Interoperability
• Portability
• Security
• Privacy
• Resiliency
• Performance
• Governance
• SLAs
• Auditability
Impact of Related Technologies
Artificial Intelligence
Artificial intelligence (AI) is a broad concept that addresses the use of computers to mimic the
cognitive functions of humans.
Artificial intelligence is steadily making its way into enterprise applications in areas such as
customer support, fraud detection, and business intelligence.
Impact of Related Technologies
Machine Learning
Machine Learning is a subset of AI and focuses on the ability of machines to automatically learn and
improve from experience.
Cloud computing provides two basic prerequisites for running an AI system efficiently: it is economically scalable, and it offers low-cost resources as well as the processing power to crunch huge amounts of data.
Amazon Web Services, for example, supports machine learning using AWS algorithms to read native AWS data (such as RDS, Redshift, and S3). Google supports predictive analytics with its Google Prediction API, and Microsoft provides an Azure machine-learning service.
Impact of Related Technologies
Blockchain
Blockchain can be defined as a public ledger network for secure online transactions with virtual currencies. Transaction records are encrypted by using cryptographic methods and executed in a distributed computer network as blockchain software.
Each block in the blockchain is cryptographically linked to the previous block after validation and undergoing a consensus decision. Blockchain networks typically generate an enormous number of transactions.
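The cryptographic linking described above can be illustrated with a minimal hash chain. This is a toy sketch only; validation, consensus, and networking are omitted:

```python
import hashlib
import json

# Each block stores the hash of the previous block, so altering any earlier
# block invalidates every later hash in the chain.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i, tx in enumerate(["tx-a", "tx-b"], start=1):
    chain.append({"index": i, "data": tx, "prev_hash": block_hash(chain[-1])})

# Verify the chain: each block must reference its predecessor's hash
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev_hash"] == block_hash(prev)
print("chain valid,", len(chain), "blocks")
```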
Containers
Containers provide a standard way to package an application's code, configurations, and
dependencies into a single object. This creates an isolation boundary at the application level rather
than at the server level.
Isolation allows container-based applications to be deployed easily and consistently, regardless of
whether the target environment is a private data center, the public cloud, or even a developer’s
personal laptop.
Impact of Related Technologies
Containers
If anything goes wrong in that single container (for example, security breach and excessive
consumption of resources by a process) it only affects that individual container and not the whole VM
or whole server.
Each container runs as a separate process that shares the resources of the underlying operating
system.
Impact of Related Technologies
Containers benefits:
Run anywhere: Containers package the code with the configuration files and dependencies it requires to run consistently in any environment. This also makes it easier for developers to test software across multiple environments.
Improve resource utilization: Containers are able to operate with the minimum amount of resources to
perform the task they were designed for; this can mean just a few pieces of software, libraries, and the
basics of an OS. This results in two or three times as many containers being able to be deployed on a server
than virtual machines.
Scale quickly: An orchestration system, such as Google Kubernetes, is capable of dynamically adjusting and adapting to changing needs when the number of containers needs to scale out. It can replicate container images automatically and can remove them from the system.
Impact of Related Technologies
Quantum Computing:
Quantum computing is the next generation of computing. Unlike traditional computers, quantum computers derive their computing power by harnessing the power of quantum physics.
Though there have been rapid strides in quantum computing, we are still quite some distance away from creating a commercial and usable quantum computer.
Impact of Related Technologies
Quantum Computing:
Given that quantum computing will have the capability to solve problems in seconds,
quantum computing poses a significant threat to the sustainability of encryption.
Encryption
Data in transit focuses on information or data while in transmission across systems and components,
and across internal and external (untrusted) networks.
When the information is traversing through trusted and untrusted networks, the opportunity for
interception, sniffing, or unauthorized access is heightened.
Cryptography
Data in transit
• Data transiting from an end user endpoint on the Internet to a web-facing service in the cloud.
• Data moving between machines within the cloud, such as between a web virtual machine (VM) and a database.
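As a minimal illustration of protecting data in transit between an endpoint and a cloud service, the sketch below opens a TLS-wrapped socket with Python's standard library; the host name is a placeholder:

```python
import socket
import ssl

# create_default_context() enables certificate and hostname verification
# by default, so a man-in-the-middle presenting a bad cert is rejected.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print(tls.version())   # negotiated protocol, e.g. "TLSv1.3"
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(1024)[:80])   # first bytes of the reply, decrypted locally
```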
Data at rest
• A secret key is also called a symmetric key, since the same key is required for encryption and decryption, or for integrity value generation and integrity verification.
• The private key is used by the owner of the key pair, is kept secret, and should be protected at
all times.
Source Authentication: Key management commands and associated data are protected from spoofing.
Integrity Protection: Key management commands and associated data are protected from undetected
and unauthorized modifications.
Confidentiality: Secret and private keys are protected from unauthorized disclosure.
Metadata Protection: All keys and metadata are protected from spoofing and unauthorized
modifications.
Encryption Key Protection: Encryption keys must be secured at the same level of control, or higher, as
they protect the data.
Approaches to Key Management
Identity and access management ensures and enables the right individuals to access the right
systems and data at the right time under the right circumstances.
Key phases:
User provisioning standardizes, streamlines, and creates an efficient account creation process while
creating a consistent, measurable, traceable, and auditable framework for providing access to end users.
Deprovisioning is the process whereby a user account is disabled when the user no longer requires access
to the cloud-based services and resources.
This is not only due to a user leaving the organization; it may also be due to a user changing roles, functions, or departments.
Centralized Directory Services
Examples:
• Lightweight Directory Access Protocol (LDAP)
• Microsoft Active Directory (AD)
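For illustration, a lookup against a centralized directory might look like the sketch below. It assumes the third-party ldap3 Python library (pip install ldap3), and the server address, bind DN, credentials, and search base are hypothetical:

```python
from ldap3 import Server, Connection, ALL

# Bind (authenticate) to the directory; placeholder values throughout.
server = Server("ldap://directory.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=admin,dc=example,dc=com",
                  password="changeme",
                  auto_bind=True)

# Look up a single user entry and read selected attributes
conn.search(search_base="dc=example,dc=com",
            search_filter="(uid=jdoe)",
            attributes=["cn", "mail", "memberOf"])
print(conn.entries)
conn.unbind()
```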
Privileged User Management
A compromised privileged account can give an attacker access to critical resources and the ability to negatively affect the organization.
To prevent a hacker from attacking or gaining access to a privileged account, segregation of duties (a risk reduction technique) becomes a necessity.
Authorization and Access Management
Data remanence is the residual representation of digital data that remains even after attempts have been
made to remove or erase the data.
Features of virtualization:
• It enables a single piece of hardware to run multiple operating system environments simultaneously, which enhances processing power utilization.
• The hypervisor program controls the execution of the various guest operating systems and provides the
abstraction level between the guest and host environments.
Virtualization
(Diagram: virtualization architecture. Guest VMs (VM1, VM2) run on a hypervisor; the hypervisor runs either directly on the hardware (Type 1) or on a host OS that runs on the hardware (Type 2).)
• Load balancing
• High availability
• Portability
• Cloning
Hypervisor Attack
Hyperjacking
Hyperjacking is the installation of a rogue hypervisor that can take complete control of a server. Examples include SubVirt, Blue Pill (a hypervisor rootkit using AMD Secure Virtual Machine), Vitriol (a hypervisor rootkit using Intel VT-x), and direct kernel structure manipulation.
VM escape
VM escape occurs when the OS of a VM breaks out of its isolation and interacts directly with the hypervisor, allowing a malicious VM to run arbitrary code on the host OS and potentially take complete control of it.
Common Threats
• Validate parameters
• Apply explicit threat detection
• Turn on SSL
• Apply rigorous authentication and authorization
• Use proven solutions
Advanced Persistent Threats
CSPs share infrastructure, platforms, and applications among tenants, and potentially with other providers; this sharing can extend to the underlying components of the infrastructure and results in shared threats and vulnerabilities.
A defense-in-depth strategy should include compute, storage, network, application, and user
security enforcement and monitoring.
Design Principles of Secure Cloud Computing
OWASP
• Injection
• Broken Authentication
• Sensitive Data Exposure
• XML External Entities (XXE)
• Broken Access Control
• Security Misconfiguration
• Cross-Site Scripting (XSS)
• Insecure Deserialization
• Using Components with Known Vulnerabilities
• Insufficient Logging and Monitoring
Payment Card Industry Data Security Standard
Visa, MasterCard, and American Express established PCI DSS as a security standard with which all organizations or merchants that accept, transmit, or store cardholder data, regardless of the size or number of transactions, must comply.
PCI DSS Requirement 10: Track and monitor all access to network resources and cardholder data.
• Resource pooling
• Shift from CapEx to OpEx
• Factors like time and efficiency
• Depreciation
• Reduction in maintenance and configuration time
• Shift in business policies
• Utility cost
• Software and licensing cost
• Pay-per-usage
Evaluate Cloud Service Providers
Certification against Criteria
• ISO/IEC 27001:2013
• ISO/IEC 27002:2013
• ISO/IEC 27017:2015
• SOC 1/SOC 2/SOC 3
• NIST SP 800-53
• PCI DSS
ISO 27001:2013
In September 2013, ISO 27001 was updated to ISO 27001:2013.
• This international standard provides controls and implementation guidance for both cloud service providers and cloud service customers.
SOC 1/SOC 2/SOC 3
For years, Statement on Auditing Standards 70 was seen as the de facto standard for data
center customers to obtain independent assurance that their data center service provider
has effective internal controls for managing the design, implementation, and execution of
customer information.
The Statement on Auditing Standards 70 (SAS 70) was replaced by Service Organization
Control (SOC) Type 1 and Type 2 reports in 2011.
Information Technology Security Evaluation
The Common Criteria (CC) is an international set of guidelines and specifications (ISO/IEC 15408) developed for evaluating information security products, with the view to ensuring that they meet an agreed-upon security standard for government entities and agencies.
• Protection profiles: Define a standard set of security requirements for a specific type of product,
such as a firewall, IDS, or Unified Threat Management (UTM)
• The evaluation assurance levels (EALs): Define how thoroughly the product is tested. EALs are rated using a sliding scale from 1 to 7, with 1 being the lowest level of evaluation and 7 being the highest.
Information Technology Security Evaluation
Federal Information Processing Standard (FIPS) 140 Publication Series was issued by NIST to
coordinate the requirements and standards for cryptography modules covering both
hardware and software components for cloud and traditional computing environments.
FIPS Levels
Security Level 1
• In level 1, the basic cryptographic module requirements are specified for at least
one approved security function or approved algorithm.
Security Level 2
• Level 2 enhances the physical security mechanisms required by Level 1.
• Level 2 requires tamper-evident mechanisms, such as seals or pick-resistant locks, on perimeter and internal covers to prevent unauthorized physical access to encryption keys.
FIPS Levels
Security Level 3
• Level 3 builds on Levels 1 and 2 to prevent an intruder from gaining access to information and data held within the cryptographic module.
• In this level, physical security controls are used to detect access attempts, allowing an appropriate response to protect the cryptographic module.
FIPS Levels
Security Level 4
• Level 4 provides the highest level of security: physical security mechanisms form a complete envelope of protection around the cryptographic module, detecting and responding to any unauthorized attempt at physical access, for example by zeroizing sensitive key material.
Due to competitive pressure, XYZ Corp is hoping to better leverage the economic and scalable nature of cloud computing. These pressures have driven XYZ Corp toward the consideration of a hybrid cloud model that consists of enterprise private and public cloud use.
Although security risk has driven many of the conversations, a risk management approach has
allowed the company to separate its data assets into two segments: sensitive and non-sensitive.
Cloud Transition Scenario
IT governance guidelines must now be applied across the entire cloud platform and
infrastructure security environment. This also affects infrastructure operational options.
XYZ Corp must now apply cloud architectural concepts and design requirements that would best
align with corporate business and security goals.
As a CCSP, you have several issues to address to guide XYZ Corp through its planned transition
to a cloud architecture.
Cloud Transition Scenario
Which cloud deployment model(s) would need to be assessed to select the appropriate ones for the
enterprise architecture?
Based on the choice(s) made, additional issues may become apparent, such as these:
1. Who will the audiences be?
3. How will secure access to the cloud service be enabled, audited, managed, and removed?
4. When and where will access be granted to the cloud and under what constraints (time, location, platform, and
so on)?
Cloud Transition Scenario
Which cloud service model(s) would need to be chosen for the enterprise architecture?
Based on the choice(s) made, additional issues may become apparent, such as these:
1. Who will the audiences be?
3. How will secure access to the cloud service be enabled, audited, managed, and removed?
4. When and where will access be granted to the cloud service and under what constraints (time, location,
platform, and so on)?
Key Takeaways
You are now able to:
Explain the cloud data life cycle based on the Cloud Security Alliance
(CSA) guidance
Share: Data is exchanged among users, customers, and partners in the sharing
phase.
Archive: Data leaves the active status and enters long-term storage in the
archiving phase.
Create
Data created remotely:
Data created by the user should be encrypted before uploading it to the cloud to protect against
obvious vulnerabilities, including man-in-the-middle attacks and insider threats at the cloud data
center.
Data created within the cloud:
Data created within the cloud via remote manipulation should be encrypted upon creation to obviate
unnecessary access or viewing by data center personnel.
The Create phase involves activities such as categorization and classification; labeling, tagging, and marking; and assigning metadata.
Cloud Data Life Cycle
Store
• Controls such as encryption, access policy, monitoring, logging, and backups should be
implemented to avoid data threats.
• Content is vulnerable to attackers if Access Control Lists (ACLs) are not implemented well, files
are not scanned for threats, or files are classified incorrectly.
Cloud Data Life Cycle
Use
• Controls such as Data Loss Prevention (DLP), Information Rights Management (IRM), and data and file
access monitors should be implemented to audit data access and prevent unauthorized access.
• Data in use is most vulnerable, because it might be transported to and processed at insecure locations
such as workstations.
Cloud Data Life Cycle
Share
• Not all data should be shared, and not all sharing presents a threat.
• It becomes difficult to maintain security for shared data that is no longer held within the organization.
• Technologies such as DLP are used to detect unauthorized sharing, and IRM technologies are used to
maintain control over the information.
Cloud Data Life Cycle
Export restrictions
• International Traffic in Arms Regulations (ITAR), enforced by the U.S. State Department, prohibits defense-related exports; this includes technical data that is protected by cryptographic systems.
• Export Administration Regulations (EAR) enforced by the Department of Commerce of the United States
prohibits export of dual-use items (technologies that could be used for both commercial and military
purposes) which can be technical data and nonphysical entities.
Cloud Data Life Cycle
Import restrictions
• Cryptography (various): Countries have restrictions on importing cryptosystems or material that has been encrypted. It is the security professional's responsibility to know and understand local mandates when doing business with a nation that has crypto restrictions.
• Wassenaar Arrangement: A group of 42 member countries have agreed to mutually inform each other about conventional military shipments to non-member countries. It is not a treaty, and therefore not legally binding, but it requires organizations to notify their respective governments in order to stay in compliance.
Cloud Data Life Cycle
Archive
Location:
• Where is the data being stored?
• Which environmental factors pose risk to that location?
• Which jurisdictional aspects are applicable?
• How far is the archive location?
• Is it feasible to access the data during contingency operations, for
instance, during a natural disaster?
• Is it far enough to be safe from events that impact the production
environment but close enough to reach that data during those events?
Cloud Data Life Cycle
Archive
Format:
Format concerns how the data is stored, including the physical medium, such as a tape backup or magnetic storage.
Concerns related to the format and the medium on which data is stored include:
• Is the medium highly portable and in need of additional security controls against theft?
• Will the data still be in a format that the production hardware can access when needed?
Cloud Data Life Cycle
Archive
Staff:
• If personnel are not employed by the organization, does the contractor implement a
personnel control suite sufficient for background checks, reliance checks, and monitoring?
Cloud Data Life Cycle
Archive
Procedure:
Destroy
Recently, a software repository company was hacked and bankrupted overnight. The bad guys got
hold of the cloud instance through compromised credentials. When they were discovered, they not
only wiped out all production data to cover their tracks, but they also deleted all of the backup data
bankrupting the company overnight due to the loss of all intangible assets as well as the complete
revocation of any type of trust or reputation the company might have had prior to the breach.
• Question: What was the basic mistake that led to the company losing all of its intangible assets?
• Answer: In this instance, the mistake was placing their cloud backups in the same cloud as their
production data.
Key Data Functions
Access: This function views or accesses the data, including copying, file transfers, and other exchanges of information.
Process: This function performs a transaction on the data: it updates the data or uses it in a business processing transaction.
Store: This function stores the data in files and databases.
Storage types by service model:
IaaS: Volume, Object
PaaS: Structured, Unstructured
Ephemeral storage
Volume storage is a virtual hard drive which is allocated by the cloud provider and is attached to the
virtual host.
Data is stored in volumes, also known as blocks. An arbitrary identifier is assigned to each block by
which it is stored and retrieved.
The operating system sees and interacts with the drive in the same way as it would in the traditional
server model. The drive can be formatted and maintained as a file system in the traditional sense and
utilized as such.
Storage Types
Object: Object storage stores data as discrete objects, each combining the data itself with metadata and a unique identifier; objects are accessed through APIs or a web interface rather than through a traditional file hierarchy.
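As one concrete illustration of this API-driven access pattern, the sketch below stores and retrieves an object using AWS S3 via boto3; the bucket and key names are placeholders, and credentials are assumed to be configured:

```python
import boto3

s3 = boto3.client("s3")

# Store an object: data plus a key and metadata, addressed by API call
# rather than by a file-system path.
s3.put_object(Bucket="example-bucket",
              Key="reports/2019/q3.json",
              Body=b'{"status": "ok"}',
              Metadata={"classification": "internal"})

# Retrieve it by key
obj = s3.get_object(Bucket="example-bucket", Key="reports/2019/q3.json")
print(obj["Body"].read())
```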
Structured:
Structured data is an organized and categorized data that can easily be placed within a
database or other storage system with a set of rules and a normalized design.
This data construct allows the application developers to easily import data from other data
sources or nonproduction environments and makes it ready for use in production systems.
The data is typically organized and optimized for searching technologies so that it can be used
without the need for customization or tweaking.
Storage Types
Unstructured:
Unstructured data is information that cannot be easily used in a rigid, formatted database structure. This can be due to the type or size of the files.
Files included in this category are multimedia files (videos and audio), photos, and files produced by
word processing and Microsoft Office products, website files, or anything else that will not fit within a
database structure.
Storage Types
Databases: This is the classic form of storing and managing data within databases. This data is used and maintained by applications.
Data is either generated by the application or imported via the application through interfaces and
loaded into the database.
Storage Types
Content and file storage: The files and content that are held by the application in another means of storage can be made accessible to users.
Storage Types
Ephemeral storage:
This type of storage is relevant for IaaS instances and exists only as long as its instance is up.
It is typically used for swap-files and other temporary storage needs and is terminated with its instance.
Storage Types
A content delivery network (CDN) is a form of data caching, usually a geographically distributed network of
proxy servers which provide copies of data commonly requested by users.
Content is stored in object storage and is then distributed to multiple geographically distributed nodes in order to speed up content delivery for users.
Threats to Storage Types
Threats to storage types include:
• Corruption, modification, and destruction
• Data leakage and breaches
• Theft or accidental loss of media
• Improper treatment or sanitization after use
• Malware attack
Real-World Scenario: Password Storage
In June 2012, Last.fm, a music-centered social media platform, admitted to a data breach and
advised all users to change their passwords after hackers posted Last.fm password hashes to a
password cracking forum.
Data breach notification service LeakedSource obtained the data of more than 43 million user accounts, including weakly hashed passwords, 96 percent of which were cracked within two hours.
Since then, Last.fm has made improvements to how passwords are stored, after admitting that it had been using the MD5 algorithm with no salt.
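For contrast with the unsalted MD5 scheme above, here is a minimal sketch of salted, iterated password hashing using only the Python standard library; the iteration count is an illustrative choice:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)   # unique random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```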
• Encryption will be used for data which moves in and out of the cloud for processing, archiving, or sharing.
Techniques such as SSL/TLS or VPN are used to avoid information exposure or data leakage while in motion.
• It is used for protecting data at rest such as file storage, database information, application components,
archiving, and backup applications.
• This is done for files or objects that must be protected when stored, used, or shared in the cloud.
• It is useful when complying with regulations such as HIPAA and PCI DSS, which in turn require relevant protection
of data traversing untrusted networks and protection of certain data types.
• Encryption provides protection from third-party access via subpoena or lawful interception.
• It is used for creating enhanced mechanisms for logical separation between different customers’ data in the
cloud.
• It helps in logical destruction of data when physical destruction is not feasible or technically impossible.
Encryption Challenges
• The integrity of encryption is heavily dependent on control and management of the relevant encryption
keys, including how they are secured.
• It is challenging to implement encryption effectively when a CSP is required to process the encrypted data.
• Encryption and key management becomes challenging as the data in the cloud is highly portable, that is, it
replicates, is copied, and is backed up extensively.
• Multitenant cloud environments and the shared use of physical hardware pose challenges for the
safeguarding of keys in volatile memory such as random access memory (RAM) caches.
• Secure hardware for encrypting keys may not exist in cloud environments. Software-based key storage is often more vulnerable.
Encryption Challenges
• The nature of cloud environments typically requires you to manage more keys than
traditional environments (access keys, API keys, encryption keys, and shared keys).
• Encryption does not solve data integrity threats. Data can be encrypted and yet be
subject to tampering or file replacement attacks. In this case, supplementary
cryptographic controls such as digital signatures need to be applied, along with
nonrepudiation for transaction-based activities.
Data Encryption in IaaS
• The engine encrypts the data that is written to the storage and decrypts it when it leaves the storage.
• The encryption engine is located on the storage management level, with the keys usually held by the
cloud service provider (CSP).
• The encryption of data in IaaS provides protection from hardware theft or loss.
• The data encryption in IaaS does not protect from CSP administrator access or any unauthorized
access coming from the layers above the storage.
Data Encryption in IaaS
Volume storage encryption encrypts the data which resides on volume storage typically
through an encrypted container, which is mapped as a folder or volume.
Instance-based encryption allows access to data only through the volume OS and therefore provides protection against the following:
File-level encryption:
• The encryption engine is commonly implemented at the client side and preserves the format
of the original file.
• Examples for file-level encryption include Information Rights Management (IRM) and Digital
Rights Management (DRM) solutions.
Data Encryption in IaaS
Application-level encryption:
• The encryption engine resides in the application that is utilizing the object storage.
• It can be integrated into the application component or implemented by a proxy that is responsible for encrypting the data before it goes to the cloud.
Database Encryption
File-level encryption:
• It encrypts the volume or folder of the database with the help of an encryption engine whose keys reside on the instances attached to the volume.
• External file system encryption protects the data from media theft, lost backups, and
external attacks but does not protect against attacks with access to the application layer, the
instance OS, or the database itself.
Database Encryption
Application-level encryption:
It encrypts the data with the help of encryption engine which resides in the application that is utilizing
the database.
Database Encryption
Transparent encryption:
• Transparent encryption is capable of encrypting the entire database or specific portions, such as
tables.
• The encryption engine resides within the database and is transparent to the application.
• Transparent encryption keys usually reside within the instance. Their processing and management
can also be offloaded to an external Key Management Service (KMS).
• Transparent encryption provides effective protection from media theft, backup system intrusions,
and certain database and application-level attacks.
Key Management and Common Challenges
Leading practices coupled with regulatory requirements may set specific criteria for key access, along with
restricting or not permitting access to keys by CSP employees or personnel.
Key storage:
Secure storage for the keys is essential to safeguard the data. In traditional in-house environments, keys
were able to be stored in secure dedicated hardware. This may not always be possible in cloud
environments.
The nature of the cloud results in data backups and replication across a number of different formats. This
can affect the ability for long- and short-term key management.
Key Management Considerations
Keys should be generated by a trusted random number generator within a trusted process.
Throughout the lifecycle, cryptographic keys should never be transmitted in an untrusted environment; they should always remain in a trusted environment.
When considering key escrow or key management "as a service," plan carefully to take into account all relevant laws, regulations, and jurisdictional requirements.
When weighing confidentiality threats against availability threats, remember that loss of access to the encryption keys results in loss of access to the data.
Wherever it is possible, key management functions should be conducted separately from the CSP in order to enforce separation of duties and force collusion to occur if unauthorized data access is attempted.
Key Storage in the Cloud
• Internally managed: The keys are stored on the virtual machine or application component that is also acting as the encryption engine. Typically used in storage-level encryption, internal database encryption, or backup application encryption. Helpful for mitigating the risks associated with lost media.
• Externally managed: The keys are maintained separately from the encryption engine and data. They can be kept on the same cloud platform, internally within the organization, or on a different cloud.
• Managed by a third party: A trusted third party provides a key escrow service. Key management providers use specifically developed secure infrastructure and integration services for key management.
Keys
Data encryption keys (DEKs) themselves shouldn't be stored in the clear; they are encrypted with a KEK (key encrypting key).
The DEK and the KEK must be stored on separate physical systems so that if one is compromised, the other is not.
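A minimal sketch of this DEK/KEK pattern (sometimes called envelope encryption), assuming the third-party cryptography package (pip install cryptography); in practice the KEK would live in an HSM or an external KMS rather than beside the data:

```python
from cryptography.fernet import Fernet

kek = Fernet.generate_key()          # key-encrypting key (kept separately)
dek = Fernet.generate_key()          # data-encryption key

ciphertext = Fernet(dek).encrypt(b"customer record")  # data encrypted with the DEK
wrapped_dek = Fernet(kek).encrypt(dek)                # DEK stored only in wrapped form

# To read the data: unwrap the DEK with the KEK, then decrypt the data
recovered_dek = Fernet(kek).decrypt(wrapped_dek)
print(Fernet(recovered_dek).decrypt(ciphertext))      # b'customer record'
```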
Data Security Strategies
Data masking:
Data masking, or obfuscation, is the process of hiding, replacing, or omitting sensitive information from a specific data set.
Data Security Strategies
Random substitution: The idea of replacing (or appending) the value with a
random value is called random substitution.
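A toy sketch of both techniques follows; the field format and helper names are hypothetical:

```python
import random
import string

def mask_ssn(ssn: str) -> str:
    """Static masking: hide all but the last four digits."""
    return "***-**-" + ssn[-4:]

def random_substitute(ssn: str) -> str:
    """Random substitution: replace the value with a random one of the
    same shape, preserving the format for testing or analytics."""
    return "".join(random.choice(string.digits) if c.isdigit() else c
                   for c in ssn)

print(mask_ssn("123-45-6789"))          # ***-**-6789
print(random_substitute("123-45-6789")) # e.g. 804-17-2256
```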
Data anonymization
Data anonymization is a type of information sanitization and its intent is privacy protection.
Data generally has direct and indirect identifiers: direct identifiers represent private data, whereas indirect identifiers include attributes such as demographic and location data. When used together, indirect identifiers could reveal the exact identity of an individual.
Data anonymization is the process of removing these identifiers, either by encrypting or by removing personally identifiable information from data sets, so that the people whom the data describes remain anonymous.
Data Security Strategies
Tokenization
Tokenization is the process of substituting a sensitive data element with a nonsensitive equivalent, referred
to as a token.
Tokenization is the practice of having two distinct databases; one with the live and actual sensitive data and
another with nonrepresentational tokens mapped to each piece of that data.
The token is usually a collection of random values with the shape and form of the original data placeholder
which can be mapped back to the original data by the tokenization application or solution.
Data Security Strategies
Tokenization
• Mitigating risks of storing sensitive data and reducing attack vectors on that data
Data Security Strategies
Tokenization steps:
1. An application collects or generates a piece of sensitive data.
2. The data is sent to the tokenization server; it is not stored locally.
3. The tokenization server generates a token. The sensitive data and the token are stored in the token database.
4. The tokenization server returns the token to the application, which stores the token in place of the original data.
5. When the sensitive data is needed, an authorized application or user can request the original value back from the tokenization server.
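A toy sketch of the vault behind these steps is shown below; a real tokenization solution adds authentication, authorization, format preservation, and a hardened token database:

```python
import secrets

# Vault mapping random tokens to the real values; downstream systems
# store only the tokens.
_vault: dict[str, str] = {}

def tokenize(sensitive: str) -> str:
    token = secrets.token_hex(8)   # random, not derivable from the data
    _vault[token] = sensitive
    return token

def detokenize(token: str) -> str:
    return _vault[token]           # callable only by authorized services

t = tokenize("4111-1111-1111-1111")
print(t)               # e.g. '9f2c51a07b3e44d1', safe to store in the app DB
print(detokenize(t))   # original card number, from the token database only
```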
Homomorphic encryption
Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, generating an encrypted result which, when decrypted, matches the result of the same operations performed on the plaintext.
Note: Homomorphic encryption is a developing area and does not represent a mature offering for most use
cases.
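As a toy illustration of the property (not a production scheme), unpadded "textbook" RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The tiny parameters below are insecure and purely demonstrative:

```python
# Textbook RSA with toy parameters (insecure; for demonstration only).
p, q = 61, 53
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: e * d == 1 (mod phi(n))

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 12
c = (enc(a) * enc(b)) % n          # compute on the ciphertexts only
assert dec(c) == (a * b) % n       # decryption yields the product of plaintexts
print(dec(c))                      # 84
```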
Data Security Strategies
Bit splitting
Bit splitting involves encrypting data, then splitting the encrypted data into smaller data units and distributing
those smaller units to different storage locations, and then further encrypting the data at its new location.
With this process, the data is protected from security breaches, because even if an intruder is able to retrieve
and decrypt one data unit, the information would be useless unless it can be combined with decrypted data
units from the other locations.
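A simplified stand-in for the splitting step, using XOR-based secret sharing so that every share is required for reconstruction and any single share looks like random noise; commercial products typically encrypt first and use information-dispersal algorithms with redundancy:

```python
import os

def split(data: bytes, n_shares: int = 3) -> list[bytes]:
    # n-1 shares are pure random pads; the last share XORs them with the data.
    shares = [os.urandom(len(data)) for _ in range(n_shares - 1)]
    last = bytearray(data)
    for share in shares:
        last = bytearray(a ^ b for a, b in zip(last, share))
    return shares + [bytes(last)]

def combine(shares: list[bytes]) -> bytes:
    out = bytearray(len(shares[0]))
    for share in shares:
        out = bytearray(a ^ b for a, b in zip(out, share))
    return bytes(out)

parts = split(b"already-encrypted blob")  # store each part in a different location
assert combine(parts) == b"already-encrypted blob"
print(len(parts), "shares; all are required to reconstruct")
```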
Data Security Strategies
Bit splitting
Benefits:
• Data security is enhanced due to the use of stronger confidentiality mechanisms.
• Bit splitting between different geographies and jurisdictions makes it harder to gain access to the complete data set via a subpoena or other legal processes.
• It can be scalable, can be incorporated into secured cloud storage API technologies, and can reduce the risk
of vendor lock-in.
Data Security Strategies
Bit splitting:
Challenges:
• Processing and reprocessing the information to encrypt and decrypt the bits is a CPU intensive
activity.
• The whole data set may not be stored and processed within the same geography by the CSP, which in turn leads to the need to ensure data security on the wire as part of the security architecture for the system.
• Storage requirements and costs are usually higher with a bit splitting system.
• Bit splitting can generate availability risks because all parts of the data may not be available while
decrypting the information.
Real-World Scenario
During an audit of a regulated company that had utilized most of the encryption techniques as part of its PCI compliance efforts, a surprising finding was discovered.
It was found that customer representatives, who could not see Social Security numbers due to
masking, were in fact exposed to full Social Security numbers during phone call conversations with
customers. The customers, even though never asked, would sometimes just blurt out their entire
Social Security number.
No one was aware that the familiar message played at the start of customer service calls, "This call may be recorded for training purposes," meant that those blurted-out numbers were being recorded.
The messages that were being recorded were also being stored in the cloud and were not encrypted.
PCI only has standards for encrypting cardholder data, and no one ever suspected that the voice
messages that were recorded might expose such data.
DLP describes the controls which are put in place by an organization to ensure that certain types of data
(structured and unstructured) remain under organizational controls, in line with policies, standards, and
procedures.
Data Security Strategies
DLP architecture:
• Data at rest (DAR): Sometimes referred to as storage-based data. In this topology, the DLP engine is installed where the data is at rest, usually covering one or more storage subsystems as well as file and application servers.
• Data in use (DIU): Sometimes referred to as client- or endpoint-based. The DLP application is installed on a user's workstation and endpoint devices.
Data Security Strategies
Leading practices:
Leading practices start with the data discovery and classification process. Mature data discovery and classification processes add value to the data security process in cloud deployments.
A well-known, nationally recognized Cancer Research and Treatment Center in the United States was initially
seeking to update and align its information security posture to comply with healthcare regulations and
policies that safeguard patient privacy and data. Like many healthcare institutions, the Cancer Center had
made significant investments in cybersecurity to protect its perimeter and was committed to improving
security in the wake of growing threats from both inside and outside the organization.
Because of the “open” nature of the affiliated university’s network, the Cancer Center’s security management
team decided to invest in DLP technologies for safeguarding both structured and unstructured data which
included standard patient data and intellectual property in the form of genomic research.
Real-World Scenario: DLP Solutions
Within the first few days of implementing the DLP solution, the client’s CIT identified and
flagged the behavior of one specific doctor involved in genomic research at the
Center’s campus. Whether by accident or malicious intent, it appeared the doctor
was transferring proprietary research information to a university outside the
United States; a clear violation of policy.
Data discovery:
It is a business intelligence operation and an interactive user-driven process where data is visually
represented and is analyzed to look for patterns or specific attributes rather than static reporting.
Data discovery enables people to use intuition to find meaningful and important information in data.
It is an iterative process, where initial findings refine the parameters and representation in order to dive
deeper into the data and continue to scope it toward the desired objective.
Data Discovery
Big data: On big data projects, data discovery is important and challenging. Many traditional methods of data
discovery fail when it comes to big data, as the volume of data meant for discovery is large and the diversity of
sources and formats also presents many challenges. Cases in which big data initiatives involve rapid profiling
of high-velocity big data make data profiling harder and less feasible using existing toolsets.
Real-time analytics: The ongoing shift toward real-time analytics has created a new class of use cases for
data discovery. These use cases are valuable but require data discovery tools that are faster, more automated,
and more adaptive.
Agile analytics and agile business intelligence: Data scientists and business intelligence teams are adopting
more agile, iterative methods of turning data into business value. They perform data discovery processes
more often and in more diverse ways, for example, when profiling new data sets for integration, seeking
answers to new questions emerging this week based on last week’s new analysis, or finding alerts about
emerging trends that may warrant new analysis work streams.
Data Discovery
Metadata: Metadata provides information about the data. All relational databases store metadata that
describes tables and column attributes.
Labels: Data elements are grouped with a tag that describes the data. Tags can be applied when the data is created, or added over time to provide additional information and references that describe the data.
Content analysis: In this form of analysis, the data itself is analyzed by employing pattern matching, hashing, statistical, lexical, or other forms of probability analysis.
Example: a Luhn check to verify that a number could be a valid credit card number (see the sketch below).
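A minimal implementation of that Luhn check, as a content-analysis rule might apply it to candidate card numbers:

```python
def luhn_valid(number: str) -> bool:
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: passes the checksum
print(luhn_valid("4111 1111 1111 1112"))  # False
```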
Data Discovery
Identifying where your data is: Not knowing where the data is, where it is going, and where it will be at
any given moment with assurance presents significant security concerns for enterprise data and the CIA
that is required to be provided by the CCSP.
Accessing the data: Not all data stored in the cloud can be accessed easily. Sometimes customers do not have the necessary administrative rights to access their data on demand. Long-term data may be visible to the customer but not available for download in formats acceptable for offline use.
Performing preservation and maintenance: Long-term preservation of data is possible and can be
managed via an SLA with a provider. However, the issues of data granularity, access, and visibility need to
be considered when planning for data discovery against long-term stored data sets.
Data Classification
• Data classification is a process of analyzing data for certain attributes, and then using that to
determine the appropriate policies and controls to apply to ensure its security.
• Data classification is the responsibility of the data owner and takes place in the create phase.
• A data classification process is recommended for implementing data controls such as DLP
and encryption.
• Data classification is also a requirement of certain regulations and standards, such as ISO
27001 and PCI DSS.
Data Classification
Sensitivity: Data is assigned a classification according to the sensitivity of the data, based on the negative
impact an unauthorized disclosure would cause. This classification model is used by the military.
Jurisdiction: Data jurisdiction considers that the geophysical location of the source or storage point of the data might have significant bearing on how that data is treated and handled.
For instance, personally identifiable information (PII) gathered from citizens of the European Union (EU) is subject to EU privacy laws, which are much stricter and more comprehensive than privacy laws in the United States.
Criticality: Data that is deemed critical to organizational survival might be classified in a manner distinct
from trivial, basic operational data.
BIA can help determine which material would be classified this way.
Challenges with Cloud Data
• Data creation: The CCSP needs to ensure that proper security controls are in place so that whoever creates or modifies data must classify or update the data as part of the creation or modification process.
• Classification controls: Controls can be administrative (as guidelines for users who are creating the data), preventive, or compensating.
• Metadata: Classifications can be based on the metadata that is attached to the file, such as owner or location. This metadata should be accessible to the classification process to make the proper decisions.
• Classification data transformation: Controls should be placed to make sure that the relevant property or metadata can survive data object format changes and cloud imports and exports.
• Reclassification consideration: Cloud applications must support a reclassification process based on the data life cycle. Sometimes the new classification of a data object may mean enabling new controls, such as encryption or retention and disposal (for example, customer records moving from the marketing department to the loan department).
Jurisdictional Data Protections for Personally Identifiable Information (PII)
Data Privacy Acts
The EU: 28 member states (countries) comprised the EU, with that number dropping to 27 when the UK formalized leaving the union. The EU treats PII as a human right, with severely stringent protections for individuals.
The EU General Data Protection Regulation (GDPR) is a regulation that requires businesses to protect the
personal data and privacy of EU citizens for transactions that occur within EU member states and exportation
outside the EU.
Companies that collect data on citizens in European Union (EU) countries will need to comply with strict new
rules around protecting customer data by May 25, 2018.
Non-compliant organizations may face administrative fines of up to €20 million or up to 4% of the entity’s
global turnover of the preceding financial year, whichever is higher.
Under existing “right to be forgotten” provisions, people who don’t want certain data about them online can
request companies to remove it.
Data Privacy Acts
GDPR: Roles and Responsibilities
A Data Controller is the legal entity "who either alone, or jointly, determines the purpose for and manner in which personal data is, or will be, processed." A data controller can be an organization or an individual that collects and processes information about customers, patients, and so on.
A Data Processor processes data on behalf of the data controller but does not control the data and cannot change the purpose or use of the particular set of data. Data processors include organizations such as payroll firms, cloud service vendors, and data analytics providers. Data processors report to data controllers, yet both are directly accountable for data protection under GDPR.
A Supervisory Authority (SA), established in each EU Member State, is tasked to enforce GDPR and monitor the application of GDPR rules to protect individual rights with respect to the processing and transfer of personal data within the EU.
Data Privacy Acts
The EU General Data Protection Regulation (GDPR) outlines six data protection principles that organizations need to follow when collecting, processing, and storing individuals' personal data:
1. Lawfulness, fairness, and transparency
2. Purpose limitation
3. Data minimization
4. Accuracy
5. Storage limitations
6. Integrity and confidentiality
The data controller is responsible for complying with the principles and must be able to demonstrate the organization's compliance practices.
Data Privacy Acts
Purpose limitation:
“Personal data shall be collected for specified, explicit, and legitimate purposes and not further
processed in a manner that is incompatible with those purposes.”
Data minimization:
“Personal data shall be adequate, relevant and limited to what is necessary in relation to the
purposes for which they are processed.”
Data Privacy Acts
Accuracy
“Personal data shall be accurate and where necessary, kept up-to-date; every reasonable step
must be taken to ensure that personal data that are inaccurate, having regard to the purposes
for which they are processed, are erased or rectified without delay.”
Storage limitations
“Personal data shall be kept in a form which permits identification of data subjects for no longer
than is necessary for the purposes for which the personal data are processed”.
United States
GLBA (The Gramm-Leach-Bliley Act): Also known as the U.S. Financial Modernization Act, it regulates the protection of consumer personal information held by financial institutions.
The act applies to financial institutions: companies that offer consumers financial products or services such as loans, financial or investment advice, or insurance. These institutions must regulate the flow of information, explain their information-sharing practices to their customers, and safeguard sensitive data. The act has three main parts:
Financial privacy rule: Regulates the collection and disclosure of private financial information.
Safeguards rule: Stipulates that financial institutions must implement security programs to protect such information.
Pretexting provision: Prohibits the practice of pretexting (accessing private information using false pretenses).
Data Privacy Acts
HIPAA (The Health Insurance Portability and Accountability Act): The primary goal of the law is to make it easier for people to keep health insurance, to protect the confidentiality and security of healthcare information, and to help the healthcare industry control administrative costs.
The HIPAA security rule requires appropriate administrative, physical, and technical safeguards to ensure
the confidentiality, integrity, and security of Protected Health Information (PHI).
HIPAA mandates steep federal penalties for noncompliance.
A supplemental act was passed in 2009 called The Health Information Technology for Economic and Clinical
Health (HITECH) Act which provides financial incentives for medical practices and hospitals to convert paper
record keeping systems to digital.
Data Privacy Acts
The Federal Information Security Management Act (FISMA) requires program officials and the head of each agency to conduct annual reviews of information security programs, with the intent of keeping risks at or below specified acceptable levels in a cost-effective, timely, and efficient manner.
According to FISMA, the term “information security” means protecting information and information systems
from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide integrity,
confidentiality, and availability.
Data Privacy Acts
The Sarbanes-Oxley Act (SOX) requires all publicly held companies to establish internal controls and
procedures for financial reporting to reduce the possibility of corporate fraud.
Penalties for noncompliance: Formal penalties for noncompliance with SOX can include fines, removal from
listings on public stock exchanges, and invalidation of D&O insurance policies. Under the Act, CEOs and CFOs
who willfully submit an incorrect certification to a SOX compliance audit can face fines of $5 million and up to
20 years in jail.
Responsibilities of Cloud Services
Data Security
Application Security
Platform Security
Infrastructure Security
Physical Security
Data Rights Management
Digital Rights Management (DRM): This applies to the protection of consumer media, such as music,
publications, videos, and movies.
DRM is most typically used to protect the intellectual property of a vendor’s digital product that is
electronically sold into a wide market, such as music or film. If someone buys a music file online, for example,
DRM built into the servers and players allows the licensor to control how the file is used. The licensor may
specify electronically that a music file can’t be forwarded to others or copied, or that a video file may be
watched for only a certain length of time.
Information Rights Management (IRM): This applies to the organizational side to protect information and
privacy, whereas DRM applies to the distribution side to protect intellectual property rights and control the
extent of distribution.
Information Rights Management
Features:
• IRM allows one to set policies on who can open the document and what they can do with it. IRM provides
granularity that flows down to printing, copying, saving, and similar options.
• IRM contains ACLs and is embedded into the original file. Because of this, IRM is agnostic to the location of the data, unlike other preventive controls that depend on file location.
• IRM provides protection that travels with the file, so the information remains protected on both secured and unsecured networks.
• IRM is useful for protecting sensitive organization content such as financial documents. IRM can also be
implemented to protect emails, web pages, database columns, and other data objects.
• IRM is useful for setting up a baseline for the default Information Protection Policy.
Information Rights Management
Tools:
Auditing: It allows robust auditing of who has viewed information, as well as provide proof as to when and where
they accessed the file.
Expiration: IRM technologies allow for the expiration of access to data. This gives an organization the ability to set
a lifetime for the data and enforce policy controls that disallow it to be accessible forever.
Policy control: It allows an organization to have very granular and detailed control over how their data is
accessed and used. The ability to control, even with different audiences, who can copy, save, print, forward, or
access any data is far more powerful than what is affordable by traditional data security mechanisms.
Protection: With the implementation of IRM technologies and controls, any information under their protection is
secure at all times.
Support: Most IRM technologies support a range of data formats and integration with application packages
commonly used within organizations, such as email and various office suites.
Data Retention, Deletion, and Archiving Policies
Data Protection Policies
• Data retention
• Data deletion
• Data archiving
Data Protection Policies
Data retention:
Data retention involves the storing and maintaining of data for a period of time as well as the methods used
to accomplish this.
• Retention periods
• Data classification
• Data formats
• Data security
Data deletion:
When data is no longer needed in a system, it must be removed in a secure way, so that it can no longer be
accessible or recoverable in the future.
Within a cloud environment, the deletion methods available to the customer are overwriting and cryptographic erasure, also known as cryptographic shredding (a brief sketch follows the list below).
• Regulation or legislation: Certain laws and regulations require specific degrees of safe disposal for certain
records.
• Business and technical requirements: A business policy may require safe disposal of data. Also,
processes such as encryption might require safe disposal of the clear text data after creating the encrypted
copy.
Disposal Options
Using strong magnets for scrambling data on magnetic media such as hard drive
Degaussing
and tapes.
Real-World Scenario: Data Remanence
In 2013, Affinity Health Plan, a managed care plan company based in New York, agreed to pay federal regulators $1.2 million to settle a 2010 incident that affected 344,557 individuals whose data was discovered on the hard drives of copy machines that had been returned to a leasing company.
Affinity discovered the breach after it was informed by a representative of CBS Evening News that, as
part of an investigatory story, CBS had purchased four copy machines from a company that had
leased them to four different organizations, including Affinity. CBS had hired a firm to analyze what
was on their hard drives, discovering that the machine that Affinity had used contained confidential
medical information.
The investigation revealed that Affinity failed to incorporate the ePHI stored on photocopier
hard drives in its analysis of risks and vulnerabilities as required under the HIPAA Security Rule,
and failed to implement policies and procedures when returning the photocopiers to its leasing
agents.
A corrective action plan required Affinity to make an effort to retrieve all hard drives from
leased photocopiers and take measures to safeguard ePHI.
Data archiving:
Data archiving is the process of identifying and moving inactive data out of current production systems
and into specialized long-term archival storage systems.
In 2017, Verizon, a major telecommunications provider, suffered a data security breach with
over 14 million US customers' personal details exposed on the Internet after NICE Systems, a
third-party vendor, mistakenly left the sensitive users’ details open on a server.
Nice Systems (a Verizon partner) logged customer files that contained sensitive and personal
information (including customer names, corresponding cell phone numbers, and specific
account PINs) on an Amazon S3 bucket. For reasons unknown, that bucket was left unsecured,
thus exposing more than 14 million Verizon customer records to anyone who discovered the
bucket.
Question: Between Verizon, NICE Systems, and Amazon, who is accountable for the loss of data?
Answer: Verizon. They should ensure visibility into how partners and other stakeholders keep their data secure.
Data Protection Policies
Legal Hold
A legal hold (also known as a litigation hold) is a process that an organization uses to preserve
electronically stored information (ESI) or paper documents that may be relevant to a new or imminent
legal case. It is intended to prevent deletion or modification of potentially relevant evidence, so as to
ensure that evidence, when needed, will be available.
Failure to adequately preserve data or to implement a proper litigation hold can expose an organization to legal and financial risks, such as scrutiny of the organization's records retention and discovery processes, adverse legal judgments, sanctions, or fines.
Auditability, Traceability, and Accountability of Data Events
Event Sources
IaaS event sources: An IaaS environment gives the customer the greatest access to infrastructure and system logs.
PaaS event sources: A PaaS environment does not offer or expose the same level of customer access to infrastructure and system logs as the IaaS environment, but the same detail of logs and events is available at the application level.
SaaS event sources: Given the nature of a SaaS environment, the amount of log data that is typically available to the cloud customer is minimal and highly restricted.
Security Information and Event Management (SIEM)
Security Event Management (SEM) provides real-time monitoring, correlation of events, notifications, and
console views.
Security Information Management (SIM) provides long-term storage, analysis, and reporting of log data.
Security Information and Event Management (SIEM) technology provides real-time analysis of security
alerts generated by network hardware and applications.
SIEM is sold as software, appliances, or managed services, and is used to log security data and generate
reports for compliance purposes.
Security Event Management (SEM) + Security Information Management (SIM) = SIEM
Security Information and Event Management (SIEM)
Data aggregation: Log management aggregates data from many sources, including network, security,
servers, databases, and applications, providing the ability to consolidate monitored data to help avoid
missing crucial events.
Correlation: This involves looking for common attributes and linking events into meaningful bundles.
This technology provides the ability to perform a variety of correlation techniques to integrate
different sources to turn data into useful information. Correlation is typically a function of the SEM
portion of a full SIEM solution.
Alerting: This is the automated analysis of correlated events and production of alerts to notify
recipients of immediate issues. Alerting can be to a dashboard or via third-party channels such as
email.
Dashboards: Tools can take event data and turn it into informational charts to assist in seeing
patterns or identifying activity that is not forming a standard pattern.
Security Information and Event Management (SIEM)
Compliance: Applications can be employed to automate the gathering of compliance data, producing
reports that adapt to existing security, governance, and auditing processes.
Retention: This involves employing long-term storage of historical data to facilitate correlation of data
over time and to provide the retention necessary for compliance requirements. Long-term log data
retention is critical in forensic investigations because it is unlikely that discovery of a network breach will
coincide with the breach occurring.
Forensic analysis: This is the ability to search across logs on different nodes and time periods based on specific criteria. It avoids having to mentally aggregate log information or search through thousands and thousands of logs.
Chain of Custody
Nonrepudiation is the ability to confirm the origin or authenticity of data to a high degree of certainty. This
typically is done through digital signatures and hashing, to ensure that data has not been modified from its
original form. This concept plays directly into and complements chain of custody for ensuring the validity and
integrity of data.
Real-World Scenario: Hacking of Dropbox
In 2012, Dropbox was hacked, with over 68 million users’ email addresses and passwords leaking on to the
internet.
A Dropbox employee's personal password had been used on both their LinkedIn and their corporate Dropbox
account. The LinkedIn password was obtained via another breach and this was reused to infiltrate the Dropbox
network and eventually steal the files containing the credentials.
Fortunately, Dropbox used the bcrypt hashing algorithm to protect the passwords, which is very resilient to cracking.
Dropbox completed password reset for all those users who signed up for Dropbox prior to mid-2012 and hadn’t
changed their password since. They also encouraged users to enable two-step verification.
Question: What steps did Dropbox adopt to prevent such data breaches?
Answer: Dropbox has taken steps to ensure that its employees don't reuse passwords on their corporate accounts.
Key Takeaways
Cloud infrastructure consists of data centers and hardware used for its functioning
• Management plane
• Virtualization software
• Networking
• Backup power and redundancy
• Multiple Power Distribution Units (PDUs)
• Cloud redundancy
Software-Defined networking (SDN) allows network administrators to programmatically initialize, control, change, and
manage network behavior dynamically via open interfaces and abstraction of lower-level functionality.
This is done by decoupling or disassociating the system that makes decisions about where traffic is sent from the
underlying systems that forward traffic to the selected destination.
Management Plane
In an SDN architecture, network applications communicate with the SDN control software (the control plane) through APIs, and the control plane directs the data plane that forwards traffic.
The management plane connects the environment to the outside world and allows the administrator to remotely manage any or all of the hosts: compute pools, storage volume controllers, and the hypervisors running the VMs.
• Virtualization
• Jurisdiction
• Multi-tenant network
• Natural disasters
• Physical security
• APIs
Buy, Build, and Share
Buy or lease: This is a cheaper alternative and may include limitations on design inputs.
Share: The physical separation of servers and equipment must be included in the design.
Data Center Design Standards
The Uptime Institute, Inc. publishes widely known standards on data center tiers and topologies.
The standard defines four tiers; each successive tier imposes more stringent requirements for security, connectivity, fault tolerance, redundancy, and cooling.
Tier 1
The minimum requirements for a Tier 1 data center are:
• Dedicated space for IT systems
• Uninterruptible Power Supply (UPS) system for line conditioning and backup purposes
• Sufficient cooling system for all critical equipment
• Power generator for extended electrical outages
Data Center Design Standards
• Annual maintenance is necessary to safely operate the data center and requires full shutdown (including critical systems). Without this maintenance, the data center is likely to suffer increased outages and disruptions.
Data Center Design Standards
Tier 2
A Tier 2 data center is slightly more robust than Tier 1.
Features:
• Critical operations do not have to be interrupted for scheduled replacement and maintenance of any of the redundant components.
• Unplanned failures of components or systems result in downtime.
Data Center Design Standards
The Tier 3 data center features both the redundant capacity components of a Tier 2 build and
Tier 1 the added benefit of multiple distribution paths.
Tier 2
Characteristics that differentiate Tier 3 from the prior levels include the following:
Tier 3 • There are dual power supplies for all IT systems.
Tier 4 • The critical operations can continue even if any single component or power element is out of
service.
• Unplanned loss of a component or single system may cause downtime; the loss of a single
system, on the other hand, will cause downtime
Distinction: A component is single node in a multiple node system; while each system will
have a redundant component, not all systems are redundant.
Data Center Design Standards
Tier 4
Fault-tolerant site infrastructure: Every element and system of the facility has integral redundancy such that critical operations can survive both planned and unplanned downtime.
In addition to all Tier 3 features, the Tier 4 data center will include these attributes:
• There is redundancy where multiple components are independent and physically separate from each other.
• There is sufficient power and cooling for critical operations even after the loss of any facility infrastructure element.
• The loss of a single system, component, or distribution element will not affect critical operations.
• The automatic response capabilities will not let critical operations halt due to infrastructure failures.
• Scheduled maintenance can be performed without affecting critical operations.
Data Center Design Standards
Feature                 Tier 1   Tier 2   Tier 3   Tier 4
Compartmentalization      No       No       No      Yes
Continuous cooling        No       No       No      Yes
Real-World Scenario: Tier Type
Verne Global owns and operates a 44-acre data center campus in Keflavik, Iceland. As a strategic
location between the world’s two largest data center markets, Europe and North America, Verne
Global is addressing two key issues facing today’s data revolution, power pricing and availability.
The facility does not use water cooling or mechanical cooling equipment, such as compressors.
Instead, it uses power from Iceland’s renewable energy sources and free air cooling technology to
minimize carbon emissions.
Real-World Scenario: Tier Type
The data center is the world's first dual-sourced, 100% renewably powered data center, according to Verne Global, as it uses Iceland's natural geothermal and hydroelectric power.
Question: Which tier classification best fits this data center?
Answer: Tier 4
Environmental Design Considerations
Environmental Design
Note: Must employ positive air pressure and drainage inside the data
center
Cable Management
• Redundant connectivity from multiple providers to the data center prevents a single point of
failure for network connectivity
• Cabling and connectivity backed by a reputable vendor with guaranteed error-free performance
avoids poor transmission in the data center
The Hypervisor
Risks Associated with Cloud Infrastructure
• Policy and organization risks
• Cloud-specific risks
• Virtualization risks
• Non-cloud-specific risks
Risks Associated with Cloud Infrastructure
Policy and organization risks: The consolidation of IT infrastructure leads to consolidation risks, where a single point of failure can have a bigger impact.
Legal risks: The data of multiple customers may be exposed as it may be required by law enforcement or civil legal authorities.
Cloud-specific risks: Unauthorized facility access, other malicious or non-malicious actions, and network attacks on both the consumer and provider side.
Non-cloud-specific risks: Natural disasters, social engineering, and default passwords.
Cloud Attack Vectors
• Guest breakout
• Identity compromise, either technical or social (for example, through employees of the provider)
• Attacks on the provider's infrastructure and facilities (for example, from a third-party administrator)
Compensating Controls
A compensating control, also called an alternative control, is a mechanism that is put in place to satisfy the requirement for a security measure that is deemed too difficult or impractical to implement at the present time.
Every compensating control must meet four criteria:
• Meet the intent and rigor of the original requirement
• Provide a similar level of defense as the original requirement
• Be above and beyond other requirements
• Be commensurate with the additional risk imposed by not adhering to the requirement
Business Scenario: Security Control
Business Scenario
PCI DSS mandates periodic cryptographic key changes. The objective is to minimize the risk of
someone discovering the keys.
Company XYZ’s initial encryption of card holder’s data in the ABC application database took 16
months to complete. To change keys, it will take at least 16 months to decrypt the database and
another 16 months to encrypt it.
To meet the security requirement, company XYZ requires the Key Encryption Key (KEK) to be
changed annually. The KEK is retrieved from a separate system each time the ABC application is
initiated. The KEK is retained in active memory as long as the ABC application is up and running.
Question: What type of security control is used to meet the security requirement?
Answer: A compensating control.
Cloud provider is responsible for the underlying hardware and network. The remaining services and
security responsibilities either lie with the customer or are split between the customer and cloud
provider.
• Data at rest: The main protection is through the use of encryption technologies (a brief sketch follows this list).
• Data in transit: The main methods of protection are network isolation and the use of encrypted
transport mechanisms.
• Data in use: Protection through secure API calls and web services via the use of encryption, digital
signatures, or dedicated network pathways.
Virtualization Systems Protection
During the 2017 Pwn2Own, an annual hacking contest in Vancouver, a hacker compromised
Microsoft's heavily fortified Edge browser that fetched a prize of $105,000.
The hacker used a JavaScript engine bug to achieve the code execution inside the Edge sandbox,
and used a Windows 10 kernel bug to escape and fully compromise the guest machine.
Business continuity (BC): Allows a business to plan what it needs to do to ensure that its key products and services are not affected in case of a disaster.
Disaster recovery (DR): Allows a business to plan what needs to be done immediately after a disaster to recover from it.
Disaster Recovery
On-premises, cloud as BCDR: The cloud serves as the endpoint for failover services and BCDR activities.
Cloud service consumer, primary provider BCDR: When one region or availability zone fails, the service is restored to another part of that same cloud.
Cloud service consumer, alternative provider BCDR: When a region or availability zone fails, the service is restored to a different cloud.
Real-World Scenario: Ransomware Attack
Real World Scenario
In October 2016, Northern Lincolnshire and Goole NHS Foundation Trust experienced a cyberattack
from a variant of ransomware called Globe2 which infects users via phishing emails.
To prevent the virus from spreading, the trust shut down most of its systems for four days, resulting in the cancellation of 2,800 patient appointments.
Investigation revealed that the cause of the attack was a misconfiguration in the firewall. The trust agreed to conduct penetration testing and to gauge staff cybersecurity awareness.
Question: Why did the trust cancel appointments after the cyber attack?
Answer: The trust did not have a business continuity plan in place to maintain
services.
BCDR Planning Factors
• Actual and potential location of workforce and business partners in relation to the disaster event
Disruptive Events
• Supply system failures: Power distribution outages, communication interruptions, etc.
• Equipment failures: Hardware failure, network failure, utility disruptions, etc.
Relevant Cloud Infrastructure Characteristics
Cloud infrastructure has a number of characteristics that can be distinct advantages in realizing BCDR
Planning, Preparing, and Provisioning
Location:
• Power or network failure can be mitigated in a different zone in the same data center.
• Flood, fire, and earthquakes direct the facilities to be set up at remote locations.
Replication topologies include active/passive and full-replica configurations.
Data Replication
• Data can be replicated at the block level, the file level, and the database level.
Traffic After Failover
Return to normal is where DR ends: the service moves back to the original provider. If the original provider is no longer viable, the DR provider becomes the "new normal." Clean up any resources that are no longer needed, including sensitive data, and document any lessons learned.
Real-World Scenario
On the night of September 16, 2013, lightning struck Cantey Technology, an IT company that hosts
servers for more than 200 clients.
A security alarm notified employees and triggered a call to the fire department.
Lightning surged through the IT company’s network connections, and started a blaze which
destroyed their network closet.
But, Cantey’s clients never felt the effects of the disruption as the business continuity plan moved
the client servers to a remote data center and scheduled continual data backups.
Answer: When all the business elements at the original site have returned
to normal operation.
Creating the BCDR Plan
The BCDR plan is created and maintained iteratively: define scope, gather requirements, analyze, design, test, report, and revise.
• Ensure that security concerns are an intrinsic part of the plan from the start, rather than trying
to retrofit them into the plan after it is developed.
• Include clearly defined roles, risk assessment, classification, policy, awareness, and training.
Gathering Requirements
• Identify critical business processes and their dependence on specific data and services.
• Derive requirements from company's internal policies and procedures, applicable legal, statutory, or
regulatory compliance obligations.
• Influence the business strategy with acceptable RTO and RPO values.
Analyze
• Load capacity at the BCDR site: Can the site handle the needed load to run the application or
system, and is that capacity readily and easily available?
• Network capacity: Can the BCDR site handle the level of network bandwidth required for the production services and the user community accessing them?
• Contractual issues: Will any new CSP address all contractual issues and SLA requirements?
• Legal and licensing risks: There may be legal or licensing constraints that prohibit the data or
functionality to be present in the backup location.
Design
The actual technical evaluation of BCDR solutions is considered and matched to the company’s
requirements and policies.
Following are additional BCDR-specific questions that should be addressed in the design phase:
• How will the BCDR solution be invoked?
• What is the manual or automated procedure for invoking the failover services?
• How will the business be affected during the failover, if at all?
• How will the BCDR be tested?
Test the Plan
The actual production applications and hosting may be augmented or modified to provide additional hooks or capabilities to enable the BCDR plan to work. A cloud security professional performs a cost–benefit analysis to know the extent of modifications required and the benefits those modifications bring to the overall organization.
Testing the plan is essential because:
• It reveals problems with the DR plan
• It allows for proactive troubleshooting
• It helps meet expectations
• It gives management the confidence of recovery in an emergency
Tabletop Exercise or Structured Walk-Through Test
• Primary objective: To ensure that critical personnel are familiar with the BCP and that the
plan accurately reflects the organization’s ability to recover from a disaster
Tabletop Exercise or Structured Walk-Through Test
• Attendance of business unit management representatives and employees who play a critical role in the BCP process
• Discussion about each person's responsibilities as defined by the BCP
• Individual and team training, which includes a walk-through of the step-by-step procedures outlined in the BCP
• Clarification and highlighting of critical plan elements, as well as problems noted during testing
Walk-Through Drill
Recovery Point Objective (RPO): RPO helps determine how much information must be recovered and restored. Another way of looking at RPO is to ask yourself, "How much data can the company afford to lose?"
Maximum Tolerable Downtime (MTD): How long it would take for an interruption in service to kill an organization, measured in time. For instance, if a company would fail because it had to halt operations for a week, then its MTD is one week.
• Once the testing has been completed, a full and comprehensive report detailing all activities,
shortcomings, changes made during the course of testing, and the results of the effectiveness of the
overall BCDR strategy and plan should be presented to the management for review.
• The management will evaluate the effectiveness of the plan, coupled with the goals and metrics deemed
suitable, and the costs associated with obtaining such goals.
• Once the management has a full briefing and the time to evaluate the testing reports, the iterative
process can begin with the changes and modifications to the BCDR plan.
Types of Testing
Uptime is the time when the actual server is up and powered on and available
to the system administrators
Availability is when the servers are “present and ready for use”, “willing to
serve or assist.”
Note: Having a server up and powered on does nothing for your company if the actual services that your site
requires are not up.
Real-World Scenario: Netflix Simian Army
In 2011, Netflix revealed the Simian Army: a set of testing and monitoring applications that randomly
disables Netflix’s production instances to ensure it can withstand failure and provide services without
any customer impact.
• Chaos Gorilla simulates availability zone outage to verify that services automatically re-balance
without impact.
• Latency Monkey induces artificial delays in RESTful client-server communication layer to simulate
service degradation.
• Conformity Monkey finds instances that don’t adhere to best practices and shuts them down.
• Doctor Monkey taps into health checks that run on each instance to detect unhealthy instances.
• Janitor Monkey ensures that Netflix’s cloud environment is running free of clutter and waste.
In February 2016, unknown hackers stole more than $81 million from Bangladesh Bank's account at
the Federal Reserve Bank of New York.
Investigations revealed that Bangladesh's central bank did not have a firewall and used $10 switches
to network computers connected to the SWIFT global payment network.
This made it easy for hackers to steal credentials for the SWIFT messaging system and use malware
to attack the computers used to authorize transactions.
The lack of sophisticated hardware made it harder to trace the origin of the hacks.
The hackers covered their tracks by installing malware on the bank's network to prevent workers
from discovering fraudulent transactions quickly.
Question: What precautions could banks use to protect their SWIFT systems?
Answer: Banks must build multiple firewalls to isolate the SWIFT system from their other networks and keep the machines physically isolated in a separate locked room.
Key Takeaways
A service requestor exchanges SOAP messages with a service provider.
Simple Object Access Protocol (SOAP): A protocol and standard for exchanging information between web services in a structured format.
Features
• Language, platform, and transport independent
• Suited to distributed enterprise environments
• Standardized
• Pre-built extensibility (WS-* standards)
• Built-in error handling
• Automated
API Types
A client communicates with a REST server over HTTP.
Representational State Transfer (REST): A software architecture style consisting of guidelines and best practices for creating scalable web services.
REST vs. SOAP web services:
• REST: Uses only HTTP
• SOAP WS: Uses a SOAP envelope and HTTP to transfer the data
August 31, 2014: Hackers publicized around 500 private pictures of various celebrities. Due to
a lack of two-factor authentication, hackers were able to continuously attempt to sign into the
celebrities' iCloud accounts and guess password combinations.
Apple has now fixed this vulnerability in its Find My iPhone feature.
Question: What could have prevented the brute force attack on that API?
Answer: Enforcing two-factor authentication and rate limiting (for example, locking an account after repeated failed sign-in attempts) would have stopped unlimited password guessing.
Cloud Security Application Deployment: Common Pitfalls
Common pitfalls include: on-premises functionality does not always transfer, not all apps are "cloud-ready," complexities of integration, and overarching challenges.
On-premises does not always transfer (and vice versa):
• Present performance and functionality may not be transferable
• Current configurations and applications may be hard to replicate
Not all apps are "cloud-ready":
• Not developed for cloud-based services
• Not all applications can be forklifted to the cloud
Forklifting: The process of migrating an entire application to the cloud with minimal code changes.
Complexities of integration:
• Multitenancy
• Third-party administrators
Awareness of Encryption Dependencies
• Encryption of data at rest: Addresses encryption of data stored within the CSP network
• Encryption of data in transit: Addresses security of data while it traverses the network
• Data masking (or data obfuscation): The process of hiding original data using random characters
or data
Real-World Scenario: DNS DDoS Attack
A large DNS provider experienced a network attack by thousands of Internet of Things (IoT)
devices infected with Botnet malware.
When combined, these devices attacked a DNS Provider. As a result, some of the largest online
retailers in the country were offline for several hours.
This went on for almost a day, and cost estimates were well into millions of dollars in lost sales.
The software development life cycle (SDLC) phases: planning and requirements analysis, defining, designing, developing, testing, maintenance, and disposal.
Disposal Phase
Crypto-shredding: The deletion of the key used to encrypt data that’s stored in the cloud
Real-World Scenario: Revoking Access
OWASP Top 10 (2017): A10 Unvalidated Redirects and Forwards was dropped; the new A10 is Insufficient Logging and Monitoring.
The Notorious Nine
The CSA's "Notorious Nine" top cloud threats include account hijacking, malicious insiders, and shared technology issues, among others.
Threat Modeling
Threat modeling examines:
• Classification
• Potential actors
• Attack surface
• Potential mitigation
Threat modeling is a process by which potential threats are identified, enumerated, and prioritized from an attacker's point of view.
The purpose is to provide defenders with the probable attacker's profile, likely attack vectors, and the assets desired by the attacker.
STRIDE
STRIDE is a threat classification model developed by Microsoft for thinking about computer security threats:
• Spoofing identity
• Tampering with data
• Repudiation
• Information disclosure
• Denial of service
• Elevation of privilege
Supplemental Security Devices
Recently, a client was experiencing a massive Layer 7 DDoS attack, generating tens of thousands of random HTTP requests per second to their web server.
HTTP flood attacks have little dependency on bandwidth, allowing them to easily take down a server. Server-level caching is unable to stop this type of attack, because the incoming URLs are dynamic and the application forces a reload of the content for every new request that is not in the cache.
The solution was an emergency DDoS protection feature that uses JavaScript challenges to prevent malicious bots from hitting the site. An intelligent log correlation system pinpoints the IP addresses and traffic pattern, blocking the incoming attack at the edge, ahead of the web application.
Secure Sockets Layer (SSL): The standard security technology for establishing an encrypted link between a web server and a browser. It ensures that all data passed between the web server and browsers remains private and integral.
Transport Layer Security (TLS): A protocol that ensures privacy between applications and users on the Internet, and that no third party eavesdrops on or tampers with a message during the communication between a server and a client. It is the successor to SSL.
Virtual Private Network (VPN): A network that is constructed by using public wires (usually the Internet) to connect to a private network, such as a company's internal network.
Data-at-Rest Encryption
• Whole-instance encryption: A method for encrypting all the data associated with the operation and use of a virtual machine
• Volume encryption: A method for encrypting a single volume on a drive
• File or directory encryption: A method for encrypting a single file or directory on a drive
Sandboxing and Application Virtualization
Sandboxing
Sandboxing is the segregation and isolation of information or processes from other components within
the same system or application.
Where is it used?
Application Virtualization
• It can be used to isolate or sandbox an application to observe the processes the application performs.
• Examples:
o Wine, which allows some Microsoft applications to run on a Linux platform
o Microsoft App-V
o XenApp
Federated Identity Management
A model that enables companies with different technologies, standards, and use-cases to share their
applications by allowing individuals to use the same login credentials across security domains.
The main purpose is to allow registered users of a certain domain to access information
from other domains without having to provide extra administrative user information.
Federation Standards
Security Assertion
• An XML-based framework for communicating user authentication,
Markup Language
entitlement, and attribute information
(SAML) 2.0
• Lets developers authenticate their users across websites and apps without
OpenID Connect
having to own and manage password files
SAML Authentication
Acme Corp. set up a relationship with Google that would allow Acme Corp. to do just the same using SAML. Whenever a user attempts to access the corporate Gmail account, Gmail redirects the request to Acme's SSO service, which authenticates the user and relays a SAML response:
3. The browser redirects to the SSO URL.
4. Acme parses the SAML request and authenticates the user.
5. Acme generates a SAML response and returns the encoded response to the browser.
6. The browser sends the SAML response to Google.
7. Google verifies the SAML response.
Case Study
• At the University of Arizona Libraries, study room management had become a time-consuming task.
• In late August 2012, they decided to implement a web application which enabled the students to find
and reserve unmediated study spaces from their smartphones anytime.
• For the University of Arizona, an important feature was the integration of Shibboleth, an open-source,
single sign-on authentication system for complex federated environments based on the Security
Assertion Markup Language (SAML).
Identity and Access Management
Identification, Authentication, and Authorization
• The general principle is to add an extra level of protection to verify the legitimacy of a transaction.
• To be a multifactor system, users must be able to provide at least two of the following factors:
o Something you know (for example, a password or PIN)
o Something you have (for example, a token or smartphone)
o Something you are (for example, a fingerprint or other biometric)
bunq, the world’s first mobile-only bank faced the issue of how to securely authenticate its customers.
The company chose face biometrics, as it is supported on smartphones. However, face biometrics are
easy to spoof and have high rates of false rejection.
An alternative to this was Veridium’s 4 Fingers TouchlessID which collects the user’s fingerprints from
the rear camera and the LED flash of a smartphone.
Switching from face to hand recognition reduced complaints of failed authentication attempts by
up to 90%.
Question: What is the False Acceptance Rate (FAR)?
Answer: FAR is the measure of the likelihood of the biometric security system incorrectly accepting an access attempt by an unauthorized user.
Case Study
• In January 2010, Google announced that the Chinese government had been targeting it to gain access
to the email accounts of human rights activists working in China and around the world.
• The attacks led to a number of changes at Google, in terms of security infrastructure and policy. As a
result, Google decided to shut down operations in China.
• Later that year, Google introduced its two-factor authentication system for business accounts, and
then to the general public.
• In the years since, major companies such as Microsoft, Twitter, Apple, and Amazon offer 2FA options
across a multitude of online platforms. Competition on security features, high-profile breaches, and
the everyday occurrence of account hijackings have led to demands for better authentication.
• However, many consumers are not adopting these options because they are less convenient and more complex than a password.
Cloud Access Security Broker (CASB) and ISO/IEC 27034-1
Cloud Access Security Broker (CASB)
A Cloud Access Security Broker (CASB) is an on-premise or cloud-based security policy enforcement point that is placed between cloud service consumers and cloud service providers. It ensures that the network traffic between on-premise devices and the cloud provider complies with the organization's security policies. A CASB acts as a gatekeeper, allowing the organization to extend the security controls of their on-premise infrastructure to the cloud.
Auto-discovery is used by CASB to identify cloud applications in use and identify high-risk applications, users, and
other key risk factors. Example: Some CASBs can benchmark an organization’s security configurations against
industry best practices and regulatory requirements.
Organizations are increasingly taking help of CASB vendors to address cloud service risks, enforce security
policies, and comply with regulations.
ISO/IEC 27034-1
Provides one of the most widely accepted set of standards and guidelines for secure application development
Key elements:
Application Security Testing
Application Security Testing
Static Application Security Testing (SAST):
• A white-box test
• Performs an analysis of the application source code, byte code, and binaries without executing the application code
• Determines coding errors and omissions that are indicative of security vulnerabilities
• Can be used to find XSS errors, SQL injection, buffer overflows, unhandled error conditions, and potential backdoors
Application Security Testing
Dynamic Application Security Testing (DAST):
• A black-box test
• Discovers individual execution paths in the application being analyzed
• Used against applications in their running state
• Considered effective when testing the exposed HTTP and HTML interfaces of web applications
OWASP Testing Guide
Software Supply Chain Management
Software Supply Chain (API) Management
• Cloud-based systems and modern web applications consist of software, API calls, components, and external data sources.
• The integration of external API calls and web services allows an application to leverage enormous numbers of external data sources and functions.
• These external sources, whether on-premise or in-cloud, are outside the control of the developer or the organization.
• Software components produced without secure software development guidance can create security risks throughout the supply chain.
Real-World Scenario: Hacking of Jeep Cherokee
Hackers Charlie Miller and Chris Valasek demonstrated a "digital crash-test dummy" to highlight vulnerabilities in the Internet-connected entertainment and navigation systems featured in many new vehicles.
Following this incident, Chrysler released a software update to improve vehicle security and
recalled 1.4 million recent models.
Question: What distinguishes white hat hackers from black hat hackers?
Answer: White hat hackers use their skills to improve security by exposing vulnerabilities before malicious hackers, known as black hat hackers, can detect and exploit them.
Key Takeaways
You are now able to:
Identify the training and awareness required for successful cloud
application security deployment
Describe the software development life cycle process for a cloud
environment
Demonstrate the use and application of the software development
life cycle
Identify the requirements for creating secure identity and access
management solutions
Describe specific cloud application architecture
Best Practices for Servers
Host hardening: To achieve this, remove all nonessential services and software from the host.
Host patching: To achieve this, install all patches provided by the vendors whose hardware and software are being used to create the host server.
Host lockdown: To achieve this, implement host-specific security measures, such as:
• Blocking non-root access to the host under most circumstances (local console access only via a root account)
• Allowing the use of secure communication protocols and tools to access the host remotely, such as PuTTY with secure shell (SSH)
• Configuring and using a host-based firewall
• Using Role-Based Access Controls (RBACs) to limit user access to the host and what permissions users have
Secure ongoing configuration maintenance:
• Patch management of hosts, guest OSs, and application workloads running on them
• Periodic vulnerability assessment scanning of hosts, guest OSs, and applications
Trusted Platform Module (TPM)
Features of TPM:
• Provides full-disk encryption capabilities
• Provides integrity and authentication to the boot process
• Keeps hard drives locked until system verification and authentication is completed
TPM includes a unique RSA key burned into it, which is used for asymmetric encryption. Additionally, it can generate, store, and protect other keys used in the encryption and decryption process.
Hardware Security Modules (HSM)
An HSM is a security device that manages, generates, and securely stores cryptographic keys.
High-performance HSMs are external devices connected to a network using TCP/IP. Smaller HSMs are expansion cards installed within a server, or devices plugged into computer ports.
HSMs can be added to a system or a network, but if a system didn't ship with a TPM, it's not feasible to add one later.
• Implementing a virtualized system allows the storage traffic to be segregated and isolated on its own LAN
• Prioritizing resolution of latency issues for storage systems over typical network traffic
• Offering a built-in encryption capability by storage controllers ensures confidentiality of the data transiting
the controller
Real-World Scenario: Data Theft
In March 2011, Health Net, a provider of managed healthcare services, began notifying 1.9 million patients that nine server drives containing personal and health data were stolen from a data center managed by IBM in Rancho Cordova, California.
The missing drives contained names, addresses, social security numbers, financial information, and health data of customers, employees, and healthcare providers.
This incident was the second data breach by Health Net in two years. In 2009, Health Net’s Connecticut office
had lost a portable hard drive containing health and financial data on 1.5 million policyholders. The
Connecticut attorney general filed a lawsuit in federal court, as the company had not only failed to protect the
personal data but also did not notify affected individuals in a timely manner. Health Net agreed to pay $2.5M
in damages and offer stronger consumer protections to settle the lawsuit.
iSCSI is a protocol that uses TCP to transport SCSI commands. It enables the use of the existing TCP/IP
infrastructure as a SAN.
iSCSI makes block devices available via the network; unlike Network Attached Storage (NAS), which
presents devices at the file level.
iSCSI must be considered as a local-area technology, not a wide-area technology, because of latency
issues and security concerns.
Using dedicated VLANs is a good way to segregate iSCSI traffic from general traffic.
Initiators and Targets
Initiator
The consumer of storage, typically a server with an adapter card called a Host Bus Adapter (HBA). The initiator commences a connection over the fabric to one or more ports on your storage system, which are called target ports.
Target
These are the ports on the storage system that deliver storage volumes (called target devices or Logical Unit
Numbers [LUNs]) to the initiators.
Oversubscription
Oversubscription occurs when more users are connected to a system than can be fully supported at the same time. For example, if 20 servers with 10 Gb connections share a single 40 Gb uplink, the uplink is oversubscribed at a 5:1 ratio.
Networks and servers are almost always designed with some amount of oversubscription with the
assumption that not all users need the service simultaneously.
Oversubscription is permissible on general-purpose LANs.
Best practices:
• To have a dedicated local area network (LAN) for iSCSI traffic
• Not to share the storage network with other network traffic, such as management, fault tolerance, or
vMotion/Live Migration
iSCSI Implementation Considerations
Virtual switches connect the physical Network Interface Cards (NICs) in the host server to the
virtual NICs in VMs. Switches support 802.1Q tagging, which allows multiple VLANs to be used on a
single physical switch port to reduce the number of physical NICs needed in a host.
Best practices:
• Utilizing several types of ports and port groups separately rather than all together on a single
virtual switch offers higher security and better management.
• Achieving virtual switch redundancy by assigning at least two physical NICs to a virtual switch,
with each NIC connecting to a different physical switch.
Other Virtual Network Security Best Practices
• Live VM migrations between hosts are sent in clear text over specific network ports, which can allow an attacker to "sniff" the data or perform a man-in-the-middle attack while a live migration occurs.
• Lock down access to virtual switches so that an attacker cannot move VMs from one network to another; this also prevents VMs from straddling an internal and an external network.
Defense in depth
Implement the tools used to manage the host as part of a larger architectural design that mutually reinforces
security at every level of the enterprise.
Access control
Secure the tools and tightly control and monitor access to them.
Monitor and track the use of the tools throughout the enterprise to ensure proper usage.
Maintenance
Update and patch the tools as required to ensure compliance with all vendor recommendations and security
bulletins.
Cloud Environments: Sharing a Physical Infrastructure
Exposing your data in an environment shared with other companies can give the government reasonable
cause to seize your assets, because another company has violated the law.
Compatibility
Storage services provided by one cloud vendor may be incompatible with another vendor’s services should
you decide to move from one to the other.
Cloud Environments: Sharing a Physical Infrastructure
Control
If information is encrypted while passing through the cloud, does the customer or cloud vendor control the
encryption and decryption keys?
Most consumers probably want their data encrypted both ways across the internet using the secure sockets
layer (SSL) protocol.
Log data
SaaS providers should supply log data to their administrators and customers in a real-time, straightforward manner, because the provider's logs might not otherwise be externally accessible, which makes monitoring difficult.
Cloud Environments: Sharing a Physical Infrastructure
Access to logs is required for PCI DSS compliance. Security managers need to make sure to negotiate access
to the provider’s logs as part of any service agreement.
Cloud applications constantly gain new features, and users must keep applications up to date to make sure that they are well protected.
A secure software development life cycle may not be able to provide a security cycle that keeps up with changes that occur so quickly. This means that users must constantly upgrade, because an older version may not function or may not protect the data.
Cloud Environments: Sharing a Physical Infrastructure
Failover technology
Administering failover technology is a component of securing the cloud that is often overlooked.
Security needs to be moved to the data level so that enterprises can be sure that their data is protected
wherever it goes.
Compliance
SaaS makes the process of compliance more complicated because it is difficult for a customer to discern
where his data resides on a network controlled by the SaaS provider, or a partner of that provider, which
raises all sorts of compliance issues of data privacy, segregation, and security.
Some countries have strict limits on what data can be stored about their citizens and for how long.
Cloud Environments: Sharing a Physical Infrastructure
Regulations
Compliance with government regulations, such as the Sarbanes-Oxley Act (SOX), the Gramm-Leach-Bliley Act
(GLBA), the Health Insurance Portability and Accountability Act (HIPAA), and industry standards such as the PCI
DSS are much more challenging in the SaaS environment.
Outsourcing
Outsourcing means losing significant control over data.
It is necessary to work with a company’s legal staff to ensure that appropriate contract terms are in place to
protect corporate data and provide for acceptable SLAs.
Cloud Environments: Sharing a Physical Infrastructure
Placement of security
Cloud-based services result in many mobile IT users accessing business data and services without traversing
the corporate network.
Virtualization
Virtualization efficiencies in the cloud require VMs from multiple organizations to be colocated on the same
physical resources.
Administrative access is through the Internet rather than the controlled and restricted direct or on-premises
connection that is adhered to in the traditional data center model.
VM
The dynamic and fluid nature of VMs makes it difficult to maintain consistent security and to ensure that records can be audited.
Proving the security state of a system and identifying the location of an insecure VM is challenging. The
colocation of multiple VMs increases the attack surface and risk of VM-to-VM compromise.
Real-World Scenario: EC2 Vulnerability
On April 8th, 2011, Amazon sent out an email to its Elastic Compute Cloud customers acknowledging the
presence of compromised images in the Amazon AMI community. AMI stands for Amazon Machine
Image, a pre-configured virtual guest.
The infected image comprised an Ubuntu 10.04 server running Apache and MySQL along with PHP, especially suited for hosting a website.
The "certified pre-owned" image had the publisher's public key left in root's and the ubuntu user's .ssh authorized_keys files, allowing the publisher to log into any server instance running his image as the root user.
Real-World Scenario: EC2 Vulnerability
The publisher claimed this was purely an accident, a mere result of his inexperience. While this may or
may not be true, this incident exposes a major security hole within the EC2 community.
Question: What can organizations learn from this incident?
Answer: Organizations must analyze the risk associated with pre-configured cloud-based systems and consider the option of configuring the system from the "ground up," beginning with the base operating system.
Securing Network Configuration (Part 1)
Securing Network Configuration
• VLAN
• TLS
• DNS and DNSSEC
• IPSec
Securing Network Configuration
VLAN
• VLAN is an Institute of Electrical and Electronics Engineers (IEEE) standard networking scheme with specific
tagging methods that allow routing of packets to only those ports that are part of the VLAN.
• VLANs do not guarantee that data will be transmitted securely and will not be tampered with or
intercepted while on the wire.
Securing Network Configuration
DNS
Common DNS threats include:
• Data modification: An attempt by an attacker to spoof valid IP addresses in IP packets that the attacker
has created. This gives these packets the appearance of coming from a valid IP address in the network.
• Redirection: When an attacker can redirect queries for DNS names to servers that are under the control
of the attacker
• Spoofing: When a DNS server accepts and uses incorrect information from a host that has no authority to
give that information
Securing Network Configuration
DNSSEC
• DNSSEC is a suite of extensions that adds security to the Domain Name System (DNS) protocol by enabling
DNS responses to be validated.
• DNSSEC provides origin authority, data integrity, and authenticated denial of existence.
• In the presence of DNSSEC, the DNS protocol is much less susceptible to certain types of attacks.
• Validation of DNS responses occurs through the use of digital signatures that are included with DNS
responses.
• It does not address confidentiality or availability.
Securing Network Configuration
IPSec
• IPSec includes protocols for establishing mutual authentication at the beginning of the session and
negotiating cryptographic keys to be used during the session.
• IPSec supports network-level peer authentication, data origin authentication, data integrity, encryption,
and replay protection.
• A major difference between IPSec and other protocols such as TLS is that IPSec operates at the internet network layer rather than the application layer, allowing end-to-end encryption of all communications and traffic.
Real-World Scenario: DNS Attack
In 2013, the New York Times' website was hit with a Domain Name System (DNS) attack and became inaccessible.
The attack resulted in hackers changing the DNS records for several domain names, including nytimes.com. This resulted in traffic to those websites being temporarily redirected to a server under the attackers' control.
Real-World Scenario: DNS Attack
The affected DNS records were reverted soon after, but this highlights the fact that with most DNS providers and for most DNS records, there is no real security on DNS addresses.
Question: Could the New York Times have prevented this attack?
Answer: They could have done a registry lock which makes it very difficult for anyone to alter the
DNS records that govern the links between a domain name and an IP address.
Clustered Host
Clustering
Distributed Resource Scheduling (DRS) provides high availability, scaling, management, workload distribution, and balancing of jobs and processes.
As loads change, virtual hosts can be moved between physical hosts to maintain proper balance, which:
• Provides highly available resources to workloads
• Balances workload for optimal performance
• Scales and manages computing resources without service disruption
Dynamic Optimization (DO)
Dynamic optimization is the process through which the cloud environment is constantly maintained to
ensure resources are available when and where needed.
With auto-scaling and elasticity, cloud environments can change from one moment to the next through automated means, without any human intervention or action.
With rapid elasticity, capabilities can be rapidly and elastically provisioned, in some cases automatically,
to scale rapidly outward and inward, commensurate with demand.
Storage Cluster
Clustered storage is the use of two or more storage servers working together to increase performance,
capacity, or reliability.
Clustering distributes workloads to each server, manages the transfer of workloads between servers, and
provides access to all files from any server regardless of the physical location of the file.
Maintenance mode refers to the state a physical host is placed in when upgrades, patching, or other operational activities are necessary.
All operational instances are removed from the system/device before entering maintenance mode.
While in maintenance mode, customer access is blocked and alerts are disabled (although logging is still
enabled).
Note: It is important to test that the system or device retains all the functionality necessary for customer purposes before moving it back to normal operation from maintenance mode.
Patch Management
Patch management is the process of identifying, acquiring, installing, and verifying patches for
products and systems.
Best Practices:
• Test the patches before pushing them out
• Perform system backup before applying patch
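As an illustration of these best practices, the sketch below simulates an upgrade to list pending patches before anything is applied and stubs out a pre-patch backup step. It assumes a Debian-based host with apt-get; the snapshot function is a hypothetical placeholder for a site-specific backup mechanism.

import subprocess
from datetime import date

def list_pending_patches():
    """Dry-run the upgrade to see which patches would be applied (no changes made)."""
    result = subprocess.run(
        ["apt-get", "-s", "upgrade"],  # -s = simulate only
        capture_output=True, text=True, check=True,
    )
    # Simulated installs are reported on lines beginning with "Inst".
    return [line for line in result.stdout.splitlines() if line.startswith("Inst ")]

def snapshot_before_patch(host):
    """Hypothetical placeholder: trigger a backup/snapshot before patching."""
    print(f"{date.today()}: snapshot requested for {host} before patching")

if __name__ == "__main__":
    snapshot_before_patch("web-01")   # back up first, per the best practice above
    for patch in list_pending_patches():
        print(patch)                  # review/test this list before rollout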
Patch Management
Manual:
• Trained and experienced personnel are more trustworthy than a mechanized tool and may recognize when anomalous activity occurs
• Slower than the automated approach and may not be as thorough
Performance Monitoring
Performance monitoring is essential for the secure and reliable operation of a cloud environment.
The monitoring function can be outsourced to a trusted third party for 24/7 monitoring of the cloud environment.
Use the following approaches to assess risk while outsourcing:
• Having HR check references
• Examining the terms of any SLA or contract being used to govern service terms
• Executing some form of trial of the managed service in question before implementing into
production
Real-World Scenario: Sony Hack Attack
In late 2014, Sony Pictures suffered one of the worst corporate hack attacks in history when attackers going by the name Guardians of Peace employed a previously undisclosed vulnerability to break into Sony's systems.
These types of vulnerabilities are known as zero-day because the original programmer has zero days after learning about the flaw to patch the code before it can be exploited in an attack. These flaws are usually the result of errors made during the writing of the software, giving an attacker wider access to the rest of the software. Often, they remain undetected until an attack has occurred.
Real-World Scenario: Sony Hack Attack
The attackers first crippled its network and then released sensitive corporate data on public file-sharing
sites, including four unreleased feature films, business plans, contracts, and the personal emails of top
executives.
Knowledge Check (True/False)
When dealing with encrypted traffic, an IPS and IDS face considerable challenges because signature-based analysis is effectively eliminated, as the system cannot perform inspection.
Answer: True
Honeypot
It is a computer system that is set up to act as a decoy to lure cyber attackers and to detect, deflect, or study
attempts to gain unauthorized access to information systems.
A honeypot is isolated from the production system. It is designed in such a way that the attacker thinks it is part of the original production system and contains valuable data. However, the data on a honeypot is bogus, and the honeypot is set up on an isolated network so that any compromise of it cannot impact other systems within the environment.
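For illustration only, a minimal low-interaction honeypot can be a socket that accepts connections on a decoy port and logs each attempt. The port choice here is arbitrary, and a real deployment would sit on an isolated network and use purpose-built tooling.

import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2222  # decoy port (hypothetical choice)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        # Log every connection attempt; a real honeypot would also capture
        # payloads and forward events to central monitoring.
        conn, addr = srv.accept()
        with conn:
            print(f"{datetime.now(timezone.utc).isoformat()} probe from {addr[0]}:{addr[1]}")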
Honeypot
Honeypot advantages:
• Diverts attackers' efforts: An intruder will spend energy on a system that causes no harm to production servers.
• Educates: A properly designed and configured honeypot provides data on the methods used to attack systems.
• Detects insider attacks: Since most IDS systems have difficulty detecting insider attacks, a honeypot can provide valuable information on the patterns used by insiders.
• Creates confusion for attackers: The bogus data a honeypot provides to attackers can confuse and confound them.
• Deters attacks: Fewer intruders will invade a network that they know is designed to monitor and capture their activity in detail.
Honeynet
Honeynet is an extension of the honeypot. It groups multiple honeypot systems to form a network that is used
in the same manner as the honeypot, but with more scalability and functionality.
Enticement means that you have made it easier for the bees (the attackers) to conduct their normal activity.
Enticement is not necessarily illegal, but does raise ethical arguments and may not be admissible in court.
Entrapment is where the intruder is induced or tricked into committing a crime that the individual may have had
no intention of committing.
Entrapment is illegal and cannot be used when charging an individual with hacking or unauthorized activity.
Security Information and Event Management (SIEM)
SIEM software products and services combine security information management (SIM) and security event
management (SEM).
SIEM gathers logs from various devices (servers, firewalls, routers, etc.) and attempts to correlate the log data
and provide real-time analysis of security alerts.
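A toy illustration of the correlation idea: scan collected log lines from different devices and flag source IPs with repeated failed logins. The log format and threshold are assumptions for the sketch, not any particular SIEM product's API.

import re
from collections import Counter

# Hypothetical log format: "... FAILED LOGIN ... from <ip>"
FAILED = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")

def correlate(log_lines, threshold=5):
    """Count failed logins per source IP and flag IPs at or above the threshold."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

logs = ["sshd: FAILED LOGIN for root from 203.0.113.7"] * 6
print(correlate(logs))  # {'203.0.113.7': 6} -> raise an alert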
Log Management
Service models
IaaS
The cloud customer is responsible for log collection and maintenance as far as virtual machine and application logs are concerned. The SLA between the cloud customer and the cloud provider will have to clearly spell out what logs are available within an IaaS environment, who has access and is responsible for collecting them, who is responsible for maintaining and archiving them, and what the retention policy is.
Log Management
Service models
PaaS
The cloud provider needs to collect the operating system logs and possibly logs from the application, depending on how the PaaS is implemented and what application frameworks are used. Therefore, the SLA will have to clearly define how those logs are collected and given to the cloud customer and what degree of support is available with such efforts.
Log Management
Service models
SaaS
All logs will have to be provided by the cloud provider per SLA requirements. With many SaaS implementations, the logs are to some degree exposed by the application itself to administrators or account managers of the application. These logs might be limited to a set of user functions or just high-level events. Anything more detailed should be part of the SLA.
Orchestration
Orchestration
Orchestration refers to the extensive use of automation for tasks such as provisioning, scaling, allocating resources, and even customer billing and reporting.
• Cloud services are intended to scale up arbitrarily and dynamically, without requiring direct human intervention to do so.
• Cloud service delivery includes fulfillment, assurance, and billing.
• Cloud service delivery entails workflows in various technical and business domains.
Availability of Guest OS
High Availability
High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, that is higher than normal.
Features of HA:
• Its goal is to minimize downtime, not prevent it
• HA systems are bound by SLAs
• The failure of one or more components does not affect the performance of the system
• It eliminates single points of failure
• It detects failures as they occur
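A minimal sketch of the failover idea behind HA: try a primary endpoint and fall back to a standby if it fails. The URLs are placeholders; production HA relies on load balancers and health checks rather than client-side loops like this.

import urllib.request

ENDPOINTS = [
    "https://primary.example.com/health",   # placeholder primary
    "https://standby.example.com/health",   # placeholder standby
]

def fetch_with_failover(urls, timeout=2):
    """Return the first endpoint that answers; raise if all fail."""
    last_err = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status, url
        except OSError as err:  # timeout, connection refused, DNS failure...
            last_err = err
    raise RuntimeError(f"all endpoints failed: {last_err}")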
Fault Tolerance
Fault tolerant systems offer higher levels of resiliency and recovery. They use a high degree
of hardware redundancy and specialized software to provide near-instantaneous recovery
from any single hardware or software unit failure.
Features of fault tolerance:
• It is expensive
• It addresses only hardware failures, not software failures
• The performance of the system could degrade when one or more components fail
• RAID 1, for example, provides fault tolerance against disk failures by mirroring data across multiple disks
• Another example is running a MySQL slave that can be promoted to master if the master fails
High Availability vs. Fault Tolerance
Management Processes: Information Security Management, Change Management, Incident Management, Continuity Management, and Continual Service Improvement
Change Management
Change management is an approach that allows organizations to manage and control the impact of change through a structured process.
Continuity Management
Continuity management, or business continuity management, is focused on planning the successful restoration of systems or services after an unexpected outage, incident, or disaster.
Information
Security
Management
Continual
Continuity Service
Management Improvement
Management An incident is defined as any event that can
lead to the disruption of an organization’s
services or operations that impact either
Change Incident
internal or public users.
Management Management
Incident management is focused on limiting
the impact of these events on an organization
and their services, and returning their state to
full operational status as quickly as possible.
Real-World Scenario
The company uses an incident management system from Simba, which alerted the staff to the
fire, evaluated the impact of the incident, automatically activated incident management response
teams, and sent emergency alerts to Simba’s 1,600 Germany-based employees.
Real-World Scenario
Given the proximity of the fire to the company facility, these procedures included an orderly
emergency evacuation of the premises to a pre-arranged recovery site.
Despite best efforts, the fire did indeed reach the building, ultimately knocking out the entire
switching center. However, with an effective incident management system and the redundant
equipment put in place, combined with a redundant network design, the company was able to
fully restore service within six hours.
Management Processes (continued): Configuration Management, Problem Management, Availability Management, and Capacity Management
Capacity Management
Capacity management is focused on the required system resources needed to deliver performance at an acceptable level to meet SLA requirements, and on doing so in a cost-effective and efficient manner.
Risk Management Process: Framing Risk and Risk Assessment
Framing Risk
Framing risk is the first step in the risk management process designed to produce a risk-management
strategy intended to address how organizations assess, respond to, and monitor risk.
Risk Assessment
Risk assessment is the process used to identify, estimate, and prioritize information security risks.
• Single Loss Expectancy (SLE): Represents an organization's loss from a single realized threat
• Exposure Factor (EF): Percentage of loss that a realized threat could have on a certain asset
• Annualized Rate of Occurrence (ARO): Value for the estimated frequency of a specific threat occurring within one year
• Annualized Loss Expectancy (ALE): Annual expected financial loss to an organization from a threat (ALE = SLE × ARO)
Problem:
A hacker compromises a server whose data is encrypted. Consider the following conditions:
• Asset value = $6,000
• EF = 50%
• ARO = 10% chance of hacking in one year
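A quick check of the arithmetic, assuming the question asks for the annualized loss expectancy:

# Quantitative risk assessment: SLE = AV x EF, ALE = SLE x ARO
asset_value = 6_000          # AV in dollars
exposure_factor = 0.50       # EF: 50% of the asset's value is at risk
aro = 0.10                   # ARO: 10% chance of occurrence per year

sle = asset_value * exposure_factor   # Single Loss Expectancy
ale = sle * aro                       # Annualized Loss Expectancy

print(f"SLE = ${sle:,.0f}")   # SLE = $3,000
print(f"ALE = ${ale:,.0f}")   # ALE = $300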
Scenario
ABC Corp. has been experiencing increased hacking activity as indicated by firewall and IPS logs
gathered from its managed service provider. The logs also indicate that the company has
experienced at least one successful breach in the past 30 days. Upon further analysis of the breach,
the security team has reported to senior management that the dollar value impact of the breach
appears to be $10,000.
Senior management has asked the security team to come up with a recommendation to fix the issues
that led to the breach. The recommendation from the team is that the countermeasures required to
address the root cause of the breach will cost $30,000.
Scenario
Answer: Taking the loss encountered of $10,000 per month, the annual loss expectancy is $120,000. The mitigation would pay for itself after three months ($30,000) and would provide $10,000 in loss prevention for each month after. Therefore, this is a sound investment.
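The same reasoning as a few lines of arithmetic, using the figures from the scenario:

# Cost-benefit check for the proposed countermeasure
monthly_loss = 10_000                 # observed breach impact per month
ale = monthly_loss * 12               # annualized loss expectancy
countermeasure_cost = 30_000          # one-time cost of the fix

payback_months = countermeasure_cost / monthly_loss
print(f"ALE = ${ale:,}")                              # ALE = $120,000
print(f"Payback after {payback_months:.0f} months")   # 3 months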
Risk Response
Residual Risk:
It is the risk remaining after a risk treatment.
If the residual risk is unacceptable, the risk treatment process should be iterated.
Risk Monitoring
Digital Forensics
Digital forensics is the application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a strict chain of custody for the data.
Chain of custody should clearly depict how the evidence was collected, analyzed, and preserved to be
presented as admissible evidence in court.
In a cloud, it is not obvious where a VM is physically located.
The investigator’s location and a VM’s physical location can be in different time zones.
Hence, maintaining a proper chain of custody is much more challenging in the cloud.
Cloud Forensics Challenges
Control over data: Cloud users have the highest level of control in IaaS and the least level of control in SaaS
Multitenancy: In the cloud, the forensics investigator may need to preserve the privacy of other tenants
Data volatility: Data residing in a VM is volatile because once the VM is powered off, all the data is lost
Evidence acquisition: Investigators are completely dependent on CSPs for acquiring cloud evidence
Accessing Information in Service Models
Layer            On-Premises   IaaS   PaaS   SaaS
Servers              √
Virtualization       √
OS                   √           √
Middleware           √           √
Runtime              √           √
Data                 √           √      √
Application          √           √      √
Access Control       √           √      √      √
Process Flow of Digital Forensics
Challenges
• The seizure of servers containing files from many users creates privacy issues among the multitenants
homed within the servers.
• The trustworthiness of evidence is based on the CSP, with no ability to validate or guarantee on behalf of the
CCSP.
• Investigators are dependent on CSPs to acquire evidence.
• Technicians collecting data may not be qualified for forensic acquisition.
• Unknown location of the physical data can hinder investigations.
Network Forensics
Network forensics is defined as the capture, storage, and analysis of network events.
The idea is to:
• Capture: Capture every packet of network traffic and make it available in a single searchable database so
that the traffic can be examined and analyzed in detail
• Trace: Entire contents of emails, IM conversations, web-surfing activities, and file transfers can be recovered
and reconstructed to reveal the original transaction
Use cases:
• Uncover proof of an attack
• Troubleshoot performance issues
• Monitor activity for compliance with policies
• Identify the source of data leaks
• Create audit trails for business transactions
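As a sketch of the capture step described above, the snippet below logs basic metadata for a handful of TCP packets. It assumes the third-party scapy library, and packet capture typically requires administrative privileges.

from scapy.all import sniff, IP, TCP  # third-party: pip install scapy

def log_packet(pkt):
    """Print source/destination and size for each captured TCP packet."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        print(pkt.sprintf("%IP.src%:%TCP.sport% -> %IP.dst%:%TCP.dport% len=%IP.len%"))

# Capture a small sample; real forensic capture stores full packets in a
# searchable database rather than printing summaries.
sniff(filter="tcp", prn=log_packet, count=10)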
Communication with Relevant Parties
Communication with Relevant Parties
Communication between the provider, its customers, and its suppliers is critical for any environment.
Vendors
Communication with vendors will be driven almost exclusively by contract and SLA requirements. This includes communications about availability, policy changes, and system upgrades and changes.
Regulators
In early 2012, a large data breach took place on the servers of Utah’s Department of Technology Services (DTS). A
malicious hacker group from Eastern Europe succeeded in accessing the servers of DTS, compromising 780,000
Medicaid recipients and the Social Security Numbers (SSNs) of 280,096 individual clients.
The Utah DTS had proper access controls, policies, and procedures in place to secure sensitive data. However, in
this particular case, a configuration error occurred while entering the password into the system. The malicious
hacker accessed the password of the system administrator and gained the personal information of thousands of
users.
Real-World Scenario: DTS Data Breach
The biggest lesson from this incident is that even if the data is encrypted, a flaw in the authentication
system can render a system vulnerable.
The state spent about $9 million in total on remediation, including security audits, upgrades, and credit monitoring for victims, in addition to an estimated $770 and 20 hours of resolution effort for each of the 122,000 victims.
Total fraud could amount to $406 million (Javelin Strategy and Research).
A security operations center (SOC) is a team of expert professionals dedicated to preventing cybersecurity threats.
The goal of a SOC is to monitor, detect, investigate, and respond to all types of cyber threats around the clock.
A SOC is an essential part of a protection plan and data protection system that reduces the exposure of information systems to both external and internal risks.
Security Operations Center (SOC)
SOC types:
• Virtual SOC: No dedicated facility, geographically distributed team members, and often delegated to a managed service provider
• Combined SOC/NOC: One team and facility is dedicated to shared network and security monitoring
• Dedicated SOC: An in-house, dedicated facility
• Global or Command SOC: Monitors a wide area that encompasses many other regional SOCs
Real-World Scenario
The city benefitted from Google’s world-class security, reliability, and availability.
However, the Los Angeles Police Department (LAPD) reported that Google Apps could not meet the FBI's strict security and privacy requirements for connecting to the bureau's national criminal history database.
LAPD's concern was that Google could not, or did not want to, subject its overseas staff to the FBI's background checks.
So, Los Angeles pulled the plug on the LAPD portion of its Gmail deployment and demanded that
Google pay the cost of maintaining the GroupWise email servers at the LAPD for the duration of
its contract.
Legislative Concepts
International Law
International laws are the rules that govern relations between states or countries.
Components:
• Judicial decisions and the teachings of qualified publicists: Determine rules of law
Legislative Concepts
Federal Laws
• Federal laws govern the entire country, for example, laws against kidnapping and bank robbery.
• If a person robs a bank, they commit a federal crime and are therefore subject to federal prosecution and punishment.
• Often, states will handle these types of cases, as they have prescribed laws for them.
• Generally, the issues of jurisdiction and subsequent prosecution are worked out in advance between law enforcement and court jurisdictional bodies.
Legislative Concepts
State Law
• State law refers to the law of each state in the U.S.
• Examples of state law: speed limits, state tax laws, criminal code, etc.
• Federal laws are usually more comprehensive and may often supersede state laws.
Legislative Concepts
Common Law
The legal system in countries such as the United States, Canada, and the United Kingdom emphasizes judicial precedent as a determinant of law.
Branches of common law:
• Criminal law
• Civil law or tort law
• Administrative or regulatory law
Legislative Concepts
Tort Law
• A body of rights, obligations, and remedies that sets out reliefs for persons suffering harm due to the wrongful acts of others
• Tort actions are not dependent on an agreement between the parties to a lawsuit
Tort law serves four objectives:
• Compensates victims for injuries suffered by the culpable action or inaction of others
• Shifts the cost of injuries to the person or persons responsible for inflicting them
• Discourages injurious, careless, and risky behavior in the future
• Vindicates legal rights and interests that are compromised, diminished, or emasculated
Legislative Concepts
Administrative Law
Laws and legal principles that address a number of areas, including international trade, manufacturing, the environment, and immigration
Legislative Concepts
Privacy Law
• Privacy is the right of an individual to determine when, how, and to what extent one releases personal information.
• Privacy law includes language indicating that personal information must be destroyed when its retention is no longer required.
Legislative Concepts
Restatement (Second) of Conflict of Laws
• The restatement of conflict of laws is the basis for deciding which laws are most appropriate when there are conflicting laws in different states.
• The conflicting legal rules may come from U.S. federal law, the laws of U.S. states, or the laws of other countries.
Intellectual Property Laws
Intellectual Property Laws
A patent provides a monopoly to the patent holder on the right to use, make, or sell an invention for a period of time in exchange for making the invention public.
A patent troll is a person or company that obtains patents in order to aggressively and opportunistically pursue other entities that try to create something based upon them.
Intellectual Property Laws
Registered trademarks are associated with marketing. Their purpose is to create a brand identity that distinguishes the source of products or services.
The trademark protects a word, name, symbol, sound, shape, and color.
• Protects the right of the creator of an original work to control the public distribution, reproduction, display,
and adaptation of that original work
• Covers many categories of work: pictorial, graphics, musical, dramatic, literary, pantomime, motion picture,
sculptural, sound recording, and architectural
• Allows the creator to hold the copyright when the work is created
• Ensures that the copyright lasts for 70 years after the author's death or, for a work for hire created under contract, 95 years after first publication or 120 years after creation, whichever expires first
Intellectual Property Laws
Academic Fair Use: Limited copies or presentations of copyrighted works for educational purposes
Critique: The work may be reviewed or discussed for assessing its merit and for critical reviews
Satire: A mocking sendup of the work created using a significant portion of the original work
Library Preservation: Libraries and archives make limited copies of original work to preserve it
Personal Backup: Single backup copy of legally purchased work for use if the original fails
Versions for People with Physical Disabilities: Specialized copies of licensed works for people with disabilities
Intellectual Property Laws
• A trade secret is something that is proprietary to a company and important for its survival and profitability.
• Examples: Formulae used for a soft drink, such as Coke or Pepsi, a new form of mathematics, the source
code of a program, a method of making the perfect jelly bean, or ingredients for a special secret sauce.
• The organization must exercise due care and due diligence in the protection of its trade secrets.
• The most common protection methods are non-compete and non-disclosure agreements (NDAs).
• A trade secret has no expiration date unless the information is no longer secret or no longer provides
economic benefit to the company.
Case Study
Spice Mobiles Ltd. and Samsung India Electronics Pvt. Ltd. vs. Somasundaram Ramkumar
Mr. Ramkumar held a patent covering a plurality of SIM cards in a single mobile phone and a plurality of Bluetooth devices in headphone and earphone jacks.
Mr. Ramkumar and his company filed a petition in the Madras High Court claiming that companies importing dual-SIM phones into India were using his patent.
In 2009, the Madras High Court restrained mobile phone manufacturers and retailers from manufacturing and selling multiple-SIM mobile phones.
Aggrieved, Spice Mobiles Limited and Samsung India Electronics Pvt. Ltd. filed two applications in the IPAB for revocation of the patent granted to Mr. Ramkumar.
The applicants challenged the validity of the patent on the ground of lack of novelty.
Real-World Scenario
Provisions in Section 146 of the Patents Act 1970 require a patentee to furnish proof that the patented invention has been commercially worked in India. If the patent is not worked or used in the territory of India, compulsory licensing may be invoked. The act also requires the mandatory filing of a statement of working of the patent at the end of each financial year.
Patent holders who fail to file such a statement may be liable for a fine or imprisonment. This is a huge deterrent for individuals or organizations who acquire patents with no intention of manufacturing or marketing the patented invention, and whose sole purpose is to make quick money through cease-and-desist orders and patent infringement litigation.
Real-World Scenario
Question: Those who abuse patent rights for the sake of licensing revenues and engage manufacturers in infringement suits mostly to seek damages are known as what?
Answer: Patent trolls
A contract is an agreement between parties to engage in some specified activity, usually for mutual benefit. A breach occurs when a party fails to perform according to the activity specified in the contract.
US Laws
• Gramm-Leach-Bliley Act (GLBA)
• Sarbanes-Oxley Act (SOX)
• Health Insurance Portability and Accountability Act (HIPAA)
• Health Information Technology for Economic and Clinical Health (HITECH) Act
• The Digital Millennium Copyright Act (DMCA)
Sarbanes-Oxley Act (SOX)
1. Increases financial transparency of publicly-traded corporations
2. Includes provisions for securing data, and names the traits of confidentiality, integrity, and availability
Health Insurance Portability and Accountability Act (HIPAA)
1. Protects patient records and data, known as electronic Protected Health Information (ePHI)
2. Mandates steep federal penalties for noncompliance
HHS' Office for Civil Rights investigated the breaches. It found that the company did not assess the data risks or limit access to its information systems.
NERC
NERC Cyber Security Standard
The North American Electric Reliability Corporation (NERC) is a not-for-profit corporation designed to improve the
reliability and security of the bulk electric system in the United States, Canada, and Northern Mexico. NERC
develops and enforces mandatory standards that define requirements for reliable planning and operation of the
bulk electric system.
NERC Critical Infrastructure Protection (NERC-CIP) is a set of standards that specifies the minimum security requirements for bulk electric systems. NERC-CIP consists of 9 standards and 45 requirements covering perimeter protection, cyber asset control, end-to-end accountability and reliability, training, security management, and disaster recovery.
NERC Cyber Security Standard
In the United States, NERC and its regional entities routinely monitor compliance. A number of methods,
including regular and scheduled compliance audits, random spot checks, and any additional specific
investigations as warranted, are used to identify where the standard may have been violated.
Penalties for non-compliance with NERC CIP can result in fines of up to $1 million per day, per incident, until a
state of compliance is ultimately achieved.
Privacy Shield and Generally Accepted Privacy Principles (GAPP)
Privacy Shield
• The General Data Protection Regulation supersedes the Data Protection Directive; for EU-US data transfers, the Safe Harbor framework was replaced by Privacy Shield.
• This program is more stringent than Safe Harbor and includes new provisions, such as annual meetings between the intelligence agencies and law enforcement officials of the United States and the EU.
Generally Accepted Privacy Principles (GAPP)
1. Management
2. Notice
3. Choice and consent
4. Collection
5. Use, retention, and disposal
6. Access
7. Third party disclosure
8. Security for privacy
9. Quality
10. Monitoring and enforcement
Jurisdictional Difference in Data Privacy
Jurisdictional Differences in Data Privacy
Determining and understanding jurisdictional differences is very important in data privacy, because of the
widespread flow of data across borders. Jurisdiction law is complex in the absence of a single global
agreement on data protection.
The question of determining jurisdiction has been a source of debate and law reform.
• The US Child Online Privacy Protection Act (COPPA) extends to foreign service providers that direct their
activities to US children or knowingly collect information from US children.
• In Japan, the Act on the Protection of Personal Information (APPI) (2017) states that if a data controller outside of Japan has collected or collects personal information relating to Japanese citizens, then that foreign data controller is required to comply with key sections of the Japanese Act.
• The EU General Data Protection Regulation (GDPR) (2018) mandates that companies that collect data on citizens in the European Union (EU) comply with strict rules around protecting customer data.
Terminologies and e-Discovery
Laws, Regulations, and Standards
Terminologies:
• Laws: The legal rules created by government entities
• Regulations: The rules created by departments of the government or by external entities empowered by the government
• Standards: The frameworks and guidelines created by nongovernmental organizations for businesses to follow
E-Discovery: It refers to the process of identifying and obtaining electronic evidence for either prosecutorial
or litigation purposes.
Forensic Requirements and PII
Forensic Requirements
• Gap analysis: Benchmarks and identifies relevant gaps against specified frameworks or
standards
• The objective is to detect and report gaps or risks that affect the CIA of information assets.
Note: Audit findings from an unbiased external and independent person/agency should be considered valid
and trustworthy.
SOC Reports
Chain of Custody
Chain of custody refers to the chronological documentation or paper trail showing the custody, control, transfer, analysis, and disposition of physical or electronic evidence.
Being able to demonstrate a strong chain of custody is very important for making an argument in court.
Vendor Management
CSA Security, Trust, and Assurance Registry (STAR)
The Cloud Security Alliance STAR program was created as a first step in displaying transparency and assurance for cloud-based environments.
Level 1: Self-Assessment (CSA CCSM)
https://cloudsecurityalliance.org/star
Cloud Computing Policies and Risk Attitude
Cloud Computing Policies
Policy Examples
• Password policies: If the organization’s policy requires an eight-digit password, is this true for the CSP?
• Remote access: If two-factor authentication is used to access network resources, is this true for the CSP?
• Encryption: If minimum encryption strength and relevant algorithms are required, is it met by the CSP or a
potential solution?
• Third-party access: If a third party accesses cloud-based services or resources, is it logged and traced?
Cloud Computing Policies
Policy Examples
• Segregation of duties: Are controls required for the segregation of key roles and functions? Can these be
enforced and maintained on the cloud?
• Incident management: Are actions and steps undertaken for communication fulfilled when cloud-based
services are in scope?
• Data backup: Is data backup in line with backup requirements listed in relevant policies?
Risk Attitude
Risk Appetite: The degree of uncertainty an entity is willing to take on in anticipation of a reward
Risk Tolerance: The degree, amount, or volume of risk that an organization or individual will withstand
Risk Threshold: The level of impact at which a stakeholder may have a specific interest
SLA
SLA Topics
An SLA is similar to a contract signed between a customer and a CSP. The SLA forms the most crucial and fundamental component of how security and operations will be undertaken.
SLA topics include:
• Availability
• Performance
• Security and privacy of the data
• Logging and reporting
• Suspension of service
• SLA penalties
• Provider liability
• Disaster recovery
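Availability targets in an SLA translate directly into allowed downtime; a small illustration of the arithmetic:

# Allowed downtime per year for a given availability SLA (illustrative figures)
def allowed_downtime_hours(availability_pct, hours_per_year=8760):
    """Hours of downtime permitted per year at the stated uptime percentage."""
    return hours_per_year * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_hours(sla):.2f} hours/year downtime")
# 99.0% -> 87.60, 99.9% -> 8.76, 99.99% -> 0.88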
Key SLA Elements
• What are the risks and their potential effects?
• Who will do what?
• Which mitigation techniques and controls reduce risks?

• Assessment of risk environment
• Risk appetite
• Regulatory requirements
• Risk frameworks
Three main risk frameworks:
• ISO/IEC 31000:2009
• European Network and Information Security Agency (ENISA)
• National Institute of Standards and Technology (NIST)
Risk Frameworks
ISO/IEC 31000:2009
• Implementing ISO 31000:2009 does not address specific or legal requirements related to risk assessments, risk reviews, and overall risk management.
• ISO 31000:2009 sets out terms and definitions, principles, a framework, and a process for managing risk.
ISO/IEC 31000:2009
The ISO/IEC 31000:2009 standard puts forth 11 principles of risk management:
• Risk management creates and protects value.
• Risk management is an integral part of the organizational procedure.
• Risk management is part of decision-making.
• Risk management explicitly addresses uncertainty.
• Risk management is systematic, structured, and timely.
• Risk management is based on the best available information.
• Risk management is tailored.
• Risk management takes human and cultural factors into account.
• Risk management is transparent and inclusive.
• Risk management is dynamic, iterative, and responsive to change.
• Risk management facilitates continual improvement and enhancement of the organization.
European Network and Information Security Agency (ENISA)
National Institute of Standards and Technology (NIST)
The contract of a public cloud provider specified maximum monthly upload and download
parameters.
The terms for leaving at the end of any contract period included a timeframe for the
customer to migrate the data from the provider's data center.
Assuming the customer uploads x GB of data each month, there would be 12x GB of data at the end of a one-year contract. Leaving the contract is therefore costly for the customer, because migrating 12x GB of data in the final month far exceeds the monthly transfer parameters.
Question: How can the customer avoid the high cost of migrating the data to a new data
center at the end of the contract?
Answer: The contract should specify that the upload and download limits do not apply during the transition period at the end of the contract.
Key Takeaways