Protect the Cloud
Cloud Reference Architecture
Cloud Service Customer
Cloud service user – uses cloud service
Cloud service administrator – tests and monitors cloud services, administers security of services, and provides usage reports; can also address problems
Cloud service business manager – oversees billing administration, purchases cloud services, and requests audit reports as necessary
Cloud service integrator – connects existing systems to the cloud
Cloud Service Provider
Cloud service operations manager – prepares systems for the cloud, administers and monitors services, and provides audit data when requested
Cloud service deployment manager – gathers metrics on cloud services, manages deployment steps, and defines the environment
Cloud service manager – delivers, provisions, and manages the cloud services
Cloud service business manager – oversees business plans and customer relationships along with financial transactions
Customer support and care representative – provides customer service
Inter-cloud provider – responsible for peering with other cloud providers and manages federated services
Cloud service security and risk manager – manages risk and oversees security compliance
Network provider – responsible for network connectivity and management of network services
Cloud Service Partner
Cloud service developer – develops cloud services and validates those services
Cloud auditor – performs reviews and authors audit reports
Cloud service broker – obtains new customers, analyzes the marketplace, and secures contracts
Cloud Service Categories
Infrastructure as a Service (IaaS) – the cloud customer maintains substantial configuration control over processing, storage, and network resources
IaaS is the most basic customer service; most customization and control are available to the customer
Cloud provider maintains the underlying architecture
Cloud customer controls services deployed within the cloud: operating systems, storage, and applications
Limited customer control over network components
Customer can deploy arbitrary software and systems
Rapid provisioning and scalability
High availability
IaaS benefits:
Scalability
Cost of hardware ownership maintained by provider
No physical security requirements for customer
Location independence
Metered usage
“Green” data centers for customers
Platform as a Service (PaaS) – the cloud customer can deploy applications using programming services maintained by the cloud provider; the customer does not interact with the underlying cloud network (like Microsoft Azure)
Cloud provider is responsible for the hosting environment, including OS, programming libraries, and services
Customer is responsible for deploying their applications within that environment
Cloud provider is responsible for administering all host systems
PaaS benefits:
Auto-scaling as resources are needed
Multiple host environments: customer has a choice of operating systems
Flexibility to move applications between cloud providers
Ease of upgrades
Cost-effective
Ease of access
Relief from licensing obligations
Software as a Service (SaaS) – the cloud customer uses fully established applications provided by the cloud provider with minimal configuration options
Customer is provided a turnkey operation; cloud service provider handles all maintenance responsibilities
Cloud provider supplies a complete application to the customer
Cloud customer provisions user access to data per user’s requirements
Lowest support requirements for customer
Customer has limited configuration options
SaaS benefits:
Low support costs
No licensing obligations
Ease of use and administration
Standardization
Cloud Deployment Models
Public Cloud – provides services to any available user without restrictions beyond any financial considerations
Benefits:
Setup – very easy and inexpensive for the customer
Scalability
Right-sizing resources – customers pay only for what they use
Private Cloud – maintained by and restricted to the organization it serves; it may be hosted in the customer’s own environment or by a cloud provider
Benefits:
Ownership retention
Control over systems
Proprietary data and software control
Community Cloud – a collaboration between similar organizations that combine resources to create a private cloud; it is comparable to a private cloud except that it has multiple owners
Hybrid Cloud – combines attributes of both private and public cloud models to meet an organization’s requirements
Benefits:
Split systems for optimization – customer can divide operations between public and private systems
Retain critical systems internally
Scalability
Disaster recovery – the public half of the cloud can maintain redundant instances of critical systems while the private half is used for internal operations only
Universal Cloud Computing Attributes (“Cross cutting”)
Interoperability – how easily an application can be moved from one system to another
Performance, Availability, and Resiliency – describe the quality of a cloud service provider’s offering
Portability – how easily a system can be moved from one cloud provider to another
Service Level Agreements (SLAs) – a contract documenting the cloud provider’s performance requirements
Regulatory Requirements – are imposed on an organization either by regulations or standards
Security – level of protection driven by an organization’s requirements; can be implemented by defining baselines and minimum standards
Privacy – protected by an organization’s compliance with regulatory and legal obligations
Auditability – provides documentation of user activity and any required compliance
Governance – ensures that business and security obligations are met
Maintenance and Versioning – each cloud service category has different requirements that are defined in the SLA:
SaaS – cloud provider is responsible for all patching and upgrades
PaaS – cloud provider is responsible for maintaining programming services
IaaS – cloud provider is responsible for maintaining hardware only
Reversibility – assurance that when a cloud customer retrieves and removes all of its data, no remanence is left behind
Cross cutting is a longstanding concept in computer science that refers to the concerns or parts of a system that are interwoven with, and distributed among, other concerns or parts of the system. The cloud cross-cutting aspects often relate to the “-ilities” in a system, such as availability, resiliency, security, and scalability. All applications require one or more of the cross-cutting architectures, no matter what function the application provides.
Security Concepts Relevant to Cloud Computing
Cryptography
Encryption
Data in Transit
Data at Rest
Key Management
Remote Key Management Service – maintained by customer at their own location; provides full control to the cloud customer
Client-Side Key Management Service – in SaaS implementations, it is provided by the cloud provider but controlled by the customer
Access Control
Account Provisioning
Directory Services
Administrative and Privileged Access
Authorization
Data and Media Sanitization
Vendor Lock-in for Data – cloud customer is bound to a particular provider
Data Sanitization
Overwriting
Cryptographic erasing
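Cryptographic erasure can be sketched in a few lines: data is only ever stored encrypted, so destroying the key renders the stored ciphertext unrecoverable without touching the media. The cipher below is a toy SHA-256 counter-mode keystream for illustration only; a real implementation would use an authenticated cipher such as AES-GCM from a vetted library.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key via SHA-256 in counter mode.
    Illustration only -- not a production cipher."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)            # data-encryption key
record = b"customer account record"
stored = xor_cipher(key, record)         # only ciphertext ever hits the disk

assert xor_cipher(key, stored) == record # key present: data recoverable
key = None                               # cryptographic erasure: destroy the key
# 'stored' is now computationally unrecoverable; no media overwrite needed.
```

This is why cryptographic erasure matters in the cloud: the customer typically cannot physically destroy or overwrite the provider's media, but can destroy the keys.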
Network Security
Virtualization Security
Type 1 Hypervisor – bare-metal deployment; no host operating system
Type 2 Hypervisor – virtualization is software-based; host OS is required
Common Threats
Data Breaches
Insufficient Identity, Credential, and Access Management
Unsecured Interfaces and APIs – APIs are integral to cloud applications and therefore are popular attack vectors
System Vulnerabilities
Account Hijacking
Malicious Insiders
Advanced Persistent Threats
Data Loss
Insufficient Due Diligence
Abusive Use of Cloud Services – attackers may exploit the vast resources of a cloud service to attack their target
Denial of Service
Shared Technology Issues – configuration management and auditing can minimize potential problems of multiple customers on the same system
Security Considerations for the Different Cloud Categories
Security Concerns for IaaS
Multitenancy – separation of systems must be enforced to prevent interference, either inadvertent or deliberate
Co-Location
Hypervisor Security and Attacks – malware on one system can attack others within the cloud
Network Security – traditional attacks can affect cloud systems
Virtual Machine Attacks
Virtual Switch Attacks
DoS Attacks
Security Concerns for PaaS
System Isolation – process isolation and the principle of least privilege should be enforced
User Permissions
User Access
Malware, Trojans, and Backdoors
Security Concerns for SaaS
Web Application Security – high availability of services is important
Data Policies
Data Protection and Confidentiality – process isolation must be enforced
Design Principles of Secure Cloud Computing
Cloud Secure Data Lifecycle
Create – data is generated from cloud-based services
Store – data is maintained online within the cloud
Use – data is accessed as part of program execution
Share – data must be available to all authorized users
Archive – data must be preserved in long-term storage when necessary
Destroy – data should be permanently and securely removed from a system when it is no longer needed
Cloud-based Business Continuity/Disaster Recovery Planning
Elements
Confidentiality, Integrity, and Availability
Critical Success Factors
Knowing the customer’s responsibilities vs. the CSP’s responsibilities
Knowing which continuity and disaster recovery components are covered by the SLA
Important SLA Components
No single points of failure
Migration to alternate systems should be possible within agreed-upon timeframes
Automated controls should be available to verify data integrity
Regular assessment of the SLA should be performed
Cost-Benefit Analysis – performed when considering a cloud deployment
Resource Pooling and Cyclical Demands – to satisfy business requirements
Data Center Costs vs. Operational Expense Costs
Focus Change – cloud technology is different from traditional IT
Ownership and Control – loss of control over data
Cost Structure
Identify Trusted Cloud Services
ISO/IEC 27001:2013 – methods and practices for IT security
14 control domains
114 controls
NIST SP 800-53 – ensures that appropriate security controls are applied to federal IT systems
Payment Card Industry Data Security Standard (PCI DSS) – practices for securing credit card transactions
Service Organization Control (SOC) – standards for evaluation and audit of the processing of financial information by companies in the service sector
SOC 1 – audit that focuses on financial information
Type 1 – at the time of the review; a “snapshot”
Type 2 – over a 6 to 12-month period
SOC 2 – audit that expands beyond basic financial data to review practices that address security, availability, processing integrity, confidentiality, and privacy; more detailed and therefore limited in its distribution
Type 1 – review of the design of controls at a point in time
Type 2 – detailed review of the design and operating effectiveness of controls over a period
SOC 3 – audit that reviews achievement (only) of security practices; less detailed than SOC 2 and therefore readily available – seal of approval
Common Criteria – seven Evaluation Assurance Levels (EAL1–EAL7)
FIPS 140-2 – standard published by NIST that supports accreditation of cryptographic modules*
Level 1 – Basic security requirements are specified for a cryptographic module; at least one approved algorithm or security function shall be used
No specific physical security mechanisms are required
An example is a PC encryption board
Level 2 – requires the use of tamper-evident seals or pick-resistant locks on removable covers of the module
Seals are placed on a module so that it must be broken to attain physical access to the plaintext cryptographic keys and CSPs
Requires role-based authentication in which a module authenticates an operator to assume a specific role to perform a corresponding set of services
Level 3 – attempts to prevent an intruder from gaining access to CSPs held within the cryptographic module
Physical security mechanisms are intended to have a high probability of detecting and responding to attempts at physical access, use or modification of the cryptographic module
The physical security mechanisms may include circuitry that zeroizes all plaintext CSPs (Critical Security Parameters) when the removable covers are opened
Security Level 3 requires identity-based authentication, enhancing the role-based authentication mechanisms specified for Level 2
A module authenticates and verifies that the identified operator is authorized to assume a specific role and perform a corresponding set of services
Level 4 – Physical security mechanisms provide complete protection around the cryptographic module to detect and respond to all unauthorized attempts at physical access
Penetration results in the immediate zeroization of all plaintext CSPs
Protects against a security compromise due to environmental conditions outside of the module's normal operating ranges for voltage and temperature; intentional excursions beyond the normal operating ranges may be used by an attacker to thwart a cryptographic module's defenses
* A cryptographic module shall be a set of hardware, software, firmware, or some combination thereof that implements cryptographic functions or processes, including cryptographic algorithms and, optionally, key generation, and is contained within a defined cryptographic boundary.
Cloud Architecture Models
SABSA
ITIL
TOGAF
NIST Cloud Technology Roadmap
SP 800-145
SP 800-146
SP 500-293
Information and Data Governance Types
Information classification – What is the high-level description of valuable information categories (such as highly confidential, regulated)?
Information management policies – What activities are allowed for different information types?
Location and jurisdictional policies – Where can data be geographically located? What are the legal and regulatory implications or ramifications?
Authorizations – Who is allowed to access different types of information?
Custodianship – Who is responsible for managing the information at the behest of the owner?
Domain 2: Cloud Data Security
Understanding the Cloud Data Lifecycle
Create
Store
Use
Share
Archive
Destroy
Design and Implement Cloud Data Storage Architectures
Storage Types
IaaS – self-service (by customer) administration of an online environment
Volume – comparable to a Windows or Unix drive designation (e.g., C:\)
Object – files are assigned a key value for access; similar to a file share
PaaS – DevOps
Structured – data stored in a relational database; highly organized
Unstructured – multimedia files, text files, and Microsoft Office files
SaaS – (Gmail, Google Apps, Office 365, Hotmail)
Information Storage and Management – data is entered via a web interface (API) and stored in a back-end database (based on volumes and objects)
Content and File Storage – data stored within the applications
Content delivery network (CDN) – data stored as objects and distributed to geographically dispersed nodes to improve performance
Threats to Storage Types
Unauthorized usage: In the cloud, data storage can be manipulated into unauthorized usage, such as by account hijacking or uploading illegal content. The multitenancy of the cloud storage makes tracking unauthorized usage more challenging.
Unauthorized access: Unauthorized access can happen due to hacking, improper permissions in a multitenant’s environment, or an internal CSP employee.
Liability due to regulatory noncompliance: Specific controls (that is, encryption) might be required to ensure compliance with certain regulations. Not all cloud services enable all relevant data controls.
Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks on storage: Availability is a strong concern for cloud storage. Without data, no instances can launch.
Corruption, modification, and destruction of data: This can be caused by various sources: human error, hardware or software failure, events such as fire or flood, or intentional hacks. It can also affect a certain portion of the storage or the entire array.
Data leakage and breaches: Consumers should always be aware that cloud data is exposed to data breaches. It can be external or coming from a CSP employee with storage access. Data tends to be replicated and moved in the cloud, which increases the likelihood of a leak.
Theft or accidental loss of media: This threat applies to portable storage, but as cloud data centers grow and storage devices become smaller, there are increasingly more vectors for them to experience theft or similar threats.
Malware attack or introduction: The goal of almost every malware is eventually reaching the data storage.
Improper treatment or sanitization after end of use: End of use is challenging in cloud computing because usually we cannot enforce physical destruction of media. But the dynamic nature of data, where data is kept in different storages with multiple tenants, mitigates the risk that digital remnants can be located.
Technologies Available to Address Threats
DLP Components
DLP Data States
DLP Cloud Implementations and Practices
Design and Apply Data Security Strategies
Encryption – driven by compliance requirements
Encryption of Data in Different States
DIM: IPsec and TLS (VPNs)
DAR: storage vs. archival
DIU: Information and Data Rights Management (IRM, DRM)
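For data in motion, TLS is the usual control. A minimal sketch of a hardened client-side TLS configuration using Python's standard `ssl` module (the version floor of TLS 1.2 is a common policy choice, not a requirement of the module itself):

```python
import ssl

# Build a client-side TLS context with secure defaults: certificate
# verification on, hostname checking on, and TLS 1.2 as the floor.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

The context would then be passed to `ctx.wrap_socket(...)` (or to an HTTP client) so that all data in motion to the cloud service travels over an authenticated, encrypted channel.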
Challenges with Encryption
The integrity of encryption is dependent on key management and how the keys are secured (CSP control vs. customer control)
Encryption can be challenging to implement effectively when a CSP is required to process the encrypted data. This is true even for simple tasks such as indexing and the gathering of metadata.
Data in the cloud is highly portable. It is replicated, copied, and backed up extensively, making encryption and key management challenging.
Multitenant cloud environments and the shared use of physical hardware present challenges for the safeguarding of keys in volatile memory such as random access memory (RAM) caches.
Secure hardware (HSM) for encrypting keys may not exist in cloud environments, with software-based key storage often being more vulnerable.
Storage-level encryption is typically less complex and can be more easily exploited and compromised, given sufficient time and resources. The higher you go up toward the application level, the more challenging the complexity to deploy and implement encryption becomes. However, encryption implemented at the application level is typically more effective at protecting the confidentiality of the relevant assets or resources.
Encryption can negatively affect performance, especially high-performance data processing mechanisms such as data warehouses and data centers.
The nature of cloud environments typically requires you to manage more keys than traditional environments (access keys, API keys, encryption keys, and shared keys, among others).
Some cloud encryption implementations require all users and service traffic to go through an encryption engine. This can result in availability and performance issues both to end users and to providers.
Throughout the data lifecycle, data can change locations, format, encryption, and encryption keys. Using the data security lifecycle can help document and map all those different aspects.
Encryption affects data availability. Encryption complicates data availability controls such as backups, disaster recovery planning (DRP), and colocations because expanding encryption into these areas increases the likelihood that keys may become compromised. In addition, if encryption is applied incorrectly within any of these areas, the data may become inaccessible when needed.
Encryption does not solve data integrity threats. Data can be encrypted and yet be subject to tampering or file replacement attacks. In this case, supplementary cryptographic controls such as digital signatures need to be applied, along with nonrepudiation for transaction-based activities.
Encryption Implementations
Basic Storage-Level Encryption – Where storage-level encryption is utilized, the encryption engine is located on the storage management level, with the keys usually held by the CSP. The engine encrypts data written to the storage and decrypts it when exiting the storage (that is, for use). This type of encryption is relevant to both object and volume storage, but it only protects from hardware theft or loss. It does not protect from CSP administrator access or any unauthorized access coming from the layers above the storage.
Volume Storage Encryption – Volume storage encryption requires that the encrypted data reside on volume storage. This is typically done through an encrypted container, which is mapped as a folder or volume. Instance-based encryption allows access to data only through the volume OS and therefore provides protection against the following:
Physical loss or theft
External administrator(s) accessing the storage
Snapshots and storage-level backups being taken and removed from the system
Volume storage encryption does not provide protection against access made through the instance or an attack that is manipulating or operating within the application running on the instance. Two methods can be used to implement volume storage encryption:
Instance-based encryption: When instance-based encryption is used, the encryption engine is located on the instance itself. Keys can be guarded locally but should be managed external to the instance.
Proxy-based encryption: When proxy-based encryption is used, the encryption engine is running on a proxy instance or appliance. The proxy instance is a secure machine that handles all cryptographic actions, including key management and storage. The proxy maps the data on the volume storage while providing access to the instances. Keys can be stored on the proxy or via the external key storage (recommended), with the proxy providing the key exchanges and required safeguarding of keys in memory.
Object Storage Encryption – The majority of object storage services offer server-side storage-level encryption, as described previously. This kind of encryption offers limited effectiveness, with the recommendation for external mechanisms to encrypt the data prior to its arrival within the cloud environments. Potential external mechanisms include the following:
File-level encryption: Examples include IRM and DRM solutions, both of which can be effective when used in conjunction with file hosting and sharing services that typically rely on object storage. The encryption engine is commonly implemented at the client side and preserves the format of the original file.
Application-level encryption: The encryption engine resides in the application that is utilizing the object storage. It can be integrated into the application component or by a proxy that is responsible for encrypting the data before going to the cloud. The proxy can be implemented on the customer gateway or as a service residing at the external provider.
Key Management – Key management is one of the most challenging components of any encryption implementation. Even though new standards such as Key Management Interoperability Protocol (KMIP) are emerging, safeguarding keys and appropriately managing those keys are still the most complicated tasks you will need to engage in when planning cloud data security. Following are some common challenges with key management:
Access to the keys: Leading practices coupled with regulatory requirements may set specific criteria for key access, along with restricting or not permitting access to keys by CSP employees or personnel.
Key storage: Secure storage for the keys is essential to safeguarding the data. In traditional in-house environments, keys were able to be stored in secure dedicated hardware. This may not always be possible in cloud environments.
Backup and replication: The nature of the cloud results in data backups and replication across a number of different formats. This can affect the ability for long- and short-term key management to be maintained and managed effectively.
Key Management Considerations
Random number generation should be conducted as a trusted process.
Throughout the lifecycle, cryptographic keys should never be transmitted in the clear; they should always remain in a trusted environment.
When considering key escrow or key management “as a service,” carefully plan to take into account all relevant laws, regulations, and jurisdictional requirements.
Lack of access to the encryption keys will result in lack of access to the data. This should be considered when discussing confidentiality threats versus availability threats.
Where possible, key management functions should be conducted separately from the CSP to enforce separation of duties and force collusion to occur if unauthorized data access is attempted.
Key Storage in the Cloud – typically implemented using one or more of the following approaches:
Internally managed: In this method, the keys are stored on the virtual machine or application component that is also acting as the encryption engine. This type of key management is typically used in storage-level encryption, internal database encryption, or backup application encryption. This approach can be helpful for mitigating against the risks associated with lost media.
Externally managed: In this method, keys are maintained separate from the encryption engine and data. They can be on the same cloud platform, internally within the organization, or on a different cloud. The actual storage can be a separate instance (hardened especially for this specific task) or on a hardware security module (HSM). When implementing external key storage, consider how the key management system is integrated with the encryption engine and how the entire lifecycle of key creation through retirement is managed.
Managed by a third party: This is when a trusted third party provides key escrow services. Key management providers use specifically developed secure infrastructure and integration services for key management. You must evaluate any third-party key storage services provider that may be contracted by the organization to ensure that the risks of allowing a third party to hold encryption keys is well understood and documented.
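The externally managed approach is often realized as envelope encryption: a per-object data-encryption key (DEK) is used by the encryption engine, while a key-encryption key (KEK) held outside the engine (e.g., in an HSM or key-management service) wraps the DEK. The sketch below uses XOR as a stand-in for a real key-wrap algorithm such as AES Key Wrap (RFC 3394) purely to show the data flow:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Stand-in for a real key-wrap primitive; illustration only.
    return bytes(x ^ y for x, y in zip(a, b))

# Key-encryption key (KEK): held externally to the encryption engine,
# e.g. in an HSM or a key-management service.
kek = secrets.token_bytes(32)

# Data-encryption key (DEK): generated per object, used by the engine.
dek = secrets.token_bytes(32)

# "Wrap" the DEK under the KEK; the wrapped DEK can safely be stored
# alongside the ciphertext, since only a holder of the KEK can unwrap it.
wrapped_dek = xor_bytes(dek, kek)

assert xor_bytes(wrapped_dek, kek) == dek   # unwrap recovers the DEK
```

This separation is what enables the separation-of-duties goal above: the CSP can store wrapped keys and ciphertext without being able to read the data.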
Masking and Obfuscation – various techniques are used to disguise data
Anonymization – removes direct and indirect identifiers to protect data
Tokenization – maps a random token value back to the original sensitive data, which is kept in a second, restricted database (“two DBs”)
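The two-database idea behind tokenization can be sketched as follows. The `TokenVault` class here is hypothetical; production tokenization products add access controls, auditing, and format-preserving tokens:

```python
import secrets

class TokenVault:
    """Toy tokenization vault: the application DB stores only tokens,
    while the sensitive values live in a second, restricted store."""
    def __init__(self):
        self._vault = {}                  # token -> sensitive value

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)      # random; carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
pan = "4111111111111111"
token = vault.tokenize(pan)

assert token != pan                       # app systems see only the token
assert vault.detokenize(token) == pan     # original retrievable via the vault
```

Because the token is random rather than derived from the value, compromise of the application database alone reveals nothing about the card numbers.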
Application of Technologies
Emerging Technologies
Bit splitting – involves splitting up and storing encrypted information across different cloud storage services. Depending on how the bit splitting system is implemented, some of or all the data set is required to be available to unencrypt and read the data. The benefits of bit splitting follow:
Data security is enhanced due to the use of stronger confidentiality mechanisms.
Bit splitting between different geographies and jurisdictions may make it harder to gain access to the complete data set via a subpoena or other legal processes.
It can be scalable, can be incorporated into secured cloud storage API technologies, and can reduce the risk of vendor lock-in.
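An all-or-nothing variant of bit splitting can be sketched with XOR secret sharing: each share alone is indistinguishable from random noise, and all shares (one per storage provider) are needed to reconstruct the data. Real products layer encryption and erasure coding on top; this is only the core idea:

```python
import secrets

def split(data: bytes, n: int) -> list[bytes]:
    """XOR-based n-way split: all n shares are required to reconstruct."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

secret = b"record stored across three providers"
shares = split(secret, 3)        # one share per cloud storage service
assert combine(shares) == secret
```

Storing the shares in different jurisdictions is what makes a subpoena against any single provider yield only noise.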
Homomorphic encryption – a form of encryption that allows computation on ciphertexts, generating an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext. The purpose of homomorphic encryption is to allow computation on encrypted data.
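The property can be demonstrated with textbook RSA (tiny primes, no padding; insecure and for illustration only), which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the two plaintexts.

```python
# Textbook RSA with toy parameters -- never use in practice.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

a, b = 7, 6
# The server multiplies ciphertexts without ever seeing a or b...
c = (enc(a) * enc(b)) % n
# ...and the customer decrypts the result of the computation.
assert dec(c) == a * b
```

Fully homomorphic schemes (supporting both addition and multiplication) exist but remain computationally expensive; the appeal for cloud computing is that the provider can process data it cannot read.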
Data Discovery and Classification Techniques
Data Discovery – emphasizes interactive, visual analytics rather than static reporting; goal is to enable users to find meaningful information in data
Data discovery is being driven by these trends:
Big data: On big data projects, data discovery is more important and more challenging. Not only is the volume of data that must be efficiently processed for discovery larger, but the diversity of sources and formats presents challenges that make many traditional methods of data discovery fail. Cases in which big data initiatives also involve rapid profiling of high-velocity big data make data profiling harder and less feasible using existing toolsets.
Real-time analytics: The ongoing shift toward (nearly) real-time analytics has created a new class of use cases for data discovery. These use cases are valuable but require data discovery tools that are faster, more automated, and more adaptive.
Agile analytics and agile business intelligence: Data scientists and business intelligence teams are adopting more agile, iterative methods of turning data into business value. They perform data discovery processes more often and in more diverse ways, for example, when profiling new data sets for integration, seeking answers to new questions emerging this week based on last week’s new analysis, or finding alerts about emerging trends that may warrant new analysis work streams.
Data discovery analysis tools:
Metadata
Labels
Content analysis
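Content analysis in a discovery tool can be sketched as pattern matching over data blobs. The patterns below are simplistic and hypothetical; real DLP and discovery products ship curated, validated detectors (e.g., with Luhn checksums for card numbers):

```python
import re

# Hypothetical detectors, for illustration only.
PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set[str]:
    """Content analysis: label a blob with the sensitive-data types found."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

doc = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
assert classify(doc) == {"email", "card_number"}
```

Labels produced this way feed the classification step that follows, which maps each category to the controls required by policy or regulation.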
Classification – a tool for categorization of data and defining the appropriate controls; categories include:
Data type (format, structure)
Jurisdiction (of origin, domiciled) and other legal constraints
Context
Ownership
Contractual or business constraints
Trust levels and source of origin
Value, sensitivity, and criticality (to the organization or to a third party)
Obligation for retention and preservation
Monitoring
Enforcement
DLP Topologies:
Data in motion (DIM)
Data at rest (DAR)
Data in use (DIU)
Relevant Jurisdictional Data Protections for Personally Identifiable Information
Privacy and data protection (P&DP)
Data Privacy Acts
HIPAA
EU: Directive 95/46/EC – applies to personal data in both electronic and paper-based form
Asian Pacific Economic Cooperation (APEC) Privacy Framework
Privacy Roles and Responsibilities
Physical Environment – sole responsibility of the cloud provider for all models
Infrastructure – sole responsibility of the cloud provider for PaaS and SaaS; shared responsibility for IaaS between the cloud provider and customer
Platform – sole responsibility of the cloud provider for SaaS; shared responsibility for PaaS; sole responsibility of the cloud customer for IaaS
Application – shared responsibility for SaaS; sole responsibility of the cloud customer for both IaaS and PaaS
Data – sole responsibility of the cloud customer for all models
Governance – sole responsibility of the cloud customer for all models
Common privacy terms:
Data subject: A subject who can be identified, directly or indirectly, by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural, or social identity (such as telephone number or IP address).
Personal data: Any information relating to an identified or identifiable natural person. There are many types of personal data, such as sensitive and health data and biometric data. According to the type of personal data, the P&DP laws usually set out specific privacy and data-protection obligations (such as security measures and data subject’s consent for the processing).
Processing: Operations that are performed upon personal data, whether by automatic means, such as collection, recording, organization, storage, adaptation, alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure, or destruction. Processing is undertaken for specific purposes and scopes; as a result, the P&DP laws usually set out specific privacy and data-protection obligations, such as security measures and data subject’s consent for the processing.
Controller: The natural or legal person, public authority, agency, or any other body that alone or jointly with others determines the purposes and means of the processing of personal data. Where the purposes and means of processing are determined by national or community laws or regulations, the controller or the specific criteria for his nomination may be designated by national or community law.
Processor: A natural or legal person, public authority, agency, or any other body that processes personal data on behalf of the controller.
Implementation of Data Discovery – provides an operative foundation for effective application and governance for any of the P&DP fulfillments
From the customer’s perspective: The customer, in the role of data controller, has full responsibility for compliance with the obligations of the P&DP laws. Therefore, the implementation of data discovery solutions with data classification techniques provides a sound basis for operatively specifying to the service provider the requirements to be fulfilled and for performing effective periodic audits according to the applicable P&DP laws. It also demonstrates, to the competent privacy authorities, the customer’s due accountability according to the applicable P&DP laws.
From the service provider’s perspective: The service providers, in the role of data processor, must implement and be able to demonstrate they have implemented, in a clear and objective way, the rules and the security measures to be applied in the processing of personal data on behalf of the controller. Thus, data discovery solutions with data classification techniques are an effective enabler of their ability to comply with the controller’s P&DP instructions.
Classification of Discovered Sensitive Data – plays an essential role in the operative control of the elements that feed the P&DP fulfillments
Mapping and Definition of Controls
Application of Defined Controls
Cloud Security Alliance Cloud Controls Matrix
Responsibilities Depending on the Type of Cloud Service
SaaS: The customer determines and collects the data to be processed with a cloud service, whereas the service provider essentially makes the decisions about how to carry out the processing and implement specific security controls. It is not always possible to negotiate the terms of the service between the customer and the service provider.
PaaS: The customer has a greater ability to determine the instruments of processing, although the terms of the service are usually not negotiable.
IaaS: The customer has a high level of control for data, processing functionalities, tools, and related operational management, thus achieving a high level of responsibility in determining purposes and means of processing.
Data Rights Management (aka Information Rights Management)
Objectives:
IRM adds an extra layer of access controls on top of the data object or document. The ACL determines who can open the document and what they can do with it and provides granularity that flows down to printing, copying, saving, and similar options.
Because IRM contains ACLs and is embedded into the original file, IRM is agnostic to the location of the data, unlike other preventive controls that depend on file location. IRM protection travels with the file and provides continuous protection.
IRM is useful for protecting sensitive organization content such as financial documents. However, it is not limited to documents; IRM can be implemented to protect emails, web pages, database columns, and other data objects.
IRM is useful for setting up a baseline for the default Information Protection Policy; that is, all documents created by a certain user, at a certain location, receive a specific policy.
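The IRM behavior described above — an ACL that travels with the document and gates granular actions — can be sketched as follows. This is a toy model; the class, users, and action names are invented for illustration and do not reflect any real IRM product.

```python
# Toy sketch of IRM-style protection: the ACL is embedded with the document,
# so enforcement does not depend on where the file is stored.

class ProtectedDocument:
    def __init__(self, content, acl):
        # acl maps user -> set of permitted actions (open, print, copy, save, ...)
        self.content = content
        self.acl = acl

    def allowed(self, user: str, action: str) -> bool:
        # granular check: a user may be able to open but not print
        return action in self.acl.get(user, set())

doc = ProtectedDocument(
    "Q3 financials",
    acl={"alice": {"open", "print"}, "bob": {"open"}},
)
```

Because the ACL is part of the object itself, copying `doc` to another location carries the same restrictions with it, which is the "continuous protection" property noted above.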
Tools
Auditing
Expiration
Policy Control
Protection
Support for Applications
Data Retention, Deletion, and Archiving Policies
Data Retention – an organization’s established protocol for keeping information for operational or regulatory compliance needs; its goals are to keep important information for future use or reference, to organize information so it can be accessed later, and to dispose of information that is no longer needed
Data Deletion – safe disposal of data once it is no longer needed; failure to do so may result in data breaches or compliance failures
Overwriting
Encryption (crypto-shredding)
Degaussing
Physical destruction
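Of the deletion methods above, crypto-shredding is the one most relevant in the cloud, where physical destruction of shared media is rarely possible. A minimal sketch of the idea: encrypt the data, and destroy every copy of the key. The XOR cipher here is for illustration only; a real system would use a vetted cipher (e.g., AES-GCM) and a managed key store.

```python
# Crypto-shredding sketch: once every copy of the key is destroyed,
# the ciphertext left on the (cloud) medium is unrecoverable.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # one-time-pad style XOR, illustrative only -- NOT a production cipher
    return bytes(d ^ k for d, k in zip(data, key))

data = b"customer record"
key = secrets.token_bytes(len(data))

stored = xor_cipher(data, key)           # what persists on the storage medium
assert xor_cipher(stored, key) == data   # recoverable while the key exists

key = None  # "shredding": delete the key; the stored ciphertext is now unreadable
```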
Data Archiving – Elements:
Data-encryption procedures: Long-term data archiving with an encryption can present a challenge for the organization with regard to key management. The encryption policy should consider which media is used, what the restoral options are, and what threats should be mitigated by the encryption. Bad key management can lead to the destruction of the entire archive; therefore, it requires attention.
Data-monitoring procedures: Data stored in the cloud tends to be replicated and moved. To maintain data governance, it is required that all data access and movements be tracked and logged to make sure that all security controls are being applied properly throughout the data lifecycle.
Ability to perform e-discovery and granular retrieval: Archive data may be subject to retrieval according to certain parameters such as dates, subjects, and authors. The archiving platform should provide the ability to perform e-discovery on the data to determine which data should be retrieved.
Backup and DR options: All requirements for data backup and restore should be specified and clearly documented. It is important to ensure that the business continuity and disaster recovery (BCDR) plans are updated and aligned with whatever procedures are implemented.
Data format and media type: The format of the data is an important consideration because it may be kept for an extended period. Proprietary formats can change, thereby leaving data in a useless state, so choosing the right format is important. The same consideration must be made for media storage types.
Auditability, Traceability, and Accountability of Data Events
Definition of Event Sources – event sources are monitored to provide the raw data on events in the system being monitored; event attributes specify the kind of data or information associated with an event that you want to capture, so analysis can uncover patterns of activity indicating threats or vulnerabilities in the system that must be addressed
IaaS Event Sources – the cloud customer has the most access into the system
Cloud or network provider perimeter network logs
Logs from DNS servers
Virtual machine manager (VMM) logs
Host OS and hypervisor logs
API access logs
Management portal logs
Packet captures
Billing records
PaaS Event Sources – the customer has access into applications on the system
Input validation failures, such as protocol violations, unacceptable encodings, and invalid parameter names and values
Output validation failures, such as database record set mismatch and invalid data encoding
Authentication successes and failures
Authorization (access control) failures
Session management failures, such as cookie session identification value modification
Application errors and system events, such as syntax and runtime errors, connectivity problems, performance issues, third-party service error messages, file system errors, file upload virus detection, and configuration changes
Application and related systems startups and shutdowns, and logging initialization (starting, stopping, or pausing)
Use of higher-risk functionality, such as network connections, addition or deletion of users, changes to privileges, assigning users to tokens, adding or deleting tokens, use of systems administrative privileges, access by application administrators, all actions by users with administrative privileges, access to payment cardholder data, use of data encrypting keys, key changes, creation and deletion of system-level objects, data import and export including screen-based reports, and submission of user-generated content, especially file uploads
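Several of the PaaS event categories above (authentication results, validation failures) reduce in practice to emitting structured log records that also satisfy the identity attribution requirements (when, where, who, what) discussed later. A minimal sketch, with an invented field schema:

```python
# Structured application event logging sketch. Field names are illustrative,
# not a standard schema.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("app.security")

def log_event(category: str, outcome: str, user: str, detail: str = "") -> str:
    event = {
        "when": datetime.now(timezone.utc).isoformat(),  # attribution: when
        "who": user,                                     # attribution: who
        "what": category,                                # attribution: what
        "outcome": outcome,
        "detail": detail,
    }
    line = json.dumps(event)       # one JSON object per line, SIEM-friendly
    logger.info(line)
    return line

record = log_event("authentication", "failure", "alice", "bad password")
```

Emitting one JSON object per line keeps the events machine-parseable for the SIEM aggregation and correlation steps covered below.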
SaaS Event Sources – the cloud customer has minimal access into system logs
Webserver logs
Application server logs
Database logs
Guest OS logs
Host access logs
Virtualization platform logs and SaaS portal logs
Network captures
Billing records
Identity Attribution Requirements
When
Where
Who
What
Data Event Logging
Storage and Analysis of Data Events
SIEM systems
Aggregation and Correlation
Alerting
Reporting and Compliance
Dashboards
Retention and Compliance
Chain of Custody and Non-Repudiation
Domain 3: Cloud Platform and Infrastructure Security
Cloud Infrastructure Components
Physical Environment – redundancy is the primary strategy
There is a high volume of expensive hardware; up to hundreds of thousands of servers in a single facility
Power densities of up to 10kW (kilowatts) per square meter can be found
There is an enormous and immediate impact of downtime on all dependent businesses
Data center owners can provide multiple levels of service; the basic level is often summarized as “power, pipe, and ping”
“Power” is electrical power and “pipe” is cooling; that is, air conditioning. “Power” and “pipe” limit the density with which servers can be stacked in the data center.
Power density is expressed in kW per rack, where a data center can house up to 25 racks per 100 square meters. Power densities of 100W per rack were once the norm, but these days 10kW or more per rack is seen and often required to ensure adequate supply can satisfy operational and functional requirements. These densities require advanced cooling engineering.
Network connectivity is provided to the data center networks to access storage, and external connectivity is provided to access wide area network (WAN) resources.
Data center providers (colocation) can provide floor space, rack space, and cages (lockable floor space) on any level of aggregation. The smallest unit can range from a one-unit slot in a rack to a full room.
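The density figures quoted above can be sanity-checked with a quick calculation, using the stated maximums of 25 racks per 100 square meters and 10 kW per rack (this covers IT load only; cooling overhead pushes total facility power higher):

```python
# Worked check of the rack-density arithmetic from the notes above.
kw_per_rack = 10            # modern high-density rack
racks = 25                  # racks per 100 square meters
floor_sqm = 100

kw_per_sqm = kw_per_rack * racks / floor_sqm   # IT load only
# 10 kW x 25 racks / 100 m^2 = 2.5 kW per square meter from the racks alone
```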
Networking
Networking Hardware
Software Defined Networking – provides a clearly defined and separate network control plane to manage network traffic that is separated from the forwarding plane; this approach allows for network control to become directly programmable and distinct from forwarding, allowing for dynamic adjustment of traffic flows to address changing patterns of consumption
SDN enables execution of the control plane software on general-purpose hardware, allowing for the decoupling from specific network hardware configurations and allowing for the use of commodity servers.
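The control/forwarding split can be illustrated with a toy flow table: a control-plane function installs match-to-action rules, and the forwarding function only consults the table and applies no policy of its own. The rule format and function names are invented for illustration.

```python
# Toy SDN sketch: programmable control plane, dumb forwarding plane.

flow_table = {}  # (src, dst) -> action; populated only by the controller

def controller_install_rule(src: str, dst: str, action: str) -> None:
    # control plane: runs as software on general-purpose hardware,
    # decides policy and programs the forwarding table
    flow_table[(src, dst)] = action

def forward(src: str, dst: str) -> str:
    # forwarding plane: no policy logic, just a table lookup;
    # unmatched traffic is dropped by default
    return flow_table.get((src, dst), "drop")

controller_install_rule("10.0.0.1", "10.0.0.2", "forward:port2")
```

Dynamic adjustment to changing traffic patterns then amounts to the controller rewriting table entries at runtime, with no change to the forwarding code.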
First-level terms:
Cloud service consumer: Person or organization that maintains a business relationship with and uses service from the cloud service providers (CSPs)
CSP: Person, organization, or entity responsible for making a service available to service consumers
Cloud carrier: The intermediary that provides connectivity and transport of cloud services between the CSPs and the cloud service consumers
Computing
Reservations
Limits
Shares
Virtualization
Type 1 Hypervisors – bare metal
Type 2 Hypervisors – Host OS
Storage
Volume Storage
Object Storage – The CSP can provide a file system-like scheme to its customers. This is traditionally called object storage, where objects (files) are stored with additional metadata (content type, redundancy required, creation date, and so on). These objects are accessible through APIs and potentially through a web user interface. Instead of organizing files in a directory hierarchy, object storage systems store files in a flat organization of containers (called buckets in Amazon S3) and use unique IDs (called keys in S3) to retrieve them. Commercial examples include Amazon S3 and Rackspace cloud files. Object storage is typically the way to store OS images, which the hypervisor boots into running instances. Technically, object storage can implement redundancy as a way to improve resilience by dispersing data via fragmenting and duplicating it across multiple object storage servers. This can increase resilience and performance and may reduce data loss risks. The features you get in an object storage system are typically minimal. You can store, retrieve, copy, and delete files, as well as control which users can undertake these actions.
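The flat bucket/key organization and the minimal store/retrieve/copy/delete feature set described above can be modeled in a few lines. This in-memory class and its method names are invented for illustration; they are not the API of S3 or any other real object store.

```python
# Minimal in-memory model of an object store: a flat namespace of buckets
# holding key -> (object data + metadata), with no directory hierarchy.

class ObjectStore:
    def __init__(self):
        self.buckets = {}

    def put(self, bucket: str, key: str, data: bytes, **metadata) -> None:
        # objects are stored with additional metadata (content type, etc.)
        self.buckets.setdefault(bucket, {})[key] = {"data": data, "meta": metadata}

    def get(self, bucket: str, key: str) -> bytes:
        return self.buckets[bucket][key]["data"]

    def delete(self, bucket: str, key: str) -> None:
        del self.buckets[bucket][key]

store = ObjectStore()
store.put("images", "ubuntu-22.04.img", b"...",
          content_type="application/octet-stream")
```

Note that retrieval is by unique key within a bucket, not by path traversal, which is exactly the flat-organization property the paragraph above describes.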
Management Plane – The management plane allows the administrator to remotely manage the hosts, as opposed to having to visit each server physically to turn it on or install software on it. The key functionality of the management plane is to create, start, and stop VM instances and provision them with the proper virtual resources such as CPU, memory, permanent storage, and network connectivity. When the hypervisor supports it, the management plane also controls live migration of VM instances. The management plane, thus, can manage all these resources across an entire farm of equipment.
Risks Associated with Cloud Infrastructure
Risk Assessment and Analysis
Policy and Organization Risks
Provider lock-in: This refers to the situation in which the consumer has made significant vendor-specific investments. These can include adaptation to data formats, procedures, and feature sets. These investments can lead to high costs of switching between providers.
Loss of governance: This refers to the consumer not being able to implement all required controls. This can lead to the consumer not realizing her required level of security and potential compliance risks.
Compliance risks: Consumers often have significant compliance obligations, such as when handling payment card information, health data, or other PII. A specific cloud vendor and solution may not be able to fulfill all those obligations, for example, when the location of stored data is insufficiently under control.
Provider exit: In this situation, the provider is no longer willing or capable of providing the required service. This could be triggered by bankruptcy or a need to restructure the business.
General Risks
The consolidation of IT infrastructure leads to consolidation risks, where a single point of failure can have a bigger impact.
A larger-scale platform requires the CSP to bring to bear more technical skills to manage and maintain the infrastructure.
Control over technical risks shifts toward the provider.
Virtualization Risks
Guest breakout: This occurs when there is a breakout of a guest OS so that it can access the hypervisor or other guests. This is presumably facilitated by a hypervisor flaw.
Snapshot and image security: The portability of images and snapshots makes people forget that images and snapshots can contain sensitive information and need protecting.
Sprawl: This occurs when you lose control of the amount of content on your image store.
Cloud-specific Risks
Management plane breach: Arguably, the most important risk is a management plane (management interface) breach. Malicious users, whether internal or external, can affect the entire infrastructure that the management interface controls.
Resource exhaustion: Because cloud resources are shared by definition, resource exhaustion represents a risk to customers. This can play out as being denied access to resources already provisioned or as the inability to increase resource consumption. Examples include sudden lack of CPU or network bandwidth, which can be the result of overprovisioning to tenants by the CSP.
Isolation control failure: Resource sharing across tenants typically requires the CSP to realize isolation controls. Isolation failure refers to the failure or nonexistence of these controls. Examples include one tenant’s VM instance accessing or affecting instances of another tenant, failure to limit one user’s access to the data of another user (in a software as a service [SaaS] solution), and entire IP address blocks being blacklisted as the result of one tenant’s activity.
Insecure or incomplete data deletion: Data erasure in most OSs is implemented by just removing directory entries rather than by reformatting the storage used. This places sensitive data at risk when that storage is reused due to the potential for recovery and exposure of that data.
Control conflict risk: In a shared environment, controls that lead to more security for one stakeholder (blocking traffic) may make it less secure for another (loss of visibility).
Software-related risks: Every CSP runs software, not just the SaaS providers. All software has potential vulnerabilities. From the customer’s perspective, control is transferred to the CSP, which can mean an enhanced security and risk awareness, but the ultimate accountability for compliance still falls to the customer.
Legal Risks
Data protection: Cloud customers may have legal requirements about the way that they protect data; in particular, PII. The controls and actions of the CSP may not be sufficient for the customer.
Jurisdiction: CSPs may have data storage locations in multiple jurisdictions, which can affect other risks and their controls.
Law enforcement: As a result of law enforcement or civil legal activity, it may be required to hand over data to authorities. The essential cloud characteristic of shared resources may make this process hard to do and may result in exposure risks to other tenants. For example, seizure and examination of a physical disk may expose the data of multiple customers.
Licensing: Finally, when customers want to move existing software into a cloud environment, any licensing agreements on that software might make this legally impossible or prohibitively expensive. An example could be licensing fees that are tied to the deployment of software based on a per-CPU licensing model.
Non-cloud Risks
Guest breakout
Identity compromise, either technical or social (for example, through employees of the provider)
API compromise, such as by leaking API credentials
Attacks on the provider’s infrastructure and facilities (for example, from a third-party administrator that may be hosting with the provider)
Attacks on the connecting infrastructure (cloud carrier)
Countermeasure Strategies
Continuous Uptime
Automation of Controls
Access Controls
Building access
Computer floor access
Cage or rack access
Access to physical servers (hosts)
Hypervisor access (API or management plane)
Guest OS access (VMs)
Developer access
Customer access
Database access rights
Vendor access
Remote access
Application and software access to data (SaaS)
Design and Plan Security Controls
Physical and Environmental Protection
System and Communication Protection
Data at Rest
Data in Transit
Data in Use
Security practices:
Automation of configuration
Responsibilities for protecting the cloud system
Monitoring
Virtualization Systems Protection
Management of Identification, Authentication, and Authorization
Federation
Identification
Authentication
Authorization
Auditing
Disaster Recovery and Business Continuity Management Planning
Understanding the Cloud Environment
On-Premises, Cloud as BCDR
Cloud Service Consumer, Primary Provider BCDR
Cloud Service Consumer, Alternative Provider BCDR
BCDR Planning Factors
The important assets: data and processing
The current locations of these assets
The networks between the assets and the sites of their processing
Actual and potential location of workforce and business partners in relation to the disaster event
Understanding Business Requirements
Recovery Point Objective (RPO)
Recovery Time Objective (RTO)
Recovery Service Level (RSL)
Understanding Risks
Threat Agents
Change in Location
Maintaining Redundancy
Failover Mechanism
Bringing Services Online
Functionality with External Services
Disaster Recovery/Business Continuity Strategy
Define Scope
Gather Requirements
Analyze
Assess Risk
Load Capacity at the Alternate Site
Migration of Services
Legal and Contractual Issues
Design
Implement the Plan
Test the Plan
Report and Revise
Domain 4: Cloud Application Security
Training and Awareness in Application Security
Cloud Access – APIs
Benefits:
Programmatic control and access
Automation
Integration with third-party tools
Representational State Transfer (REST): A software architecture style consisting of guidelines and best practices for creating scalable web services
Representational State Transfer
Uses simple hypertext transfer protocol (HTTP)
Supports many different data formats like JavaScript Object Notation (JSON), eXtensible Markup Language (XML), and YAML (YAML Ain’t Markup Language)
Good performance and scalability; supports caching
Widely used
Simple object access protocol (SOAP): A protocol specification for exchanging structured information in the implementation of web services in computer networks
Simple object access protocol
Uses SOAP envelope and then HTTP (or FTP or SMTP) to transfer the data
Only supports XML format
Slow performance, scalability can be complex, caching is not possible
Used where REST is not possible, provides WS-* features
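The contrast between the two styles can be sketched by expressing the same "get user" operation both ways. The endpoint URL, element names, and field names below are invented for illustration:

```python
# REST vs. SOAP: same operation, two wire formats.
import json

# REST: a resource URL plus a plain HTTP verb; body/response in JSON
rest_request = {
    "method": "GET",
    "url": "https://api.example.com/users/42",
    "headers": {"Accept": "application/json"},
}

# SOAP: the operation is wrapped in an XML envelope, which is then carried
# over HTTP (or SMTP/FTP)
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetUser><Id>42</Id></GetUser>
  </soap:Body>
</soap:Envelope>"""
```

The envelope overhead and XML-only payload are a large part of why SOAP is slower and harder to cache than REST.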
Cloud Development Basics
Common Pitfalls
Portability Issues
Cloud Appropriateness
Integration Challenges
Cloud Environment Challenges
Cloud Development Challenges
Common Vulnerabilities – OWASP top 10 covers the following categories:
A1 – Injection: Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
A2 – Broken Authentication and Session Management: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities.
A3 – Cross-Site Scripting (XSS): XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.
A4 – Insecure Direct Object References: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.
A5 – Security Misconfiguration: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.
A6 – Sensitive Data Exposure: Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.
A7 – Missing Function Level Access Control: Most web applications verify function-level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests to access functionality without proper authorization.
A8 – Cross-Site Request Forgery (CSRF): A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim.
A9 – Using Components with Known Vulnerabilities: Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts.
A10 – Unvalidated Redirects and Forwards: Web applications frequently redirect and forward users to other pages and websites; they use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.
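The A1 injection flaw at the top of the list has a well-known fix: never concatenate untrusted data into a query string; let the database driver bind values as parameters. A minimal demonstration using Python's stdlib sqlite3 as a stand-in database:

```python
# A1 Injection: vulnerable string concatenation vs. parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "alice' OR '1'='1"

# Vulnerable: untrusted data is parsed as part of the SQL, so the
# injected OR clause matches every row
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % attacker_input).fetchall()

# Safe: the driver binds the value; it is compared as a literal string
# and never interpreted as SQL
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)).fetchall()
```

The vulnerable query leaks the admin row; the parameterized query returns nothing, because no user is literally named `alice' OR '1'='1`.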
Cloud Software Assurance and Validation
Cloud-based Functional Testing
Cloud Secure Development Lifecycle
Security Testing
Dynamic Application Security Testing (DAST)
Considered a black-box test; used to discover individual execution paths in the application being analyzed
Unlike SAST, which analyzes code offline (when the code is not running), DAST is used against applications in their running state.
DAST is mainly considered effective when testing exposed HTTP and HTML interfaces of web applications.
Pen Testing
Runtime Application Self-Protection
Static Application Security Testing (SAST)
Considered a white-box test, in which the tester analyzes the application source code, byte code, and binaries without executing the application code
SAST is used to determine coding errors and omissions that are indicative of security vulnerabilities.
SAST is often used as a test method while the application is under development (early in the development lifecycle)
SAST can be used to find XSS errors, SQL injection, buffer overflows, unhandled error conditions, and potential backdoors
Typically delivers more comprehensive results than using DAST
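The core SAST idea — analyzing source offline, without executing it — can be sketched with Python's stdlib `ast` module. This toy check flags calls to `eval()`, one class of pattern a real SAST tool would report; the example source and function name are invented.

```python
# Toy static-analysis (SAST) check: walk a source file's syntax tree
# without running it, and flag dangerous eval() calls.
import ast

SOURCE = """
user_input = input()
result = eval(user_input)   # dangerous: arbitrary code execution
"""

def find_eval_calls(source: str):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)   # report the offending line number
    return findings

issues = find_eval_calls(SOURCE)
```

Because the analysis never executes the code, it can run early in the development lifecycle, on every commit, which is exactly where the notes above place SAST.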
Vulnerability Scanning
Understanding the Software Development Lifecycle (SDLC) Process
Phases and Methodologies
Requirement Gathering and Feasibility
Requirement Analysis
Design
Development Coding
Testing
Maintenance during Lifecycle
Business Requirements
Software Configuration Management and Versioning
Applying the Secure Software Development Lifecycle
Cloud-Specific Risks
Data Breaches
Insufficient Identity, Credential, and Access Management
Unsecured Interfaces and APIs
System Vulnerabilities
Account Hijacking
Malicious Insiders
Advanced Persistent Threats
Data Loss
Insufficient Due Diligence
Abusive Use of Cloud Services
Denial of Service
Shared Technology Issues
Quality of Service
Threat Modeling
STRIDE: spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privileges
DREAD: damage potential, reproducibility, exploitability, affected users, discoverability
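DREAD is commonly applied by rating each of its five categories on a fixed scale (often 1–10) and averaging them into a single risk value. A minimal sketch; the scale and the sample scores are illustrative:

```python
# DREAD scoring sketch: risk = average of the five category ratings.

def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Example: a hypothetical threat rated on a 1-10 scale
risk = dread_score(damage=8, reproducibility=6, exploitability=7,
                   affected_users=9, discoverability=5)
```

The single averaged number makes threats directly comparable, which is the main appeal (and, critics note, the main oversimplification) of DREAD.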
ISO/IEC 27034-1
“Information Technology – Security Techniques – Application Security”
Defines concepts, frameworks, and processes to help organizations integrate security within their software development lifecycle
Organizational Normative Framework (ONF)
All components of application security best practices
The containers include the following:
Business context: Includes all application security policies, standards, and best practices adopted by the organization
Regulatory context: Includes all standards, laws, and regulations that affect application security
Technical context: Includes required and available technologies that are applicable to application security
Specifications: Documents the organization’s IT functional requirements and the solutions that are appropriate to address these requirements
Roles, responsibilities, and qualifications: Documents the actors within an organization who are related to IT applications
Processes: Relates to application security
Application security control library: Contains the approved controls that are required to protect an application based on the identified threats, the context, and the targeted level of trust
Application Normative Framework (ANF)
Used in conjunction with the ONF
Created for a specific application
Maintains the applicable portions of the ONF that are needed to enable a specific application to achieve a required level of security or the targeted level of trust
The ONF to ANF is a one-to-many relationship, where one ONF is used as the basis to create multiple ANFs
Application Security Management Process
ISO/IEC 27034-1 defines an application security management process (ASMP) to manage and maintain each ANF
The ASMP is created in five steps:
Specifying the application requirements and environment
Assessing application security risks
Creating and maintaining the ANF
Provisioning and operating the application
Auditing the security of the application
Cloud Application Architecture
Supplemental Security Devices
Firewalls
Web Application Firewalls
XML Appliances
Cryptography
Sandboxing
Application Virtualization
Identity and Access Management (IAM) Solutions
Federated Identity
SAML
OAuth
OpenID
WS-Federation
Identity Providers
Single Sign-On
Multifactor Authentication
Something the User Knows
Something the User Possesses
Something the User Is
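The "something the user possesses" factor is typically realized with a one-time-password token. The RFC 4226 HOTP algorithm, the counter-based scheme underlying TOTP hardware and phone authenticators, fits in a few stdlib-only lines:

```python
# RFC 4226 HOTP: HMAC-SHA1 over a moving counter, dynamically truncated
# to a short numeric code. TOTP replaces the counter with a time step.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret; counter 0 yields the published code 755224
otp = hotp(b"12345678901234567890", 0)
```

The server verifies the same computation with its copy of the secret, so possession of the token (the secret plus a synchronized counter or clock) is what is actually being proven.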
Domain 6: Legal and Compliance Domain
Legal Requirements and Unique Risks within the Cloud Environment
International Legislation Conflicts
Appraisal of Legal Risks Specific to Cloud Computing
Legal Controls
eDiscovery
Legal Issues
Conducting eDiscovery in the Cloud
eDiscovery against the Cloud Provider
ISO/IEC 27050
CSA Guidance
Forensic Requirements
Privacy Issues and Jurisdictional Variation
Difference between Contractual and Regulated PII
Contractual PII – affected by the contract between CSP and customer
Regulated PII – affected by whatever laws and regulations apply
Country-Specific Legislation Related to PII and Data Privacy
USA
Gramm-Leach-Bliley Act (Financial Modernization Act of 1999)
HIPAA (1996)
Sarbanes-Oxley Act (SOX)
European Union (EU)
Directive 95/46 EC (1995)
General Data Protection Regulation (2016)
Differences among Confidentiality, Integrity, Availability, and Privacy
Confidentiality
Integrity
Availability
Privacy
Audit Processes, Methodologies, and Required Adaptions for a Cloud Environment
Internal and External Audit Controls
Impact of Requirements Programs Using Cloud Services
Assurance Challenges of Virtualization and Cloud
Types of Audit Reports
SAS 70 (defunct)
SSAE:
SOC 1
SOC 2
SOC 3
ISAE
Restrictions of Audit Scope Statements
Gap Analysis – requires management support
Audit Plan
Define Objectives
Define Scope
Audit Steps and Procedures
Change Management
Communications
Criteria and Metrics
Physical Access and Location
Previous Audits
Remediation
Reporting
Scale and Inclusion
Timing
Conduct the Audit
Lessons Learned and Analysis
Audit Duplication and Overlap
Data Collection Processes
Report Evaluation
Scope and Limitations Analysis
Staff and Expertise
Standards Requirements
ISO/IEC 27018 – addresses the privacy aspects of cloud computing for consumers; it is the first international set of privacy controls in the cloud
Communication
Consent
Control
Transparency
Independent and Yearly Audit
Generally Accepted Privacy Principles (GAPP)
Management
Notice
Choice and Consent
Collection
Use, Retention, and Disposal
Access
Disclosure to Third Parties
Security for Privacy
Quality
Monitoring and Enforcement
Internal Information Security Management System (ISMS)
Internal Information Security Controls System (27001:2013 domains)
Information Security Policies
Organization of Information Security
Human Resources Security
Asset Management
Access Control
Cryptography
Physical and Environmental Security
Operations Security
Communications Security
System Acquisition, Development, and Maintenance
Supplier Relationships
Information Security Incident Management
Information Security Aspects of Business Continuity Management
Compliance
Policies
Identification and Involvement of Relevant Stakeholders
Specialized Compliance Requirements for Highly Regulated Industries
Impact of Distributed IT Model
Implications of Cloud to Enterprise Risk Management
Assess Provider’s Risk Management
Difference Between Data Owner/Controller vs. Data Custodian/Processor
Data Owner
Data Custodian
Risk Mitigation
Different Risk Frameworks
European Network and Information Security Agency (ENISA)
Top 8 security risks
ISO 31000:2009
National Institute of Standards and Technology (NIST)
Metrics for Risk Management
Assessment of the Risk Environment
Outsourcing and Cloud Contract Design
Business Requirements
Vendor Management
Contract Management
Access to Systems
Backup and Disaster Recovery
Data Retention and Disposal
Definitions
Incident Response
Litigation
Metrics
Performance Requirements
Regulatory and Legal Requirements
Security Requirements
Termination
Executive Vendor Management
Supply-Chain Management
Domain “X”:
Privacy Level Agreement
In this context, the CSA has defined baselines for compliance with data protection legislation and leading practices, realized in a standard format named the Privacy Level Agreement (PLA). By means of the PLA, the service provider declares the level of personal data protection and security that it sustains for the relevant data processing.
The PLA, as defined by the CSA, does the following:
Provides a clear and effective way to communicate the level of personal data protection offered by a service provider
Works as a tool to assess the level of a service provider’s compliance with data protection legislative requirements and leading practices
Provides a way to offer contractual protection against possible financial damages due to lack of compliance
Data Owner/Controller and the Data Custodian/Processor
The data subject is an individual who is the focus of personal data.
The data controller is a person who either alone or jointly with other persons determines the purposes for which and the manner in which any personal data is processed.
The data processor in relation to personal data is any person other than an employee of the data controller who processes the data on behalf of the data controller.
Data stewards are commonly responsible for data content, context, and associated business rules.
Data custodians are responsible for the safe custody, transport, data storage, and implementation of business rules.
Data owners hold the legal rights and complete control over a single piece or set of data elements. Data owners also possess the ability to define distribution and associated policies.
International Laws
The Asia-Pacific Economic Cooperation (APEC) Privacy Framework provides a regional standard to address privacy as an international issue
The EU Directive 95/46/EC provides for the regulation of the protection of personal data within the European Union; designed to protect the privacy and protection of all personal data collected for or about citizens of the European Union
The European Commission unveiled a draft EU General Data Protection Regulation to supersede the Data Protection Directive
United States
There is no single federal law governing data protection
HIPAA requires the Department of Health and Human Services to adopt national standards for electronic healthcare transactions involving Protected Health Information (PHI)
GLBA (aka the Financial Modernization Act of 1999) is a federal law that controls the ways financial institutions deal with consumers' PII
SOX is U.S. legislation enacted to protect shareholders and the general public from accounting errors and fraudulent practices in the enterprise.
Australia, New Zealand and Japan closely follow the EU privacy laws.
Regulations in Australia and New Zealand make it extremely difficult for enterprises to move sensitive information to CSPs that store data outside of the Australian and New Zealand borders.
Data Discovery Approaches
The continuing evolution of data discovery in the enterprise and the cloud is being driven by these trends:
Big data: the volume of data that must be efficiently processed for discovery is larger, and the diversity of sources and formats presents challenges that make many traditional methods of data discovery fail
Real-time analytics: has created a new class of use cases for data discovery
Agile analytics and agile business intelligence
Data Discovery Techniques
Metadata: This is data that describes data
Labels: marked with a tag that describes the data
Content analysis: the data is analyzed by employing pattern matching, hashing, statistical, or other forms of probability analysis. A common method for detecting credit card numbers is to perform a Luhn check on the candidate number: a simple numeric checksum used by credit card companies to verify whether a number is valid.
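The Luhn check described above can be sketched in a few lines of Python; this is an illustrative example (the function name and test values are my own, not from the source), showing how a content-analysis scanner might flag digit strings that pass the checksum as candidate card numbers:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum.

    In a data-discovery scanner, a digit string that passes this
    check is a candidate credit card number worth flagging.
    """
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    # Walk the digits right-to-left; double every second digit,
    # subtracting 9 when the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 4111 1111 1111 1111 is a well-known test card number
print(luhn_valid("4111 1111 1111 1111"))  # True
print(luhn_valid("4111 1111 1111 1112"))  # False
```

Note that passing the Luhn check only means a number is plausible, which is why scanners combine it with pattern matching (length, known issuer prefixes) to reduce false positives.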
The GOOD stuff:
ISO Standards
ISO/IEC 27001 formally specifies an Information Security Management System (ISMS), a suite of activities concerning the management of information risks (called ‘information security risks’ in the standard). The ISMS is an overarching management framework through which the organization identifies, analyzes and addresses its information risks. The ISMS ensures that the security arrangements are fine-tuned to keep pace with changes to the security threats, vulnerabilities and business impacts; this is an important aspect in such a dynamic field, and a key advantage of ISO27k’s flexible risk-driven approach as compared to, say, PCI-DSS.
ISO/IEC 27021:2017 – this standard lays out the competence requirements for ISMS professionals. The standard concerns the competences (meaning the combination of knowledge and skills) required or expected of professionals managing an ISMS.
ISO/IEC 27002 is a code of practice that recommends information security controls addressing information security control objectives arising from risks to the confidentiality, integrity and availability of information.
ISO/IEC 27017 – gives guidelines for information security controls applicable to the provision and use of cloud services. It provides additional implementation guidance for relevant controls specified in ISO/IEC 27002; also, additional controls with implementation guidance that specifically relate to cloud services.
ISO/IEC 27018 – establishes commonly accepted control objectives, controls and guidelines for implementing measures to protect Personally Identifiable Information (PII) in accordance with the privacy principles in ISO/IEC 29100 for the public cloud computing environment. It specifies guidelines based on ISO/IEC 27002, taking into consideration the regulatory requirements for the protection of PII which might be applicable within the context of the information security risk environment(s) of a provider of public cloud services.
ISO/IEC 27034-5:2017 – outlines and explains the minimal set of essential attributes of Application Security Controls (ASCs) and details the activities and roles of the Application Security Life Cycle Reference Model.
ISO/IEC 27050 – provides an overview of electronic discovery. In addition, it defines related terms and describes the concepts, including, but not limited to, identification, preservation, collection, processing, review, analysis, and production of Electronically Stored Information (ESI).
ISO/IEC 24773 – certification of software and systems engineering professionals.
ISO 31000:2018 – provides principles, framework and a process for managing risk. It can be used by any organization regardless of its size, activity or sector.
NIST SP 800-145 – The NIST Definition of Cloud Computing
NIST SP 800-146 – Cloud Computing Synopsis and Recommendations
NIST SP 500-293 – USFG Cloud Computing Technology Roadmap (Volumes I-III)
Roles and responsibilities for data protection and management; the customer owns the data and is therefore ultimately responsible for its protection.
CISSP topics: C-I-A, Least privilege, Separation of duties, SIEM, DNS.
3 drag and drops on laws in different countries and definitions.
Cloud features and benefits
Cloud architectures
Representational State Transfer (REST): A software architecture style consisting of guidelines and best practices for creating scalable web services (page 157-8)
SDLC
Which cloud service is suited for what applications
Deployment questions
SAML and federated IdM
Cryptoshredding
Encryption that is best suited for a given scenario
Key management
Masking, Obfuscation, Anonymization, Tokenization
BC/DRP
Jurisdiction issues
Data lifecycle phases (CSUSAD)
Bit splitting and homomorphic encryption
Privacy – all topics
SLA (terms and elements) and PLA
DRM and IRM
All of Domain 1
APIs
Cloud design by the cloud architect
Hardware configuration issues
OSI layers (layer 3)
Switches, VLANs, and port mirroring
Plan-Do-Check-Act Model