ISC2 CCSP Exam Questions
81.
Which of the following involves visualizing how data is used to inform access controls and compliance efforts?
-
Data flow diagram
-
Data dispersion
-
Data mapping
-
Data labeling
Correct answer: Data flow diagram
"Visualizing" is the keyword in the question. A data flow diagram (DFD) visualizes data flows by mapping them between an organization’s various locations and applications. This helps to maintain data visibility and implement effective access controls and regulatory compliance.
Data dispersion is when data is distributed across multiple locations to improve resiliency. Overlapping coverage makes it possible to reconstruct data if a portion of it is lost.
Data mapping identifies data requiring protection within an organization. This helps ensure that the data is properly protected wherever it is used.
Data labels contain metadata describing important features of the data. For example, data labels could include information about ownership, classification, limitations on use or distribution, and when the data was created and should be disposed of.
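As an illustration, a data label is often stored as structured metadata alongside the data itself. Here is a minimal sketch; the field names are hypothetical, not from any particular standard:

```python
# A hypothetical data label expressed as structured metadata.
# Field names are illustrative only; real labeling schemes vary by organization.
data_label = {
    "owner": "finance-team@example.com",
    "classification": "Confidential",
    "allowed_use": ["internal-reporting"],
    "created": "2024-01-15",
    "dispose_after": "2031-01-15",
}
```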
82.
Under the Federal Information Security Management Act (FISMA), all U.S. government agencies are required to conduct risk assessments that align with which framework?
-
National Institute of Standards and Technology (NIST) Risk Management Framework (RMF)
-
International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 31000
-
National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF)
-
Federal Risk and Authorization Management Program (FedRAMP)
Correct answer: National Institute of Standards and Technology (NIST) Risk Management Framework (RMF)
The NIST Risk Management Framework acts as a guide for risk management practices used by United States federal agencies.
NIST developed the NIST CSF to assist commercial enterprises in developing and executing security strategies.
FedRAMP is a cloud-specific program based on NIST SP 800-53 that contains policies and procedures to assist cloud service providers in adopting security controls and performing risk assessments.
ISO/IEC 31000, "Risk Management - Guidelines," provides guidance to be used during the risk management process.
83.
A small bank has recently experienced a data breach. You have been working with the Incident Response team. They have discovered that the bad actor was able to copy out a database after performing a man-in-the-middle (MitM) attack against the Diffie-Hellman exchange that occurred on a user's connection.
Which of the OWASP Top 10 security threats has been experienced by this company?
-
Cryptographic Failures
-
Broken access control
-
Identification and Authentication Failures
-
Software and Data Integrity Failures
Correct answer: Cryptographic Failures
When creating and managing a web application, it's vital to keep sensitive user information private. Many web applications handle data such as credit card information, authentication data, and other personally identifiable information. The OWASP Top 10 lists the most critical web application security risks. Cryptographic failures occur in a few different ways; in this question, it is the failure to protect the Diffie-Hellman (DH) key exchange. Unauthenticated DH is susceptible to MitM attacks unless it is paired with an authentication mechanism, such as RSA-based signatures.
Cryptographic Failures was called Sensitive Data Exposure on the 2017 OWASP list.
Identification and Authentication Failures was called Broken Authentication on the OWASP 2017 list. The bad actor intercepted the user's connection, but it is the MitM against DH that is the problem here.
Software and Data Integrity Failures was called Insecure Deserialization on the OWASP 2017 list. An example is a browser or application using untrusted plugins that then allows compromise of the integrity of the data.
Broken Access Control sits at the top of the 2021 OWASP Top 10, but it is not what occurred here. Broken access control takes a variety of forms, such as failing to set up access based on the principle of least privilege, or allowing an average user to elevate permissions when they should not be able to.
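To see why an unauthenticated DH exchange is vulnerable, consider the toy sketch below, in which an attacker completes two separate exchanges, one with each party. The numbers are tiny for readability; real DH uses large primes and must be authenticated:

```python
# Toy Diffie-Hellman MitM demonstration (illustrative only: real DH uses
# large safe primes and must be authenticated, e.g., with RSA signatures).
p, g = 23, 5                      # small public parameters for readability

a, b, m = 6, 15, 13               # Alice's, Bob's, and Mallory's secrets

A = pow(g, a, p)                  # Alice sends A, but Mallory intercepts it
M = pow(g, m, p)                  # Mallory sends her own value to both sides
B = pow(g, b, p)                  # Bob sends B, also intercepted

# Without authentication, neither party can tell M apart from a real peer.
alice_secret = pow(M, a, p)       # Alice "shares" a key with Mallory
bob_secret = pow(M, b, p)         # Bob "shares" a different key with Mallory

assert alice_secret == pow(A, m, p)
assert bob_secret == pow(B, m, p)
# Mallory can now decrypt and re-encrypt all traffic between Alice and Bob.
```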
84.
A cloud architect is designing a Disaster Recovery (DR) solution for the bank they work at. For their most critical server, they have determined that it can be offline for no more than 10 minutes at any point in time, and they cannot lose more than 2 seconds' worth of data.
When choosing whether to fail over to another region within their current cloud provider or to another cloud provider entirely, they need to base that decision mainly on which of the following?
-
The Maximum Tolerable Downtime (MTD)
-
The Recovery Point Objective (RPO)
-
The Recovery Service Level (RSL)
-
The Recovery Time Objective (RTO)
Correct answer: The Maximum Tolerable Downtime (MTD)
The MTD is the maximum amount of time that a server can be offline before causing unacceptable harm to the business. There are a variety of considerations between the two options, such as: 1) Will it be possible to fail over to another region, or will that region also be offline? and 2) How long will it take to fail over to the other provider? There are other considerations for choosing the best option, such as a cost/benefit analysis, but that is not an option within this question.
The RPO is the maximum amount of data, measured in time, that they can afford to lose. In this question, it is two seconds. Since the question is not asking which data backup service they have to choose from, that is not the right answer.
RSL is the percentage of functionality that must be provided by the DR alternative. That would be a consideration, but the question does not go far enough to indicate that we are talking about the RSL.
RTO is the time allotted for administrators to do the work of switching services to the other region or provider; it must be less than or equal to the MTD. It is not what the question is asking about, however.
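As a quick numeric illustration, the sketch below checks hypothetical failover options against the stated MTD and RPO. The estimated downtime and data-loss figures are invented for illustration:

```python
# Hypothetical failover options checked against the stated objectives.
# The estimated figures below are invented for illustration.
MTD_SECONDS = 10 * 60   # offline no more than 10 minutes
RPO_SECONDS = 2         # lose no more than 2 seconds of data

options = {
    "same provider, other region": {"downtime": 240, "data_loss": 1},
    "different provider":          {"downtime": 900, "data_loss": 5},
}

for name, est in options.items():
    ok = est["downtime"] <= MTD_SECONDS and est["data_loss"] <= RPO_SECONDS
    print(f"{name}: {'viable' if ok else 'does not meet objectives'}")
```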
85.
Alia is working with the software developers through the DevOps process as an information security manager. She is focused on threat modeling for a specific software project. When should threat modeling be performed?
-
Throughout the whole of the software development lifecycle
-
Early in the software development lifecycle, the requirements phase
-
After the requirements are understood and before development
-
During the development phase as the code is being created
Correct answer: Throughout the whole of the software development lifecycle
Threat modeling should be performed throughout the whole of the lifecycle. It is critical to always be assessing what could happen and how to prevent those attacks, breaches, failures, etc.
The steps are as follows:
- Define security requirements
- Create an application overview
- Identify threats
- Mitigate threats
- Validate threat mitigation
Good information about threat modeling can be found at OWASP's website.
86.
A public cloud provider has been building data centers around the world. They now have a data center on most of the continents. They have not built any in Antarctica yet. What has driven the cloud provider to build so many data centers?
-
Improved performance and scalability
-
Reduce resilience and redundancy
-
Easier communication throughout the company
-
Centralized control of Information Technology
Correct answer: Improved performance and scalability
With a distributed IT model, there are many benefits. As a cloud provider, it is more efficient to have data centers closer to where the users are. Microsoft has even experimented with a submerged data center off the northern coast of Scotland (Project Natick). The specific location was not the key; it was an experiment to see whether an underwater data center would work. More people on the planet live close to an ocean than not.
By building many data centers, resilience and redundancy are actually improved, not reduced.
Easier communication throughout the company may be something that they want, but a distributed IT model does not have that as a goal or a benefit. That is fundamentally a different topic.
By building so many data centers, or a distributed IT environment, control of Information Technology (IT) can be localized: compliance with the laws of the country where a data center resides can be managed by people in that country. The goal is not to centralize control of IT.
87.
Deploying redundant and resilient systems such as load balancers is MOST relevant to an organization's efforts in which of the following areas?
-
Availability Management
-
Problem Management
-
Service Level Management
-
Capacity Management
Correct answer: Availability Management
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
- Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
- Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
- Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
- Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and progress toward goals.
- Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
- Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
- Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
- Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
- Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management formalizes the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files. (A minimal drift-check sketch follows this list.)
- Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
- Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
- Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
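As referenced in the Configuration Management item above, here is a minimal sketch of a configuration drift check against an approved baseline. The setting names and the check logic are hypothetical, not taken from ITIL or ISO/IEC 20000-1:

```python
# Hypothetical configuration drift check against an approved baseline.
# Setting names are illustrative; real baselines are far more extensive.
approved_baseline = {
    "ssh_root_login": "disabled",
    "tls_min_version": "1.2",
    "password_min_length": 14,
}

def find_drift(reported: dict) -> dict:
    """Return settings that deviate from the approved baseline."""
    return {
        key: {"expected": expected, "actual": reported.get(key)}
        for key, expected in approved_baseline.items()
        if reported.get(key) != expected
    }

print(find_drift({"ssh_root_login": "enabled", "tls_min_version": "1.2"}))
```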
88.
An organization is in the middle of creating a new cloud-based application that will use Application Programming Interfaces (API) to communicate with their partner companies. Due to the design of the application, they need to use multiple data formats, including both JavaScript Object Notation (JSON) and eXtensible Markup Language (XML), in their cloud deployment.
Which API type should they use?
-
Representational State Transfer (REST)
-
SOAP (formerly Simple Object Access Protocol)
-
Remote Procedure Call (RPC)
-
JavaScript Object Notation- Remote Procedure Call (JSON-RPC)
Correct answer: Representational State Transfer (REST)
Representational State Transfer (REST) is a software architectural style that supports multiple data formats, including both JSON and XML.
SOAP supports only the use of XML-formatted data types, so it would not work for the organization.
RPC can be considered an API style. It is oriented around invoking commands (procedures), whereas REST makes Create, Read, Update, Delete (CRUD) operations available to your application.
JSON-RPC uses just JSON, not the XML support that the question also requires.
The website for Smashing Magazine had a good write-up about some API implementations since there is very little in the ISC2 books.
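As a brief illustration of how a single REST resource can be served in multiple formats, the sketch below uses HTTP content negotiation via the Accept header. The endpoint URL is hypothetical:

```python
# Requesting the same REST resource in two formats via content negotiation.
# The endpoint URL below is hypothetical.
import urllib.request

def fetch(url: str, media_type: str) -> bytes:
    req = urllib.request.Request(url, headers={"Accept": media_type})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

base = "https://api.example.com/v1/orders/42"
as_json = fetch(base, "application/json")  # {"id": 42, ...}
as_xml = fetch(base, "application/xml")    # <order><id>42</id>...</order>
```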
89.
Bina works for a retail corporation. She has been working with the Information Technology (IT) department to ensure that she brings her security knowledge to their daily operations. A recent vulnerability scan was performed on the Infrastructure as a Service (IaaS) cloud environment. They discovered that a number of different types of servers needed patches.
As they download the available patches from the vendors, they should be checking which of the following?
-
Hash value encrypted with the vendor's private key to create a digital signature
-
Hash value encrypted with the vendor’s public key to create a digital signature
-
Hash value encrypted with the vendor's symmetric and private key to create a digital signature
-
Hash value encrypted with the vendor’s symmetric key to create a digital signature
Correct answer: Hash value encrypted with the vendor's private key to create a digital signature
It's very important to ensure that downloaded security patches are actually from the vendor and have not been modified by an attacker. In many cases, vendors will provide a hash value that can be used to validate the downloaded patch file. When these hash values are available, they should be used to verify that the patch file matches what the vendor has provided. The hash should be signed by the vendor with their private key and validated with the vendor's matching public key.
Symmetric keys are not used to create or validate digital signatures although they could be used for transmission from the vendor to the customer.
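As a minimal sketch, the integrity half of this check can be done with a standard hash function; verifying the vendor's signature over that hash (with their public key) additionally proves authenticity. The file name and expected digest below are hypothetical:

```python
# Verify a downloaded patch against a vendor-published SHA-256 digest.
# The file name and expected digest below are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = sha256_of("patch-1.2.3.bin")
print("integrity OK" if actual == expected else "MISMATCH: do not install")
# Note: a matching hash alone proves integrity, not origin. Checking the
# vendor's digital signature over the hash (with their public key) proves
# the patch actually came from the vendor.
```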
90.
During which phase of the SDLC should test cases be executed for the identified requirements?
-
Testing
-
Requirements
-
Development
-
Operations and Maintenance
Correct answer: Testing
The software development lifecycle (SDLC) describes the main phases of software development from initial planning to end-of-life. While definitions differ, one commonly used description includes these phases:
- Requirements: During the requirements phase, the team identifies the software's role and the applicable requirements. This includes business, functional, and security requirements.
- Design: During this phase, the team creates a plan for the software that fulfills the previously identified requirements. Often, this is an iterative process as the design moves from high-level plans to specific ones. Also, the team may develop test cases during this phase to verify the software against requirements.
- Development: This phase is when the software is written. It includes everything up to the actual build of the software, and unit testing should be performed regularly through the development phase to verify that individual components meet requirements.
- Testing: After the software has been built, it undergoes more extensive testing. This should verify the software against all test cases and ensure that they map back to and fulfill all of the software’s requirements. (A minimal example of executing such a test case follows this list.)
- Deployment: During the deployment phase, the software moves from development to release. During this phase, the default configurations of the software are defined and reviewed to ensure that they are secure and hardened against potential attacks.
- Operations and Maintenance (O&M): The O&M phase covers the software from release to end-of-life. During O&M, the software should undergo regular monitoring, testing, etc. to ensure it remains secure and fit for purpose.
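As referenced in the Testing item above, here is a minimal sketch of executing a test case that traces back to a requirement, using Python's built-in unittest module. The requirement ID and the function under test are hypothetical:

```python
# Executing a test case traced to a requirement (IDs are hypothetical).
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Function under test."""
    return round(price * (1 - percent / 100), 2)

class TestPricing(unittest.TestCase):
    def test_req_017_discount_applied(self):
        # Traces to hypothetical requirement REQ-017: "10% member discount."
        self.assertEqual(apply_discount(100.00, 10), 90.00)

if __name__ == "__main__":
    unittest.main()
```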
91.
Your organization wants to address baseline monitoring and compliance by limiting how long a host can remain in a non-compliant state. When the application is deployed again, the organization would like to decommission the old host and replace it with a new Virtual Machine (VM) built from the standard baseline image.
What functionality is described here?
-
Immutable architecture
-
Virtual architecture
-
Blockchain
-
Infrastructure as Code (IaC)
Correct answer: Immutable architecture
Immutable means unchanging over time, or unable to be changed. Immutability of cloud infrastructure is a preferred state. In cloud settings, it is easy to decommission all the virtual infrastructure components used by an older version of software and deploy a new virtual infrastructure in its place. Immutable infrastructure is a solution to the problem of systems deviating from baseline settings over time: rather than patching a running host, each deployment starts virtual machines from a standard ("golden") image.
IaC describes infrastructure in machine-readable definition files: the infrastructure is no longer physical routers, switches, and servers but virtual ones. That could also be called a virtual architecture, although IaC is the common term today. IaC enables immutable architecture, but it is not itself the functionality described in the question.
Blockchain technology has an immutable element: it is, or should be, impossible to alter the record of ownership, as with cryptocurrency. The FBI has even been able to recover stolen bitcoins and return them to their rightful owners. Blockchain is not, however, the infrastructure pattern described here.
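As a hedged sketch of this replace-rather-than-patch pattern, the snippet below uses the AWS boto3 SDK to launch a replacement VM from a golden image and terminate the drifted host. The AMI ID, instance ID, and instance type are placeholder values, and this is one possible implementation, not the only one:

```python
# Replace a non-compliant VM with a fresh one built from the golden image.
# AMI ID, instance ID, and instance type are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

GOLDEN_AMI = "ami-0123456789abcdef0"   # standard baseline image
old_instance = "i-0123456789abcdef0"   # host found to be non-compliant

# Launch the replacement first, then retire the drifted host.
ec2.run_instances(
    ImageId=GOLDEN_AMI,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
ec2.terminate_instances(InstanceIds=[old_instance])
```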
92.
Pabla has been working with their corporation to understand the impact that particular threats can have on their Infrastructure as a Service (IaaS) implementation. The information gathered through this process will be used to determine the correct solutions and procedures that will be built to ensure survival through many different incidents and disasters. To perform a quantitative assessment, they must determine their Single Loss Expectancy (SLE) for the corporation's Structured Query Language (SQL) database in the event that the data is encrypted through the use of ransomware.
Which of the following is the BEST definition of SLE?
-
SLE is the value of the event given a certain percentage loss of the asset
-
SLE is the value of the cost of the event multiplied times the asset value
-
SLE is the value of the event given the value of the asset and the time it can be down
-
SLE is the value of the asset given the amount of time it will be offline in a given year
Correct answer: SLE is the value of the event given a certain percentage loss of the asset
SLE is calculated by multiplying the asset value (AV) by the exposure factor (EF), where the exposure factor is effectively the percentage of the asset's value lost in a single event: SLE = AV x EF.
The Annual Rate of Occurrence (ARO) is the number of times that event is expected within a given year.
Multiplying the SLE by the ARO gives the Annualized Loss Expectancy (ALE): ALE = SLE x ARO.
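A quick worked example for this scenario, using invented numbers:

```python
# Worked example with invented numbers for the SQL database scenario.
asset_value = 500_000      # value of the database asset, in dollars
exposure_factor = 0.40     # 40% of the asset's value lost per ransomware event
sle = asset_value * exposure_factor            # Single Loss Expectancy
aro = 0.5                  # one such event expected every two years
ale = sle * aro            # Annualized Loss Expectancy

print(f"SLE = ${sle:,.0f}")   # SLE = $200,000
print(f"ALE = ${ale:,.0f}")   # ALE = $100,000
```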
"SLE is the value of the cost of the event multiplied times the asset value" is an incorrect answer because SLE multiplies the asset value by the percentage of loss (the exposure factor), not by the cost of the event.
SLE is the value of the event given the value of the asset and the time it can be down is an incorrect answer because the time it can be offline is not a factor. That would be the Maximum Tolerable Downtime (MTD).
"SLE is the value of the asset given the amount of time it will be offline in a given year" is an incorrect answer because the amount of time it can be offline in a given year is not a factor. That is typically represented by nines of availability (e.g., 99.999% uptime).
93.
Many cloud customers have legal requirements to protect data that they place on the cloud provider's servers. There are some legal responsibilities for the cloud provider to protect that data. Therefore, it is normal for the cloud provider to have their data centers audited using which of the following?
-
External auditor
-
Internal auditor
-
Cloud architect
-
Cloud operators
Correct answer: External auditor
An external auditor is not employed by the company being audited. An external auditor will often use industry standards such as ISO 27001 and SOC 2 to perform an audit of a cloud provider. Due to the legal requirements, this work needs to be done by an independent party. Therefore, internal auditors are not the correct answer here.
Cloud architects design cloud structures, and cloud operators do the daily maintenance and monitoring of the cloud, according to the Cloud Security Alliance (CSA).
94.
Which of the following SaaS risks is MOST related to how SaaS offerings are made available to customers?
-
Web Application Security
-
Virtualization
-
Proprietary Formats
-
Persistent Backdoors
Correct answer: Web Application Security
A Software as a Service (SaaS) environment has all of the risks that IaaS and PaaS environments have, as well as new risks of its own. Some risks unique to SaaS include:
- Proprietary Formats: With SaaS, a customer is using a vendor-provided solution. This may use proprietary formats that are incompatible with other software or create a risk of vendor lock-in if the organization’s systems are built around these formats.
- Virtualization: SaaS uses even more virtualized environments than PaaS, increasing the potential for VM escapes, information bleed, and similar threats.
- Web Application Security: Most SaaS offerings are web applications with a provided application programming interface (API). Both web apps and APIs carry potential vulnerabilities and security risks that may exist in these solutions.
95.
Rosario, a systems engineer, is tasked with configuring an Electronic Health Record (EHR) system that their organization will use directly.
What industry does Rosario MOST LIKELY work in?
-
Healthcare
-
Consumer electronics
-
Finance
-
Federal government
Correct answer: Healthcare
The Health Information Technology for Economic and Clinical Health (HITECH) Act is the U.S. legislation that gives healthcare organizations incentives to use electronic health records (EHRs). HITECH also included updates to the Health Insurance Portability and Accountability Act (HIPAA).
There is no specific information in the question to suggest Rosario works in consumer electronics, finance, or the federal government.
96.
Jax is a cloud security analyst working for a large manufacturing company. An Indicator of Compromise (IoC) has been discovered by their Security Information and Event Management (SIEM) system. In analyzing the IoC, Jax discovered that there is an issue that needs to be addressed. One of the things that Jax needs to identify is the severity of the flaw or weakness behind the IoC.
What could she use to do that?
-
Common Vulnerability Scoring System
-
Common Weakness Enumeration
-
Common Vulnerabilities and Exposures
-
National Vulnerability Database
Correct answer: Common Vulnerability Scoring System
The Common Vulnerability Scoring System (CVSS) is a standardized framework used to assess and communicate the severity of security vulnerabilities in computer systems and software. The purpose of CVSS is to provide a consistent and objective way to evaluate the potential impact and exploitability of vulnerabilities, enabling organizations to prioritize their response and allocate resources effectively.
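For example, CVSS v3.x base scores map to qualitative severity ratings (None, Low, Medium, High, Critical). A short sketch applying that standard mapping; the example score is hypothetical:

```python
# Map a CVSS v3.x base score to its qualitative severity rating.
def cvss_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(8.1))  # a hypothetical flaw scoring 8.1 -> "High"
```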
The National Vulnerability Database (NVD) is a comprehensive repository of information about known vulnerabilities and security issues in software and hardware products. It is maintained by the National Institute of Standards and Technology (NIST) in the United States and serves as a central resource for vulnerability management, risk assessment, and cybersecurity research.
Common Weakness Enumeration (CWE) is a community-developed list of common software weaknesses and vulnerabilities. It provides a standardized language and taxonomy for describing and categorizing software security weaknesses that can be found in various stages of the software development lifecycle. CWE is maintained by the MITRE Corporation (cwe.mitre.org).
Common Vulnerabilities and Exposures (CVE) is a community-driven dictionary of publicly known information security vulnerabilities and exposures. It provides a standardized naming scheme and unique identifiers for known vulnerabilities, making it easier for organizations and security professionals to track and manage security risks.
97.
You see a value like XXXX XXXX XXXX 1234 in the credit card column of a database table. Which of the following data security techniques was used?
-
Masking
-
Obfuscation
-
Hashing
-
Anonymization
Correct answer: Masking
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:
- Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
- Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is a US government standard for hash functions.
- Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
- Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
- Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
Obfuscation is a more generic term for the removal or replacement of sensitive data. For example, substitution and shuffling are examples of obfuscation. In some cases, such as the (ISC)2 CCSP Certified Cloud Security Professional Official Study Guide, masking is described as a type of obfuscation. Therefore, masking is the more specific answer.
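As a minimal sketch of the masking technique in the question (the card number below is fake):

```python
# Mask all but the last four digits of a card number (fake number below).
def mask_card(number: str) -> str:
    digits = number.replace(" ", "")
    masked = "X" * (len(digits) - 4) + digits[-4:]
    # Regroup into blocks of four for display.
    return " ".join(masked[i:i + 4] for i in range(0, len(masked), 4))

print(mask_card("4111 1111 1111 1234"))  # XXXX XXXX XXXX 1234
```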
98.
Bai is working on moving the company's critical infrastructure to a public cloud provider. She has to ensure that the company complies with the European Union's (EU) General Data Protection Regulation (GDPR) and related country-specific laws, since the cloud provider is the data processor. At what point should she begin discussions with the cloud provider about this specific protection?
-
Data Processing Agreement (DPA) negotiation
-
Establishment of Service Level Agreements (SLA)
-
Configuration of the Platform as a Service (PaaS) windows servers
-
At the moment of reversing their cloud status
Correct answer: Data Processing Agreement (DPA) negotiation
Under the GDPR and the related country-specific requirements, a cloud customer is required to inform the cloud provider that they will be storing personal data (a.k.a. Personally Identifiable Information, or PII) on the provider's servers. This is stated in the DPA, which is more generically called a Privacy Level Agreement (PLA). The cloud provider is a processor because they will be storing or holding the data; it is not necessary for the provider to ever use that data to be considered a processor. So, of the four answer options listed, the first point for discussing this with the cloud provider is the DPA negotiation.
The SLAs are part of contract negotiation, but the DPA is specific to the storage of personal data in the cloud, which is the topic of the question. The configuration of the servers and the removal of data from the cloud provider's environment (reversibility) would also involve concerns about personal data, but the DPA negotiation is a better answer because the question asks at what point Bai should "begin discussions" with the cloud provider.
99.
Frederick works for a medium-sized company as the Chief Information Security Officer (CISO). They use a public Cloud Service Provider (CSP) for their Information Technology (IT) environment. They have built a large Infrastructure as a Service (IaaS) environment as a virtual Data Center (vDC). They did their due diligence and carefully constructed a contract with the CSP. They were able to determine who is responsible for Security Governance, Risk, and Compliance.
Who would that be?
-
Cloud service customer
-
Cloud service provider
-
Cloud service broker
-
Both the customer and the provider
Correct answer: Cloud service customer
In all cloud service types (IaaS, PaaS, SaaS), the roles and responsibilities of Security Governance, Risk, and Compliance fall solely to the cloud service customer, not the CSP (see the ISC2 materials regarding responsibility and accountability in the cloud). The CSP does have to do its own Governance, Risk, and Compliance work, but that is not what the question asks. The exam looks from the customer's perspective at their public cloud provider unless stated differently in the question (some questions take the provider's perspective).
A Cloud Service Broker (CSB) is a third-party intermediary that facilitates the interaction between CSPs and cloud service consumers (organizations or individuals). The role of a cloud service broker is to add value to the cloud computing ecosystem by providing various services that help organizations effectively use and manage cloud services. They are not responsible for the Governance of the cloud service customer. They may assist at some point, but they are not accountable nor responsible.
100.
Damien is working for a real estate company that plans to move to an online document service that would allow their customers to sign contracts no matter what computer platform they own. Interoperability is therefore a critical aspect they are concerned with. What best describes interoperability?
-
The ability for two or more systems to exchange information and mutually use that information
-
The ability for two customers to share the same pool of resources while being isolated from each other
-
The ability of customers to make changes to their cloud infrastructure with minimal input from the cloud provider
-
The ease with which resources can be rapidly expanded as needed by a cloud customer
Correct answer: The ability for two or more systems to exchange information and mutually use that information
Interoperability is defined in ISO/IEC 17788 as the ability for two or more systems to exchange information and mutually use that information. As a simple example, a Windows machine and a Mac can exchange a Word document that both can open and use.
The ability for two customers to share the same pool of resources while being isolated from each other is known as multitenancy.
The ability of customers to make changes to their cloud infrastructure with minimal input from the cloud provider is known as on-demand self-service.
The ease with which resources can be rapidly expanded as needed by a cloud customer is called rapid elasticity.