ISC2 CCSP Exam Questions


61.

An engineer is adding validation processes to an application that will check that session tokens are being submitted by the valid and original obtainer of the token. What OWASP Top 10 vulnerability is this engineer mitigating by doing so?

  • Identification and authentication failures

  • Insecure design

  • Vulnerable and outdated components

  • Injection

Correct answer: Identification and authentication failures

The OWASP Top 10 is a regularly updated list of the most critical web application vulnerabilities and risks. Identification and authentication failures, formerly known as broken authentication, refers to the ability of an attacker to hijack a session token and use it to gain unauthorized access to an application. This risk can be mitigated by adding validation processes that ensure session tokens are being submitted by the valid and original obtainer of the token.
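
To make the mitigation concrete, here is a minimal sketch of binding a session token to the client that obtained it. It assumes an in-memory session store and a fingerprint built from the client's IP address and user agent; the function names and storage are hypothetical, not a specific framework's API.

```python
import hashlib
import hmac
import secrets

# Hypothetical in-memory session store: token -> fingerprint of the original client.
SESSIONS: dict[str, str] = {}

def client_fingerprint(ip_address: str, user_agent: str) -> str:
    """Derive a stable fingerprint from attributes of the client that obtained the token."""
    return hashlib.sha256(f"{ip_address}|{user_agent}".encode()).hexdigest()

def issue_session(ip_address: str, user_agent: str) -> str:
    """Issue a new session token bound to the requesting client."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = client_fingerprint(ip_address, user_agent)
    return token

def validate_session(token: str, ip_address: str, user_agent: str) -> bool:
    """Reject tokens presented by a client other than the original obtainer."""
    expected = SESSIONS.get(token)
    if expected is None:
        return False
    return hmac.compare_digest(expected, client_fingerprint(ip_address, user_agent))
```

A real application would also set expirations and rotate tokens, but the core idea is that a token presented by a different client is rejected.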

Insecure design is exactly what it says: security was not designed in. We need to shift left and build security into our software from the start, which includes performing threat modeling and using reference architectures.

Vulnerable and outdated components refers to the widespread practice of pulling objects, functions, libraries, APIs, and other code from sources such as Git repositories, GitHub, and GitLab. Much of this code is abandoned or not kept up to date.

Injection includes SQL and command injection. It occurs when a bad actor submits malicious input through the application's user interface and the back end interprets that input as commands or queries. Input validation helps to minimize this risk. Injection held the top position on the Top 10 for over a decade before falling to third place in the 2021 list.
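
As a brief illustration of input handling, here is a sketch using Python's standard sqlite3 module: the parameterized form treats user input strictly as data, which is the usual first defense against SQL injection.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: string concatenation lets input like "'; DROP TABLE users; --"
    # rewrite the query.
    # conn.execute("SELECT id, name FROM users WHERE name = '" + username + "'")

    # Safer pattern: a parameterized query treats the input strictly as data, not SQL.
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```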

62.

An organization's backup frequency is MOST closely related to which of the following?

  • RPO

  • RTO

  • MTD

  • RSL

Correct answer: RPO

A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:

  • Recovery Time Objective (RTO): The RTO is the maximum amount of time that an organization is willing to have a particular system down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
  • Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations (see the sketch after this list).
  • Recovery Service Level (RSL): The RSL measures the percentage of normal production compute capacity that must be available during a disaster, which allows non-essential environments such as development and testing to be shut down.
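
A minimal sketch of the backup-frequency relationship mentioned above, with made-up numbers: the worst-case data loss equals the time since the last backup, so the backup interval must not exceed the RPO.

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the time since the last backup, so the
    backup interval must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

print(meets_rpo(backup_interval_hours=6, rpo_hours=4))  # False: could lose up to 6 hours
print(meets_rpo(backup_interval_hours=4, rpo_hours=4))  # True: loss capped at 4 hours
```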

63.

Which of the following network security controls is used to manage access to certain critical or sensitive resources?

  • Network Security Groups

  • Traffic Inspection

  • Geofencing

  • Zero Trust Network

Correct answer: Network Security Groups

Network security controls that are common in cloud environments include:

  • Network Security Groups: Network security groups (NSGs) limit access to certain resources, such as firewalls or sensitive VMs and databases. This makes it more difficult for an attacker to reach these resources during an attack (see the sketch after this list).
  • Traffic Inspection: In the cloud, traffic monitoring can be complex since traffic is often sent directly to virtual interfaces. Many cloud environments have traffic mirroring solutions that allow an organization to see and analyze all traffic to its cloud-based resources.
  • Geofencing: Geofencing limits the locations from which a resource can be accessed. This is a helpful security control in the cloud, which is accessible from anywhere.
  • Zero Trust Network: Zero trust networks apply the principle of least privilege, where users, applications, systems, etc. are only granted the access and permissions that they need for their jobs. All requests for access to resources are individually evaluated, so an entity can only access those resources for which they have the proper permissions.
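
Below is a minimal, provider-agnostic sketch of the NSG idea from the first bullet: a default-deny rule set that only allows listed source ranges to reach a sensitive port. The rule structure, subnet, and port are hypothetical and do not reflect any particular cloud vendor's API.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class NsgRule:
    """One hypothetical network-security-group rule: allow a source range to reach a port."""
    source_cidr: str
    destination_port: int

# Example policy: only the admin subnet may reach the database port.
DB_RULES = [NsgRule(source_cidr="10.0.5.0/24", destination_port=5432)]

def is_allowed(rules: list[NsgRule], source_ip: str, port: int) -> bool:
    """Default-deny evaluation: traffic passes only if some rule explicitly allows it."""
    return any(
        ip_address(source_ip) in ip_network(rule.source_cidr) and port == rule.destination_port
        for rule in rules
    )

print(is_allowed(DB_RULES, "10.0.5.17", 5432))    # True: admin subnet reaching the database
print(is_allowed(DB_RULES, "203.0.113.9", 5432))  # False: external address is blocked
```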

64.

Ines is working with the Disaster Recovery (DR) team. They have determined that they can tolerate losing at most the last two hours' worth of data at the most critical point in the work day/year. At the least critical point, they could tolerate losing 24 hours' worth of data. They have settled on the most cost-effective backup solution that will ensure they do not lose more than four hours of data. 

What have they defined?

  • Recovery Point Objective (RPO)

  • Recovery Time Objective (RTO)

  • Maximum Tolerable Downtime (MTD)

  • Service Delivery Objective (SDO)

Correct answer: Recovery Point Objective (RPO)

A Recovery Point Objective (RPO) is defined in terms of data loss, not a recovery time window; it defines the maximum quantity of data loss that may be tolerated during a disaster recovery incident. Increasing the backup frequency decreases this value. 

The RTO is the time window that the team has to do the work of bringing the recovery site online.

The MTD is a combination of the RTO plus time needed for emergency evacuations, life-safety issues, chaos, damage assessment, and so on. That is the total amount of time that a server or service can be offline before causing a great deal of damage to the business.

The SDO is the recovery level. Once a failover has occurred to the backup systems/cloud/site, the environment must be functional to a certain level to be useful to the business. Functionality is not normally expected to be completely normal on the backup systems, so an SDO might be set at around 80%. That would mean something like the following: if the server normally processes 100 calls an hour, it must be able to process 80, or the business is likely to still experience a great deal of damage.

65.

Tristan, a security engineer at Acme Inc., is reviewing a list of all the components in a recent software program Acme Inc. developers created. 

What term BEST describes this type of list? 

  • SBOM

  • OSS

  • SCA

  • SAST

Correct answer: SBOM

A software bill of materials (SBOM) lists all components used as part of a software product. 

Open-source software (OSS) is any software that uses an open-source license. 

Software composition analysis (SCA) technology focuses on identifying software dependencies and creating inventories. 

Static application security testing (SAST) analyzes source code for bad coding practices and vulnerabilities. 

66.

Which of the following types of testing looks for vulnerabilities that could cause the software to exhibit unexpected behavior?

  • Abuse Testing

  • Unit Testing

  • Integration Testing

  • Regression Testing

Correct answer: Abuse Testing

Abuse testing is when software is tested to see that it properly handles unexpected, malformed, or malicious inputs. It verifies that software not only performs correctly when used correctly but also is secure and robust when something unexpected happens.
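
A short sketch of what an abuse (negative) test can look like in practice, using pytest against a hypothetical parse_age function: malformed and hostile inputs must fail safely rather than be accepted.

```python
import pytest

def parse_age(value: str) -> int:
    """Hypothetical function under test: must reject malformed or hostile input."""
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

@pytest.mark.parametrize("bad_input", ["", "abc", "-5", "9999", "1; DROP TABLE users"])
def test_rejects_unexpected_input(bad_input):
    # Abuse test: unexpected input must fail safely instead of being accepted.
    with pytest.raises(ValueError):
        parse_age(bad_input)

def test_accepts_valid_input():
    # Conventional (positive) test: correct usage still works.
    assert parse_age("42") == 42
```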

Unit, integration, and regression testing verify that the software meets requirements and exhibits desirable behavior when used correctly.

67.

Daniel is working at a relatively new software company that has succeeded in building an application that serves a critical need in his market vertical. In deciding how customers will access this application, which is offered as a Software as a Service (SaaS) product, the company has determined that each customer needs to verify its users and then communicate the level of privileges each user should have within the individual accounts. 

What solution would work the best here?

  • Open Identification (OpenID) and Open Authorization (OAuth) together

  • Open Identification (OpenID) alone will handle what the customer needs

  • Security Assertion Markup Language (SAML) with Open Identification (OpenID)

  • Web Services Federation (WS-Federation) combined with Security Assertion Markup Language (SAML)

Correct answer: Open Identification (OpenID) and Open Authorization (OAuth) together

OpenID can be used to identify and authenticate each user, and OAuth can then be used to specify the level of privileges each user has. The full procedure is Identification, Authentication, Authorization, Accountability (IAAA): OpenID handles identification and authentication, while OAuth handles authorization.

SAML and WS-Federation are two more protocols that perform identification and authentication. So, combining SAML with OpenID or with WS-Federation does not provide a complete solution, and OpenID by itself does not either.

To accomplish the needs presented in the scenario, both authentication and authorization are needed.
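
As a rough illustration of the division of labor, here is a sketch in which OpenID-verified claims establish who the user is and OAuth scopes establish what they may do. The claims dict and scope names are hypothetical, and a real deployment would rely on an OpenID Connect/OAuth library to validate the tokens themselves.

```python
# Conceptual sketch: in a real deployment an OpenID Connect library validates the ID token's
# signature, issuer, audience, and expiry; here a hypothetical pre-verified claims dict and
# scope set stand in for that machinery.

def authorize_request(claims: dict, granted_scopes: set[str], required_scope: str) -> str:
    """OpenID supplies *who* (the verified claims); OAuth scopes supply *what* they may do."""
    if "sub" not in claims:
        return "401 Unauthorized"   # authentication failed: no verified identity
    if required_scope not in granted_scopes:
        return "403 Forbidden"      # authenticated, but not authorized for this action
    return f"200 OK: acting as {claims['sub']}"

# Example: the user is authenticated via OpenID but only holds a read scope.
print(authorize_request({"sub": "user-123"}, {"reports:read"}, "reports:read"))   # 200 OK
print(authorize_request({"sub": "user-123"}, {"reports:read"}, "reports:write"))  # 403 Forbidden
```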

68.

In log management, what defines which categories of events are and are NOT written into logs?

  • Clipping level

  • Transparency level

  • Quality level

  • Retention level

Correct answer: Clipping level

Many systems and apps allow you to customize what data is written to log files based on the importance of the data. The clipping level determines which events, such as user authentication events, informational system messages, and system restarts, are written in the logs and which are ignored. Clipping levels are used to ensure that the correct logs are being accounted for. They are commonly called thresholds.
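
As a simple illustration, Python's standard logging module behaves this way: the configured level acts like a clipping level, so events below the threshold are never written.

```python
import logging

# The configured level acts as a clipping threshold: only events at WARNING or above
# are written; informational messages below the threshold are ignored.
logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")

logging.info("User viewed dashboard")      # below the clipping level: not written
logging.warning("Repeated failed logins")  # at/above the threshold: written
logging.error("System restart required")   # written
```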

Transparency refers to the visibility of something, quality to how good something is, and retention to how long something is kept. None of these terms is used with the word "level" in log management, so this question comes down to recognizing the term clipping level.

69.

Structured and unstructured storage pertain to which of the three cloud service models?

  • Platform as a Service (PaaS)

  • DataBase as a Service (DBaaS)

  • Infrastructure as a Service (IaaS)

  • Software as a Service (SaaS)

Correct answer: Platform as a Service (PaaS)

Each cloud service model uses a different method of storage as shown below:

  • Platform as a Service (PaaS) uses the terms structured and unstructured to refer to different storage types.
  • Infrastructure as a Service (IaaS) uses the terms volume and object to refer to different storage types.
  • Software as a Service (SaaS) uses the terms content and file storage and information storage and management to refer to different storage types.

The use of these terms originates with the Cloud Security Alliance (CSA), and it would be a good idea to read the CSA Security Guidance document. As of the time this question was written in 2022, the current version of the guidance was version 4, with version 5 expected soon.

DBaaS is not one of the three cloud service models.

70.

Deco is the information security professional for an organization that specializes in market research for an athletic supply company. A great deal of information needs to be processed to determine which products are of most interest in which gyms around the world. The corporation has developed a data lake that is stored in the cloud in a Platform as a Service, server-based system. It is necessary to ensure that the data is protected from changes, deletions, or leaks. Deco is creating the processes needed to protect that data and is determining who will be responsible for it, from assigning its classification to deciding which security controls need to be in place around it. 

This individual would be known as which of the following?

  • Data owner

  • Data processor

  • Data custodian

  • Data controller

Correct answer: Data owner

A data owner is the party that maintains full responsibility and ownership of data. Data owners determine the appropriate controls that are necessary to protect that data, including its classification.

The data controller determines if and how personal data can be collected and how long it can be stored, basically at a policy level.

The data custodian is the party in possession of the data and responsible for its day-to-day handling and safekeeping, which can include IT staff, end users, senior staff, and others.

The data processor handles data (processes and stores it) within its systems on behalf of the data controller; the data processor is not an employee of the data controller. Think of a payroll company that handles payroll for a small business: the separate payroll company is the data processor. Processing includes holding or storing data, so cloud providers are data processors when personal data is stored with them. These terms come from the GDPR in Europe.

71.

Which of the following aspects of data retention policies is the MOST relevant to restoring from DR/BC backups? 

  • Archiving and Retrieval Procedures and Mechanisms

  • Data Classification

  • Regulatory Requirements

  • Retention Periods

Correct answer: Archiving and Retrieval Procedures and Mechanisms

Data retention policies define how long an organization stores particular types of data. Some of the key considerations for data retention policies include:

  • Retention Periods: Defines how long data should be stored. This usually refers to archived data rather than data in active use.
  • Regulatory Requirements: Various regulations have rules regarding data retention. These may mandate that data only be retained for a certain period or that data be kept for a minimum time. Typically, the first applies to personal data, while the second applies to business and financial data or security records.
  • Data Classification: The classification level of data may impact its retention period or how the data should be stored and secured.
  • Retention Requirements: In some cases, specific requirements may exist for how data should be stored. For example, sensitive data should be encrypted at rest. Data retention may also be impacted by legal holds.
  • Archiving and Retrieval Procedures and Mechanisms: Different types of data may have different requirements for storage and retrieval. For example, data used as backups as part of a business continuity/disaster recovery (BC/DR) policy may need to be more readily accessible than long-term records. 
  • Monitoring, Maintenance, and Enforcement: Data retention policies should have rules regarding when and how the policies will be reviewed, updated, audited, and enforced.

72.

A software development corporation has built an Infrastructure as a Service (IaaS) environment for their software developers to use when building their products. When a virtual machine is running, the software developer will use that platform to build and test their code. The running machines require a type of storage that allows the operating system the ability to store temp files and use as a swap space. 

What type of storage is used for that?

  • Ephemeral 

  • Structured 

  • Object

  • Volume 

Correct answer: Ephemeral

Cloud storage comes in many shapes and flavors. The storage used by virtual machines to temporarily store files and to use for swap space is called ephemeral. Ephemeral means temporary or fleeting: it disappears when the virtual machine shuts down and is not intended for persistent storage.

Persistent storage includes structured, object, volume, unstructured, block, etc. 

Each cloud service model uses a different method of storage as shown below:

  • Software as a Service (SaaS): content and file storage, information storage and management
  • Platform as a Service (PaaS): structured, unstructured, or block and blob
  • Infrastructure as a Service (IaaS): volume, object

Structured is confusing because it is used to describe both a type of data (databases) and a type of data storage; they are not the same thing. Structured storage is a specific allocation of storage space. Block storage is a type of structured storage that allocates space in fixed-size units (e.g., 16 KB blocks).

A volume is analogous to the C: drive on a computer. It is often allocated using block storage.

Objects are files. Object storage is a flat (non-hierarchical) file store, and objects can have metadata attached to them. Underneath, object stores ultimately persist data on volumes or blocks.

73.

Which of the following is NOT an example of a functional security requirement in the cloud?

  • Availability

  • Portability

  • Interoperability

  • Vendor lock-in

Correct answer: Availability

Functional requirements refer to aspects of a system, device, or user that are necessary for it to do its job. Common examples of functional security requirements in the cloud are portability, interoperability, and vendor lock-in. Availability is a nonfunctional requirement. 

74.

Rashid has been working with his customer to understand the Indicators of Compromise (IoCs) that they have seen within their Security Information and Event Manager (SIEM). The logs show that a bad actor infiltrated the organization through a phishing email. Once inside, the bad actor traversed the network until they gained access to a firewall. From the firewall, the bad actor assumed the role the firewall had to access the database and then copied the database. 

This is an example of which type of threat?

  • Data breach

  • Command injection

  • Advanced persistent threat (APT)

  • Account hijacking

Correct answer: Data breach

A data breach occurs when data is leaked or stolen, either intentionally or unintentionally. This is not an Advanced Persistent Threat (APT). An APT requires an advanced level of skill from bad actors who are usually attacking on behalf of one nation state against another. 

Account hijacking is a step along the way when the bad actor assumed the role that the firewall had to access the database. The whole attack was for the purpose of stealing the data, which is a data breach. 

Command injection occurs when a bad actor enters a command into an input field and the server passes that input to the operating system for execution. It is similar to SQL injection.
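
A brief sketch of avoiding command injection in Python: passing arguments as a list keeps the input out of a shell, and validating the value first narrows what can reach the command at all. The hostname check here is deliberately simplistic.

```python
import subprocess

def ping_host(host: str) -> int:
    # Vulnerable pattern: shell interpretation lets "example.com; rm -rf /" run extra commands.
    # subprocess.run(f"ping -c 1 {host}", shell=True)

    # Safer pattern: validate the input, then pass arguments as a list so no shell parses it.
    if not host.replace(".", "").replace("-", "").isalnum():
        raise ValueError("invalid hostname")
    return subprocess.run(["ping", "-c", "1", host], check=False).returncode
```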

75.

Thian is performing a risk assessment with his information security team for a hospital. They have determined the likelihood and probable impact level of the most serious problems they believe they are susceptible to. Which of the following statements regarding responding to risk is FALSE?

  • There is never an appropriate scenario in which to accept a risk

  • Organizations may opt to implement procedures and controls to ensure that a specific risk is never realized

  • An organization can transfer risk via insurance policies to cover financial costs of successful exploits

  • Risk mitigation typically depends on the results of a cost-benefit analysis

Correct answer: There is never an appropriate scenario in which to accept a risk

There are times when a company may choose to simply accept a risk rather than act on it. This is often done when the cost of mitigating the risk outweighs the cost of dealing with the consequences if the risk were to occur. 

A company can opt to implement procedures and controls to ensure that a specific risk is never realized. Nothing is ever perfect, but that can certainly be their goal.

Insurance policies are a common risk transference option, intended to cover the financial costs of successful exploits. It is necessary to ensure that the conditions of the policy are understood; a policy may well state that if the company does not implement the appropriate controls, the insurer will not cover the costs of the exploitation.

Performing a risk assessment allows a corporation to understand the likelihood and expected impact of specific scenarios. Based on that, an appropriate risk mitigation can be chosen based on a cost-benefit analysis.
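
A small worked example of that cost-benefit comparison using the standard quantitative formula ALE = SLE × ARO; the dollar figures are made up for illustration.

```python
def annualized_loss_expectancy(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Standard quantitative risk formula: ALE = SLE x ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Illustrative numbers only: a breach costing $200,000 expected once every four years.
ale = annualized_loss_expectancy(200_000, 0.25)  # $50,000 per year
control_cost_per_year = 80_000

# If the control costs more per year than the expected loss, accepting the risk
# can be the rational choice; otherwise mitigation or transference is justified.
print("accept risk" if control_cost_per_year > ale else "mitigate risk")
```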

76.

Which of the following is PRIMARILY a concern in multi-cloud environments?

  • Interoperability

  • Resiliency

  • Availability

  • Performance

Correct answer: Interoperability

Some important cloud considerations have to do with its effects on operations. These include:

  • Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
  • Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure.
  • Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
  • Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
  • Reversibility: Reversibility refers to the ability to recover from a change that went wrong. For example, how difficult it is to restore on-site operations after a transition to an outsourced service (like a cloud provider).
  • Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization’s cloud environment relies too much on a provider’s unique implementation or the provider doesn’t offer easy export, the company may be stuck with that provider due to vendor lock-in.
  • Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers’ environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
  • Outsourcing: Using cloud environments requires handing over control of a portion of an organization’s infrastructure to a third party, which introduces operational and security concerns.

77.

Shai is the information security manager responsible for the build and deployment of a particular server into a server-based Platform as a Service (PaaS). This particular server handles very specialized data and has a requirement for security, isolation, and specialized configurations. What deployment option would be best for this situation?

  • Standalone host

  • Load balanced server cluster

  • Redundant servers with Dynamic Resource Scheduling (DRS)

  • A server cluster with Dynamic Optimization (DO)

Correct answer: Standalone host

A cloud standalone host is a virtual machine (VM) or physical server that operates independently in a cloud environment without being part of a larger cluster. It is a single, self-contained computing instance. Standalone hosts are particularly useful when there is a need for dedicated and independent computing resources for workloads or applications that require high performance, isolation, or specialized configurations. They provide a flexible and customizable environment within the cloud infrastructure, allowing organizations to meet their unique computing requirements.

Redundant servers and server clusters do not isolate systems. The systems/servers/VMs work together to ensure continued function. Load balancing, server clusters, redundant servers, and DO and DRS all have the prime goal of ensuring availability. A standalone host has more of a confidentiality intention.

78.

Rogelio is working with the deployment team to deploy 50 new servers as virtual machines (VMs). The servers that he will be deploying will be a combination of different Operating Systems (OS) and Databases (DB). When deploying these images, it is critical to make sure...

  • That the golden images are always used for each deployment

  • That the VMs are updated and patched as soon as they are deployed

  • That the VM images are pulled from a trusted external source

  • That the golden images are used and then patched as soon as it is deployed

Correct answer: That the golden images are always used for each deployment

The golden image is the current, up-to-date image that is ready for deployment into production. If an image needs patching, it should be patched offline, and the improved version then becomes the new golden image. Patching servers after they are deployed is not the best practice; patching the image offline is the advised path.

The golden image should be built within a business, not pulled from an external source, although there are exceptions. It is critical to know the source of the image (IT or security) and to make sure that it is being maintained and patched on a regular basis.
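
A hypothetical sketch of enforcing the golden-image rule at deployment time: every requested server must reference the currently approved image ID, and anything else is flagged for rebuild. The image names and IDs are invented for illustration.

```python
# Hypothetical sketch: enforce at deployment time that every VM is built from an
# approved golden image; the image IDs here are made up for illustration.

GOLDEN_IMAGES = {
    "ubuntu-22.04": "img-0a1b2c3d",  # hardened, patched offline, re-published as golden
    "postgres-15": "img-9f8e7d6c",
}

def validate_deployment(requested_images: dict[str, str]) -> list[str]:
    """Return the names of any servers not using the current golden image."""
    return [
        name for name, image_id in requested_images.items()
        if GOLDEN_IMAGES.get(name) != image_id
    ]

violations = validate_deployment({"ubuntu-22.04": "img-0a1b2c3d", "postgres-15": "img-stale01"})
print(violations)  # ['postgres-15'] -> this server must be rebuilt from the golden image
```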

79.

An audit must have parameters to ensure the efforts are focused on relevant areas that can be effectively audited. Setting these parameters for an audit is commonly known as which of the following?

  • Audit scope restrictions

  • Audit remediation

  • Audit objectives

  • Audit policy

Correct answer: Audit scope restrictions

Audit scope restrictions refer to the process of defining parameters for an audit. The rationale for audit scope restrictions is that audits are costly and often require the involvement of highly skilled content experts. Additionally, system auditing can impair system performance, and in some situations necessitate the shutdown of production systems. Carefully crafted scope constraints can help ensure that production systems are not harmed.

Audit objectives would cover the reason for the audit and what they want to know as a result of the audit.

Audit remediation could be the recommendations that the auditor provides after the audit assessment is complete. These would be based on the auditor's findings. A finding is something the auditor identifies that does not match the requirements defined by the audit objectives.

The policy would contain management's goals and objectives on the topic of audits.

80.

The cloud enables operations in geographically dispersed places and increases hardware and data redundancy. What is the end result of this in terms of disaster recovery and business continuity?

  • Lower Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO)

  • Higher Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO)

  • Lower Recovery Point Objectives (RPO) and Recovery Service Level (RSL)

  • Lower Recovery Time Objectives (RTO) and Higher Recovery Point Objectives (RPO)

Correct answer: Lower Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO)

The capacity to operate in geographically remote locations and to provide increased hardware and data redundancy results in lower Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for disaster recovery and business continuity. It is easier to bring new or replacement systems up in other regions after a major disaster. The RTO is the amount of time it takes to bring a system online; with images, you can simply spin up a new instance on a different server as long as you have a copy of the image. Backing up data can also be easier, which reduces the RPO, the amount of data the business can tolerate losing.

The Recovery Service Level (RSL) measures the percentage of the total production service level that needs to be restored to meet BCDR objectives.