ISC2 CCSP Exam Questions

Page 6 of 50

101.

Which of the following, published by the Cloud Security Alliance (CSA), provides a detailed framework and approach for handling controls that are pertinent and applicable in a cloud environment?

  • Cloud Controls Matrix (CCM)

  • Consensus Assessment Initiative Questionnaire (CAIQ)

  • National Institute of Standards & Technology (NIST) Special Publication 800-53

  • International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 27017

Correct answer: Cloud Controls Matrix (CCM)

The Cloud Controls Matrix (CCM) outlines a detailed approach for handling controls in a cloud environment. The Cloud Controls Matrix was developed and published by the Cloud Security Alliance. 

The CAIQ is a questionnaire that a Cloud Service Provider (CSP) can fill out to register with the Security, Trust, Assurance, and Risk (STAR) Registry.

NIST and ISO are different organizations than the CSA.

NIST SP 800-53 (the latest revision is 5, which is not something you need to worry about for the exam) is titled "Security and Privacy Controls for Information Systems and Organizations." Overly simplified, it is a list of security controls.

ISO/IEC 27017 is also, overly simplified, a list of security controls. This document is specific to cloud services. Its proper title is "Code of practice for information security controls based on ISO/IEC 27002 for cloud services."

Neither (ISC)2 nor the CSA mentions the other in its materials, so it is unclear whether this exam is still a joint venture between the two organizations. However, that is how it started, so it would not hurt to know about the CCM and CAIQ before you take the exam. The CSA guidance document and their Security as a Service (SecaaS) documents are still good reads in preparation for the exam.

102.

A company using Platform as a Service (PaaS) has discovered that their computing environment has grown very complex. They are looking for a technology that will assist them in managing the deployment and provisioning of all the resources that they now have. 

Which technology can this organization implement to assist the administrators in a more agile and efficient manner than manual management?

  • Orchestration

  • Dynamic host configuration protocol

  • Management plane

  • Measured service

Correct answer: Orchestration

Orchestration enables agile and efficient provisioning and management on demand and at great scale. Common tools used today are Puppet, Chef, Ansible, and Salt.
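The core idea behind these tools is declarative reconciliation: an administrator declares the desired state, and the orchestrator computes and applies the actions needed to converge the environment toward it. Here is a minimal sketch of that idea in Python; all names and data are hypothetical, not the API of any real tool:

```python
# Illustrative sketch of declarative orchestration: the operator declares
# desired state, and the tool reconciles actual state toward it.
# All names here are hypothetical; real tools (Puppet, Chef, Ansible, Salt)
# each have their own configuration languages and APIs.

desired = {"web": 3, "db": 2}             # desired instance counts per role
actual = {"web": 1, "db": 2, "cache": 1}  # what is currently running

def reconcile(desired, actual):
    """Return the provisioning actions needed to reach the desired state."""
    actions = []
    for role, want in desired.items():
        have = actual.get(role, 0)
        if want > have:
            actions.append(("provision", role, want - have))
        elif want < have:
            actions.append(("deprovision", role, have - want))
    for role in actual:
        if role not in desired:
            actions.append(("deprovision", role, actual[role]))
    return actions

print(reconcile(desired, actual))
# [('provision', 'web', 2), ('deprovision', 'cache', 1)]
```

Running the loop repeatedly is what keeps a large environment converged without manual, per-resource administration.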

Dynamic Host Configuration Protocol (DHCP) served a similar purpose in the early days of local area networks: it allows computers to obtain IP addresses dynamically. This is still needed but is insufficient for managing and provisioning cloud assets.

The management plane is the administrators' connection to the cloud. This allows them to configure and manage, but it is not going to automate anything. It is the equivalent of establishing an SSH connection to a router. It is simply a protected connection.

Measured service is what enables cloud service providers to bill for resources their customers consume. 

103.

Which of the following attributes of evidence relates to supporting a certain conclusion?

  • Convincing

  • Authentic

  • Accurate

  • Admissible

Correct answer: Convincing

Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:

  • Authentic: The evidence must be real and relevant to the incident being investigated.
  • Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
  • Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
  • Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
  • Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).

104.

Batu works with the DevOps team. He is an information security professional who has been tasked with ensuring that the software is properly tested. They have added Open Source Software (OSS) to their application. What is the best way to test and validate this OSS?

  • Static Application Security Testing (SAST) tools in conjunction with Interactive Application Security Testing (IAST) tools

  • Dynamic Application Security Testing (DAST) tools in conjunction with Runtime Application Self-Protection (RASP) tools

  • Interactive Application Security Testing (IAST) tools only

  • Static Application Security Testing (SAST) tools only

Correct answer: Static Application Security Testing (SAST) tools in conjunction with Interactive Application Security Testing (IAST) tools

Given that you are utilizing well-known and well-supported Open-Source Software (OSS), performing Static Application Security Testing (SAST) to identify vulnerabilities and then implementing Interactive Application Security Testing (IAST) to detect additional security issues in real time would be the best of these options.

SAST will analyze the lines of code, which is possible since it is open source. This alone is not as good as combining it with IAST.

IAST analyzes the application with visibility to the active lines of code that are in use simultaneously. This alone is not as good as combining it with SAST.

RASP is self-protection that is added to the application. It is not a testing method.

DAST analyzes the running application for vulnerabilities visible to the user and therefore possibly exploitable by a bad actor. This is good to do. However, combining it with RASP for the sake of testing is not the best combination, since RASP is not testing.

105.

Olivia, an information security manager, is working on the Disaster Recovery (DR) team for a medium-sized government contractor. They provide a service for the government that has a requirement of being highly available. Which cloud-based strategy can provide the fastest Recovery Time Objective (RTO) for a critical application in the event of a disaster?

  • Leveraging a cloud provider's infrastructure for real-time replication and failover of the application and data

  • Creating regular backups of the application and data to an on-premises storage system

  • Implementing a hybrid cloud model with a secondary data center for failover and recovery

  • Replicating the application and data to multiple geographically dispersed regions within a cloud provider's infrastructure

Correct answer: Leveraging a cloud provider's infrastructure for real-time replication and failover of the application and data

Leveraging the cloud provider's infrastructure with real-time replication allows for immediate failover in case of a disaster. With real-time replication, the application and data are continuously synchronized between primary and secondary environments, ensuring minimal data loss and the ability to quickly switch to the secondary environment for seamless operation.

Regular backups are always a good idea, and an even better idea is to test those backups. However, the question is about the speed of the recovery work. If the question were about the Recovery Point Objective (RPO), then the data backup strategy would be critical to look at.

A secondary data center is an expensive option, especially when we are trying to leverage the cloud.

Replicating the application and data to multiple geographically dispersed regions is the next best answer. However, the question does not give us specifics that drive us to that answer. So, the more generic "leveraging a cloud provider's infrastructure" is a better answer.

106.

Which of the following describes the cloud's ability to grow over time as demand increases?

  • Scalability

  • Elasticity

  • Agility

  • Mobility

Correct answer: Scalability

  • Elasticity refers to a system’s ability to grow and shrink on demand.
  • Scalability refers to its ability to grow as demand increases.
  • Agility and mobility are not terms used to describe cloud environments.
     

107.

Sebastian is working on the contract negotiations with a cloud provider. One of their concerns is the division of responsibility between them, as the Cloud Customer (CC), and the Cloud Service Provider (CSP). One of the options they are looking at is Platform as a Service (PaaS). 

In the PaaS deployment model, who would be responsible for network controls?

  • Cloud Service Provider (CSP)

  • Cloud Customer (CC)

  • Both customer and provider

  • Cloud regulators

Correct answer: Cloud Service Provider (CSP)

In both the server-based and serverless deployment options within PaaS, the CSP is responsible for the network controls. This is also true for Software as a Service (SaaS).

In Infrastructure as a Service (IaaS), it would be shared. The CSP is responsible for the physical network controls. The CC is responsible for the virtual network controls.

Cloud regulators are not responsible. They may assess controls. They may assess fines. But they are not responsible.

108.

Amal is the CIO of Acme Inc. Amal and the information security team are working with the information technology (IT) team to determine if they should move from an on-premises data center into an Infrastructure as a Service (IaaS) virtual data center. Amal wants to determine whether cloud computing is the right solution in this case. 

Which technique will BEST help Amal determine if migrating the on-premises data center to the cloud is a good business decision? 

  • Cost-benefit analysis

  • Proof of concept

  • Return on investment calculation for the IaaS platform 

  • Business impact analysis 

Correct answer: Cost-benefit analysis 

Any organization considering moving from an on-premises solution to the cloud should first perform a cost-benefit analysis to ensure that the decision makes sense for the company. 

If the cost-benefit analysis looks good, then the other answer options can be pursued. Arguably, you need cloud expertise to perform a proper cost-benefit analysis, but assembling a full team of experts is probably more than is needed before the initial analysis.

If the cost-benefit analysis looks favorable, use cloud experts to put together a proof of concept trial to ensure that the technology will work properly for the business. 

While calculating return on investment (ROI) is useful, calculating the IaaS ROI without considering the data center's costs and benefits (including ROI) is not the best choice. 

Business impact analysis (BIA) focuses on identifying the business impact if an asset, system, or process is degraded or lost. 

109.

An organization is building a new data center. They need to ensure that proper heating and cooling are implemented. What is the recommended minimum and maximum temperature for a data center?

  • 64.4-80.6 degrees F/18-27 degrees C

  • 60.1-75.2 degrees F/15-24 degrees C

  • 62.2-81.0 degrees F/16-27 degrees C

  • 59.5-79.5 degrees F/15-26 degrees C

Correct answer: 64.4-80.6 degrees F/18-27 degrees C

According to ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), the recommended temperature range for a data center is a minimum of 64.4 degrees F and a maximum of 80.6 degrees F, which is 18-27 degrees C.

You may need this for the test. A common question is "Do I need to learn the other measurement scale?" (If I know Fahrenheit, do I have to learn Celsius, and vice versa?) If it is on the test, you'll want to know both measurements.
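Rather than memorizing both ranges separately, you can derive one from the other with the standard conversion (F = C × 9/5 + 32); the ASHRAE endpoints line up exactly:

```python
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

# The ASHRAE recommended range endpoints (rounded to one decimal place
# to sidestep floating-point noise):
print(round(c_to_f(18), 1))  # 64.4
print(round(c_to_f(27), 1))  # 80.6
```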

110.

A Chief Executive Officer (CEO) has tasked their Chief Information Security Officer (CISO) with updating the policy that informs users what they are allowed to use cloud storage for, covering both corporate and personal use. 

This acceptable use policy is an example of what type of policy?

  • Functional

  • Guideline

  • Baseline

  • Procedure

Correct answer: Functional

Acceptable use policies are an example of functional policies. Functional policies set guiding principles for individual business functions and activities. NIST says that you should have fewer than 20 of these, and ISACA says you should have 24 or fewer. Examples would be data security, identity and access management (IAM), acceptable use, and business continuity management (BCM).

The organizational level policy should show management's commitment to security, where the functional policies then start to explore certain topics. 

Standards, baselines, procedures, and guidelines are then used to detail how the corporation will fulfill the functional policies. Baselines detail technical configurations, and the procedures say how to do something step by step.

111.

A software development company is looking for a way to be able to identify the third-party and open-source software components that are in their software. What can they use?

  • Software Composition Analysis

  • Application Security Verification Standard 

  • Interactive Application Security Testing

  • Penetration testing

Correct answer: Software Composition Analysis

Software Composition Analysis (SCA) is a security practice that involves the identification and analysis of third-party and open-source software components used in a software application. SCA helps organizations understand and manage the security risks associated with the software components they utilize, including known vulnerabilities, licensing issues, and potential code vulnerabilities.
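At its simplest, an SCA tool takes the application's declared dependency list and checks each component and version against a database of known advisories. The sketch below illustrates that matching step in Python; the dependency names, versions, and advisory data are all invented for illustration (real tools query feeds such as the NVD or vendor advisory databases):

```python
# Toy sketch of Software Composition Analysis: match declared dependencies
# against known-vulnerable versions. All component names, versions, and
# advisory data below are hypothetical, for illustration only.

dependencies = {"libexample": "1.2.0", "samplelib": "3.4.1"}

# Hypothetical advisory data: component -> set of vulnerable versions
advisories = {"libexample": {"1.1.0", "1.2.0"}}

def scan(deps, advisories):
    """Return (component, version) pairs with known vulnerabilities."""
    return [(name, ver) for name, ver in deps.items()
            if ver in advisories.get(name, set())]

print(scan(dependencies, advisories))  # [('libexample', '1.2.0')]
```

Real SCA tools add much more on top of this core lookup: version-range matching, transitive dependency resolution, and license analysis.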

The Application Security Verification Standard (ASVS) is a comprehensive framework and set of guidelines developed by the Open Web Application Security Project (OWASP). ASVS provides a standardized methodology for evaluating the security of web applications and APIs. It serves as a valuable resource for organizations, security professionals, and developers to assess the security posture of their applications and implement necessary security controls.

Interactive Application Security Testing (IAST) is a dynamic application security testing technique that combines elements of both static analysis and dynamic analysis to identify security vulnerabilities in software applications. IAST aims to provide more accurate and comprehensive security testing by analyzing an application's behavior during runtime.

Penetration testing, also known as ethical hacking or pen testing, is a security assessment technique that involves simulating real-world attacks on software systems to identify vulnerabilities and assess their potential impact. The goal of penetration testing is to identify security weaknesses before malicious attackers can exploit them.

112.

Yamin works for a Cloud Service Provider (CSP) as a technician in one of their data centers. She has been setting up the Fibre Channel equipment over the last week. What part of the cloud is she building?

  • Storage

  • Compute

  • Network

  • Cabling

Correct answer: Storage

The three things that must be built to create a cloud data center are Compute, Network, and Storage. Storage is where the data will be at rest. This involves building Storage Area Networks (SANs). There are two primary SAN protocols: Fibre Channel and the IP-based Internet Small Computer System Interface (iSCSI). What also needs to be decided is how the storage is allocated: as block storage, file storage, raw storage, etc.

Compute is the computation capability that comes along with a computer. That could be a virtual server or a virtual desktop infrastructure (VDI).

The network element is the ability to transmit data to or from storage, to or from the compute elements, and out of the cloud to other destinations. This involves both physical networks and the virtual networks created within a server.

Cables are needed to connect all the physical equipment together. There are even virtual cables within Infrastructure as a Service (IaaS) environments. Cabling is part of the network element, though.

113.

Which of the following focuses on personally identifiable information (PII) as it pertains to financial institutions? 

  • Gramm-Leach-Bliley Act (GLBA)

  • Health Insurance Portability and Accountability Act (HIPAA)

  • General Data Protection Regulation (GDPR)

  • Sarbanes-Oxley (SOX)

Correct answer: Gramm-Leach-Bliley Act (GLBA)

The Gramm-Leach-Bliley Act is a U.S. act officially named the Financial Modernization Act of 1999. It focuses on PII as it pertains to financial institutions, such as banks. 

HIPAA is a U.S. regulation concerned with the privacy of protected health information (PHI) held by healthcare facilities and other covered entities. 

GDPR is an EU-specific regulation that encompasses organizations across all industries. 

SOX is a U.S. regulation about protecting financial data. 

114.

Which of the following best practices supports vulnerability and patch management?

  • Scheduled downtime and maintenance

  • Isolated network

  • Random SPOF generation and storage 

  • Robust access controls

Correct answer: Scheduled downtime and maintenance

Some best practices for designing, configuring, and securing cloud environments include:

  • Redundancy: A cloud environment should not include single points of failure (SPOFs) where the outage of a single component brings down a service. High availability and duplicate systems are important to redundancy and resiliency.
  • Scheduled Downtime and Maintenance: Cloud systems should have scheduled maintenance windows to allow patching and other maintenance to be performed. This may require a rotating maintenance window to avoid downtime.
  • Isolated Network and Robust Access Controls: Access to the management plane should be isolated using access controls and other solutions. Ideally, this will involve the use of VPNs, encryption, and least privilege access controls.
  • Configuration Management and Change Management: Systems should have defined, hardened default configurations, ideally using infrastructure as code (IaC). Changes should only be made via a formal change management process.
  • Logging and Monitoring: Cloud environments should have continuous logging and monitoring, and vulnerability scans should be performed regularly.

115.

An organization has just completed the design phase of developing their Business Continuity and Disaster Recovery (BC/DR) plan. What is the next step for this organization?

  • Implement the plan

  • Test the plan

  • Revise

  • Assess risk 

Correct answer: Implement the plan

The steps of developing a BC/DR plan are as follows: define scope, gather requirements, analyze, assess risk, design, implement, test, report, and finally, revise. Once an organization has completed the design phase, they are ready to implement their BC/DR plan. Even though the plan has already gone through design, it will likely require some changes (both technical and policy-wise) during implementation. The key here is that the word "implement" is used in many different ways. To people who work in a production environment, it means that "it" is placed into production, whatever "it" may be. However, when we are dealing with BC/DR, the alternate site or cloud must be built before it can be tested, which hopefully all occurs before we need it.

116.

An information security professional has been asked to review a piece of completed software to ensure that there are no defects and that the code is free of bugs. What phase of the software development lifecycle is currently being described?

  • Testing 

  • Development

  • Analysis

  • Maintenance 

Correct answer: Testing

During the testing phase of the SDLC, the completed code is reviewed for problems. It is checked to ensure that it is functioning and operating as expected, which includes having quality assurance check the software for defects and bugs. During testing, the code is also checked with security scans to ensure that it is secure.

The development phase should include testing: as soon as there are lines of code, they can be analyzed with Static Application Security Testing (SAST). In the question, though, it says "completed software," implying this phase is over.

Analysis is not a commonly used name for this phase, but it comes close. If testing were not an option, it could have been the right answer.

Maintenance means the software is in production, and testing will (or should) occur before patches or changes are deployed. But again, the key to the question is "completed software." It leaves room for the possibility that the software has not been deployed yet, so we are in the testing phase.

117.

Server and data center redundancy are solutions designed to primarily address which of the following?

  • Resiliency

  • Maintenance

  • Interoperability

  • Reversibility

Correct answer: Resiliency

Some important cloud considerations have to do with its effects on operations. These include:

  • Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
  • Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure.
  • Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
  • Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
  • Reversibility: Reversibility refers to the ability to recover from a change that went wrong. For example, how difficult it is to restore original operations after a transition to an outsourced service.
  • Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization’s cloud environment relies too much on a provider’s unique implementation or the provider doesn’t offer easy export, the company may be stuck with that provider due to vendor lock-in.
  • Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers’ environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
  • Outsourcing: Using cloud environments requires handing over control of a portion of an organization’s infrastructure to a third party, which introduces operational and security concerns.

118.

Which of the following is MOST relevant to an organization's network of applications and APIs in the cloud?

  • Service Access

  • User Access

  • Privilege Access

  • Physical Access

Correct answer: Service Access

Key components of an identity and access management (IAM) policy in the cloud include:

  • User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
  • Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
  • Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.

Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.

119.

A hospital has identified a nurse who has been breaking a data policy. When the nurse has a few minutes of free time, they use non-administrative credentials to browse patient records. The nurse has been using this information to blackmail some of these patients. 

What term describes this nurse? 

  • Internal threat

  • APT

  • MitM

  • Rogue administrator 

Correct answer: Internal threat

The nurse in the example is an insider threat who has become a malicious insider. A malicious insider is any user with legitimate network or system access who uses their access for purposes other than those authorized. Malicious insiders are regularly listed as one of the top sources of breaches and compromises. The best way to mitigate the risk of the malicious insider is to implement active monitoring and auditing. 

A rogue administrator is a specific type of insider threat with elevated privileges. The nurse in this case did not have elevated privileges. 

An advanced persistent threat (APT) is a serious threat that usually originates from one nation-state attacking another. The "advanced" part refers to the level of sophistication in the coding and deployment of the malicious software; the "persistent" part refers to the malware remaining in place and functioning over a long period. A commonly used example of an APT is Stuxnet.

A man-in-the-middle (MitM) attack would exist between a sender and a receiver. The nurse is not between two parties in a transmission but is browsing data.

120.

Which essential characteristic of the cloud says that an organization only pays for what it uses rather than maintaining dedicated servers, operating systems, virtual machines, and so on?

  • Measured service

  • On-demand self-service

  • Broad network access

  • Multi-tenancy

Correct answer: Measured service

Measured service means that Cloud Service Providers (CSP) bill for resources consumed. With a measured service, everyone pays for the resources they are using. 
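Measured service can be illustrated with a simple metered bill: usage is sampled per resource and multiplied by per-unit rates, so the customer pays only for what was consumed. The rates and usage figures below are invented purely for illustration:

```python
# Toy metered-billing calculation illustrating "measured service":
# the customer pays only for resources actually consumed.
# All rates and usage figures are hypothetical.

rates = {"vm_hours": 0.05, "gb_storage": 0.02, "gb_egress": 0.09}
usage = {"vm_hours": 720, "gb_storage": 100, "gb_egress": 50}

def bill(usage, rates):
    """Sum per-unit charges for each metered resource."""
    return sum(usage[k] * rates[k] for k in usage)

print(f"${bill(usage, rates):.2f}")  # $42.50
```

Contrast this with a dedicated on-premises server, which costs the same whether it runs at 5% or 95% utilization.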

On-demand self-service means that the user/customer/tenant can go to a web portal, select their service, configure it, and get it up and running without interaction with the CSP.

Broad network access means that as long as the user/customer/tenant has access to the network the cloud is on, they will be able to use that service using standard mechanisms.

Multi-tenancy is a characteristic that exists with all cloud deployment models (public, private, and community). It means that there are multiple users/customers/tenants using the same physical server. The hypervisor has the responsibility of isolating them from each other. In a private cloud, the different users or tenants would be different business units or different projects. A good read is the free ISO/IEC standard 17788. Pay particular attention to the definition of multi-tenancy.