Think You're Ready?

Your Final Exam Before the Final Exam.
Dare to Take It?

A company receives reports about misconfigurations and vulnerabilities in a third-party hardware device that is part of its released products. Which of the following solutions is the best way for the company to identify possible issues at an earlier stage?

A. Performing vulnerability tests on each device delivered by the providers

B. Performing regular red-team exercises on the vendor production line

C. Implementing a monitoring process for the integration between the application and the vendor appliance.

D. Implementing a proper supply chain risk management program.

D.   Implementing a proper supply chain risk management program.

Explanation:
The core issue involves third-party hardware devices that are part of the company's released products. To identify misconfigurations and vulnerabilities at an earlier stage (i.e., before the products are released or deployed), the company needs a proactive, systematic approach to manage risks introduced by suppliers and vendors.

Why D is Correct:
A supply chain risk management (SCRM) program is a comprehensive framework designed to:

Assess vendors before procurement (e.g., evaluate their security practices, development lifecycle, and testing protocols).

Establish contractual requirements for security (e.g., requiring vendors to provide Software Bill of Materials (SBOMs), undergo audits, or share vulnerability disclosures).

Integrate security checks early in the supply chain (e.g., during design and manufacturing phases rather than after delivery).

Monitor for vulnerabilities specific to third-party components (e.g., subscribing to vendor security advisories).

This proactive approach helps identify and mitigate issues earlier in the product lifecycle, reducing the risk of releasing vulnerable products.

Why the other options are incorrect:

A) Performing vulnerability tests on each device delivered by the providers:
This is a reactive measure. Testing devices after they are delivered is too late—it occurs at the end of the supply chain. It also does not scale well and may not catch all issues (e.g., firmware vulnerabilities).

B) Performing regular red-team exercises on the vendor production line:
This is impractical and often not feasible. Vendors are unlikely to allow external red-team exercises on their production systems. Red-teaming is typically used for internal security assessments, not supply chain oversight.

C) Implementing a monitoring process for the integration between the application and the vendor appliance:
This is useful for detecting runtime issues but is reactive. It occurs after integration and does not address vulnerabilities inherent in the hardware device itself before it is integrated.

Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It emphasizes the importance of supply chain risk management as a proactive strategy to identify and mitigate vulnerabilities introduced by third-party components, aligning with frameworks like NIST SP 800-161.

The material findings from a recent compliance audit indicate a company has an issue with excessive permissions. The findings show that employees changing roles or departments results in privilege creep. Which of the following solutions are the best ways to mitigate this issue? (Select two).

A. Implementing a role-based access policy

B. Designing a least-needed privilege policy

C. Establishing a mandatory vacation policy

D. Performing periodic access reviews

E. Requiring periodic job rotation

A.   Implementing a role-based access policy
D.    Performing periodic access reviews

Explanation:
The core problem identified is privilege creep due to employees changing roles. This means users accumulate permissions over time because old access rights are not removed when they are no longer needed for their new position. The solutions must directly address this accumulation and ensure permissions align with current job functions.

Why A is Correct (Implementing a role-based access policy):
Role-Based Access Control (RBAC) is a fundamental solution to this exact problem. Instead of assigning permissions directly to users, permissions are assigned to roles (e.g., "Accountant," "Marketing Manager"). Users are then assigned to these roles. When an employee changes departments, their old role is simply removed, and their new role is assigned. This automatically revokes the old permissions and grants the new, appropriate ones, effectively preventing privilege creep by design.
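The mechanism described above can be sketched in a few lines. This is a minimal illustration (the role names, permissions, and user are hypothetical, not from the source): permissions attach only to roles, so reassigning a user's role automatically revokes the old rights.

```python
# Minimal RBAC sketch: permissions belong to roles, never to users directly.
ROLE_PERMISSIONS = {
    "accountant": {"ledger:read", "ledger:write"},
    "marketing_manager": {"campaigns:read", "campaigns:write"},
}

user_roles = {"alice": "accountant"}

def permissions_for(user: str) -> set:
    """A user's access is derived solely from their current role."""
    return ROLE_PERMISSIONS.get(user_roles.get(user), set())

# Alice moves to marketing: swapping the role drops the old ledger rights,
# so no privilege creep accumulates across the transition.
user_roles["alice"] = "marketing_manager"
assert "ledger:write" not in permissions_for("alice")
```

The key design point is that no permission is ever granted to a user record itself, so there is nothing left behind to forget about when the role changes.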

Why D is Correct (Performing periodic access reviews):
Even with RBAC in place, processes can break down. Periodic user access reviews (also known as recertification) are a critical administrative control to catch and correct privilege creep. In these reviews, managers or system owners periodically attest to whether their employees' current access levels are still appropriate for their job functions. This process proactively identifies and removes excessive permissions that may have been missed during a role transition.
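The review process above amounts to diffing each user's actual grants against what their current role should allow. A toy sketch (user, role expectations, and grants are hypothetical) of how a recertification tool might flag creep:

```python
# Periodic access review sketch: anything granted beyond the current
# role's expected set is flagged as privilege creep for removal.
expected = {"bob": {"hr:read"}}                     # what Bob's role needs
actual = {"bob": {"hr:read", "ledger:write"}}       # leftover from a prior role

def excess_permissions(user: str) -> set:
    return actual.get(user, set()) - expected.get(user, set())

print(excess_permissions("bob"))  # flags 'ledger:write' for revocation
```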

Why the Other Options Are Incorrect:

B. Designing a least-needed privilege policy:
While the principle of least privilege is the ultimate goal, this option describes a concept or principle, not an actionable solution to the problem of privilege creep. Implementing RBAC (Option A) is how you operationalize and enforce a least privilege policy. Therefore, A is a more direct and specific solution.

C. Establishing a mandatory vacation policy:
This is a detective control primarily used to uncover fraud (e.g., requiring an employee to take vacation forces someone else to perform their duties, potentially revealing fraudulent activity). It does not directly address the procedural issue of permissions not being removed during role changes.

E. Requiring periodic job rotation:
Job rotation is a security practice used to reduce the risk of fraud and collusion and to cross-train employees. It would actually exacerbate the problem of privilege creep, as more employees changing roles would lead to even more accumulated permissions if a proper process (like RBAC and access reviews) is not in place to manage the transitions.

Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests knowledge of identity and access management (IAM) best practices, specifically the controls used to implement and maintain the principle of least privilege and prevent authorization vulnerabilities like privilege creep. RBAC and access recertification are cornerstone practices for any mature IAM program.

A security engineer needs to secure the OT environment based on the following requirements:

• Isolate the OT network segment

• Restrict Internet access.

• Apply security updates to workstations

• Provide remote access to third-party vendors

Which of the following design strategies should the engineer implement to best meet these requirements?

A. Deploy a jump box on the third party network to access the OT environment and provide updates using a physical delivery method on the workstations

B. Implement a bastion host in the OT network with security tools in place to monitor access and use a dedicated update server for the workstations.

C. Enable outbound internet access on the OT firewall to any destination IP address and use the centralized update server for the workstations

D. Create a staging environment on the OT network for the third-party vendor to access and enable automatic updates on the workstations.

B.   Implement a bastion host in the OT network with security tools in place to monitor access and use a dedicated update server for the workstations.

Explanation:
Let's evaluate how option B meets each requirement:

Isolate the OT network segment:
A bastion host (or jump server) acts as a single, hardened entry point into the OT network. This maintains isolation by ensuring all external access funnels through a tightly controlled gateway, preventing direct connections to critical OT assets.

Restrict Internet access:
The bastion host does not require general internet access for the entire OT network. Internet access can be restricted to only what is necessary (e.g., for the bastion host or update server to fetch updates), and the dedicated update server can be configured to pull updates in a controlled manner (e.g., from a trusted source).

Apply security updates to workstations:
A dedicated update server within the OT network can be used. This server can be periodically updated (via a secure process, such as manual transfer from an internet-connected system) and then distribute patches to OT workstations without requiring them to have direct internet access.

Provide remote access to third-party vendors:
The bastion host is specifically designed for secure remote access. Third-party vendors can connect to the bastion host (with strong authentication and monitoring), and from there, access only the specific OT systems they are authorized to manage.
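The four requirements can be expressed as a default-deny boundary policy. A minimal sketch in nftables-style syntax, assuming illustrative addresses (203.0.113.0/24 for the vendor network, 10.10.0.10 for the bastion host, 10.10.0.20 for the update server, 10.10.1.0/24 for OT workstations — none of these come from the source):

```text
# Hypothetical OT boundary firewall policy (sketch, not a tested config)
table inet ot_boundary {
    chain forward {
        type filter hook forward priority 0; policy drop;   # default deny = isolation

        # Vendors reach only the bastion host, over SSH, and nothing else
        ip saddr 203.0.113.0/24 ip daddr 10.10.0.10 tcp dport 22 accept

        # Workstations pull patches only from the dedicated update server
        ip saddr 10.10.1.0/24 ip daddr 10.10.0.20 tcp dport 8530 accept
    }
}
```

The `policy drop` default is what enforces both isolation and the internet restriction; the two `accept` rules are the only sanctioned paths in or around the segment.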

Why the other options are incorrect:

A) Deploy a jump box on the third-party network...:
Placing the jump box on the third-party network (instead of the OT network) exposes it to external risks and may not adequately isolate the OT environment. Using physical delivery (e.g., USB drives) for updates is inefficient, insecure (risk of malware introduction), and not scalable.

C) Enable outbound internet access...
to any destination IP: This violates the "restrict internet access" requirement. Allowing unrestricted internet access from the OT network exposes it to significant threats and is a major security anti-pattern for OT environments.

D) Create a staging environment...
and enable automatic updates: A staging environment for vendors does not necessarily ensure isolation or secure access. Enabling automatic updates on OT workstations is risky because:

It may disrupt critical operations (updates must be tested in OT).

It requires internet access, violating the restriction requirement.

Automatic updates can introduce instability or unvetted changes.

Reference:
This aligns with Domain 1.0: Security Architecture (secure network design for OT/ICS). Using a bastion host and a dedicated update server is a best practice for maintaining OT isolation while enabling controlled access and patch management.

A security engineer wants to reduce the attack surface of a public-facing containerized application. Which of the following will best reduce the application's privilege escalation attack surface?

A. Implementing the following commands in the Dockerfile: RUN echo user:x:1000:1000:user:/home/user:/dev/null > /etc/passwd

B. Installing an EDR on the container's host with reporting configured to log to a centralized SIEM and implementing the following alerting rule: IF PROCESS_USER=root ALERT_TYPE=critical

C. Designing a multicontainer solution, with one set of containers that runs the main application, and another set of containers that perform automatic remediation by replacing compromised containers or disabling compromised accounts

D. Running the container in an isolated network and placing a load balancer in a public-facing network. Adding the following ACL to the load balancer: PERMIT HTTPS from 0.0.0.0/0 port 443

A.    Implementing the following commands in the Dockerfile: RUN echo user:x:1000:1000:user:/home/user:/dev/null > /etc/passwd


Explanation:
The goal is to reduce the privilege escalation attack surface for a containerized application. Privilege escalation in containers often occurs when an attacker gains access to a container running as the root user (UID 0) and then exploits vulnerabilities to elevate privileges on the host or within the container.

Why A is Correct:
This Dockerfile command creates a non-root user (user with UID 1000) and sets it as the default user for the container. By running the application as a non-root user, you:

Minimize the impact of compromise:
If an attacker breaches the container, they have limited privileges (non-root) by default.

Reduce privilege escalation risks:
It is harder to escalate to root within the container if the entrypoint or application does not run as root.

This is a foundational Docker security best practice and directly targets the privilege escalation attack surface.
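As a sketch of the same best practice (base image, user name, and paths are illustrative, not from the source), the more idiomatic Dockerfile pattern is to create the account with `useradd` and switch to it with `USER`, rather than overwriting `/etc/passwd` directly:

```dockerfile
# Hypothetical example: run the application as UID 1000 instead of root
FROM python:3.12-slim
RUN useradd --uid 1000 --create-home --shell /usr/sbin/nologin appuser
USER appuser
WORKDIR /home/appuser/app
COPY --chown=appuser:appuser . .
CMD ["python", "main.py"]
```

Every process started from this image inherits UID 1000 by default, so a compromise of the application does not hand the attacker root inside the container.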

Why the other options are incorrect:

B) Installing an EDR on the host and configuring alerting for root processes:
This is a detective control, not a preventive one. It may alert you when privilege escalation occurs but does nothing to reduce the attack surface or prevent it. Additionally, EDR on the host does not directly protect the container's internal runtime.

C) Designing a multicontainer solution with automatic remediation:
While automatic remediation can help respond to incidents, it is reactive. It does not prevent privilege escalation from occurring in the first place. This approach adds complexity without directly addressing the root cause (running as root).

D) Running the container in an isolated network and using a load balancer with ACLs:
This reduces the network attack surface (e.g., limiting inbound traffic) but does nothing to mitigate privilege escalation within the container itself. An attacker who exploits an application vulnerability (e.g., via HTTPS) can still escalate privileges if the container runs as root.

Reference:
This aligns with Domain 3.0: Security Engineering and Cryptography (container security). The principle of least privilege is critical for securing containers. Running applications as a non-root user is a primary recommendation from Docker and CIS benchmarks to minimize escalation risks.

An incident response team is analyzing malware and observes the following:

• Does not execute in a sandbox

• No network IOCs

• No publicly known hash match

• No process injection method detected

Which of the following should the team do next to proceed with further analysis?

A. Use an online virus analysis tool to analyze the sample

B. Check for an anti-virtualization code in the sample

C. Utilize a newly deployed machine to run the sample.

D. Search other internal sources for a new sample.

C.   Utilize a newly deployed machine to run the sample.

Explanation:
The malware analysis has hit a dead end because the sample:

Does not execute in a sandbox:
It may have anti-sandboxing techniques.

No network IOCs:
It might not activate its network capabilities in the analysis environment.

No publicly known hash match:
It is likely a new or unknown variant.

No process injection method detected:
It may be using a novel technique or require specific conditions to trigger.

To proceed, the team needs to observe the malware's behavior in an environment where it will execute fully. A newly deployed machine (e.g., a clean, isolated VM or physical system that mimics a real user environment) can bypass anti-sandbox checks and may allow the malware to reveal its true behavior, including network calls, process injection, or other IOCs.

Why the other options are incorrect:

A) Use an online virus analysis tool:
This is redundant. The team already has the sample and likely used similar tools (e.g., VirusTotal) to get the "no publicly known hash match" result. Repeating this won't help.

B) Check for anti-virtualization code in the sample:
While this is a valid step, it is something the team should do before running the sample. Since they already know it doesn't execute in a sandbox, they should now move to an environment that bypasses these checks (like a real machine).
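As a toy illustration of what "checking for anti-virtualization code" can look like in practice, an analyst might scan the sample's extracted strings for well-known hypervisor and sandbox artifacts. The artifact names below are common examples chosen for illustration, not drawn from the source:

```python
# Hypothetical string scan for anti-virtualization indicators in a sample.
VM_ARTIFACTS = ["vbox", "vmware", "qemu", "sbiedll"]

def virtualization_indicators(strings):
    """Return any extracted strings that reference known VM/sandbox artifacts."""
    return [s for s in strings if any(a in s.lower() for a in VM_ARTIFACTS)]

sample_strings = ["kernel32.dll", "VBoxGuest.sys", "SbieDll.dll"]
print(virtualization_indicators(sample_strings))  # the two VM-related strings
```

Hits like these would confirm why the sample refuses to run in a sandbox and justify moving to bare metal or a hardened VM.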

D) Search other internal sources for a new sample:
This might help if the sample is corrupted, but the issue is likely environmental (the malware detects analysis). Finding another copy won't solve the execution problem.

Reference:
This aligns with Domain 2.0: Security Operations (malware analysis). When malware evades automated analysis, analysts must use more advanced techniques, such as running it in a realistic but isolated environment to capture its behavior.

A software engineer is creating a CI/CD pipeline to support the development of a web application. The DevSecOps team is required to identify syntax errors. Which of the following is the most relevant to the DevSecOps team's task?

A. Static application security testing

B. Software composition analysis

C. Runtime application self-protection

D. Web application vulnerability scanning

A.   Static application security testing

Explanation:
The DevSecOps team's task is to identify syntax errors in the code as part of the CI/CD pipeline.

Why A is Correct:
Static Application Security Testing (SAST) is a white-box testing method that analyzes source code for flaws before the application is compiled or run. It is designed to detect:

Syntax errors (e.g., missing semicolons, incorrect language constructs).

Security vulnerabilities (e.g., SQL injection, buffer overflows).

Coding standard violations.

SAST tools (e.g., SonarQube, Checkmarx) integrate directly into the CI/CD pipeline to scan code as it is committed, making them ideal for catching syntax errors early in the development process.
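At its simplest, the static "parse before you run" step that such tools perform can be sketched with Python's standard-library parser (this is a minimal illustration of the concept, not how SonarQube or Checkmarx is implemented):

```python
import ast

def syntax_error_report(source: str):
    """Statically parse source text; return an error summary or None if clean."""
    try:
        ast.parse(source)
        return None
    except SyntaxError as exc:
        return f"line {exc.lineno}: {exc.msg}"

print(syntax_error_report("x = 1"))              # None: parses cleanly
print(syntax_error_report("if True\n    pass"))  # reports the missing colon
```

A CI job running a check like this fails the pipeline on the commit that introduced the error, which is exactly the shift-left behavior the explanation describes.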

Why the other options are incorrect:

B) Software composition analysis (SCA):
SCA focuses on identifying vulnerabilities in third-party libraries and dependencies, not syntax errors in the custom code.

C) Runtime application self-protection (RASP):
RASP is a security technology that runs on the server and protects applications during execution (e.g., blocking attacks in real-time). It does not analyze code for syntax errors.

D) Web application vulnerability scanning:
This typically refers to Dynamic Application Security Testing (DAST), which tests running applications for vulnerabilities (e.g., OWASP Top 10). It occurs after deployment and cannot detect syntax errors in the source code.

Reference:
This aligns with Domain 2.0: Security Operations (DevSecOps integration). SAST is the primary tool for identifying syntax errors and security flaws in source code during the CI/CD pipeline, supporting shift-left security practices.

An engineering team determines the cost to mitigate certain risks is higher than the asset values The team must ensure the risks are prioritized appropriately. Which of the following is the best way to address the issue?

A. Data labeling

B. Branch protection

C. Vulnerability assessments

D. Purchasing insurance

D.   Purchasing insurance

Explanation:
This scenario presents a classic risk management decision. When the cost to mitigate a risk (e.g., implementing a technical control, hiring additional staff, purchasing new hardware) exceeds the value of the asset itself, it is financially impractical to mitigate the risk directly. In such cases, the optimal risk response is to transfer the financial burden of the risk to a third party.

Risk Transfer:
Purchasing insurance is the primary method of transferring financial risk. The organization pays a premium to an insurance company. If the risk is realized (e.g., a data breach, system failure, or natural disaster causes loss), the insurance policy covers some or all of the financial damages. This allows the organization to prioritize its resources on mitigating risks where the cost-benefit analysis is favorable, while still managing the high-cost, low-probability risks through financial means.

Prioritization:
By transferring the risk, the team is effectively prioritizing it appropriately. They are acknowledging the risk exists but are choosing the most cost-effective strategy to handle its potential impact, rather than ignoring it or spending excessive resources on it.
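The cost comparison driving this decision can be made explicit. A toy sketch (the figures and the simplified decision rule are hypothetical, for illustration only):

```python
# Toy risk-response comparison: when controls cost more than the asset is
# worth, transferring the risk (insurance) is the rational choice.
def best_response(asset_value: int, mitigation_cost: int, annual_premium: int) -> str:
    if mitigation_cost > asset_value and annual_premium < mitigation_cost:
        return "transfer (insurance)"
    return "mitigate"

print(best_response(asset_value=50_000, mitigation_cost=120_000, annual_premium=8_000))
# -> transfer (insurance): paying $8k beats spending $120k to protect a $50k asset
```

Real risk programs weigh likelihood and annualized loss expectancy as well, but the inequality above captures the scenario in the question.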

Analysis of Incorrect Options:

A. Data labeling:
Data labeling is a data governance and security control. It involves tagging data with classifications (e.g., Public, Confidential, Restricted) to ensure it is handled and protected according to its sensitivity. While this is a crucial security practice, it is a form of risk mitigation or avoidance. If the cost of implementing and maintaining data labeling for this specific asset is already deemed too high, this option does not solve the financial dilemma presented in the question.

B. Branch protection:
Branch protection is a specific feature in version control systems like Git. It enforces workflows for collaborative development by restricting who can push to certain branches, requiring pull requests, and mandating status checks before merging. This is a technical control designed to mitigate risks related to code integrity and security (e.g., introducing vulnerabilities, breaking builds). Like data labeling, it is a form of risk mitigation whose cost may have already been factored into the team's determination that mitigation is too expensive.

C. Vulnerability assessments:
A vulnerability assessment is the process of identifying, classifying, and prioritizing weaknesses in a system. This is a foundational step in risk identification, not risk response. The question states that the risks have already been identified and analyzed ("cost to mitigate... is higher than the asset values"). Conducting another assessment does nothing to address the chosen response to the risk; it only re-discovers the same problem.

Reference:
This concept falls directly under Domain 1.0: Governance, Risk, and Compliance of the CAS-005 exam objectives, specifically focusing on risk assessment, analysis, and response strategies. It aligns with standard risk management frameworks like NIST SP 800-37 (RMF) and ISO 27005, which define the four risk responses:

Avoid:
Eliminate the risk entirely by discontinuing the activity.

Transfer:
Shift the risk to a third party (e.g., insurance).

Mitigate:
Implement controls to reduce the likelihood or impact of the risk.

Accept:
Acknowledge the risk and monitor it without taking action.

A security architect wants to develop a baseline of security configurations. These configurations will be applied automatically when a machine is created. Which of the following technologies should the security architect deploy to accomplish this goal?

A. Short

B. GASB

C. Ansible

D. CMDB

C.   Ansible

Explanation:
The requirement is to automatically apply a security baseline to machines as soon as they are created. This process is a cornerstone of Infrastructure as Code (IaC) and DevSecOps, aiming for consistent, secure, and repeatable deployments.

Ansible is an automation tool used for configuration management, application deployment, and orchestration. It works by defining desired system states in human-readable YAML files called "playbooks." A playbook can contain a set of security baselines (e.g., disabling unnecessary services, configuring firewall rules, applying specific security policies).

Automation:
These playbooks can be triggered automatically by orchestration tools (like Jenkins, GitLab CI/CD) the moment a new machine is provisioned. This ensures that every new system is configured identically and securely from the very beginning, eliminating manual setup errors and "configuration drift."
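As an illustrative sketch (the host group and the specific hardening tasks are hypothetical, not from the source), a baseline playbook of the kind described might look like:

```yaml
# Hypothetical security-baseline playbook, run automatically at provisioning
- name: Apply security baseline to newly created machines
  hosts: new_machines
  become: true
  tasks:
    - name: Disable the telnet service
      ansible.builtin.service:
        name: telnet
        state: stopped
        enabled: false

    - name: Forbid SSH root login
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: PermitRootLogin no
```

Because Ansible is declarative and idempotent, rerunning the playbook simply re-asserts the desired state, which is also how configuration drift gets corrected over time.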

Analysis of Incorrect Options:

A. Short:
This is not a recognized technology or standard in the context of IT security, configuration management, or compliance. It is likely a distractor.

B. GASB:
This stands for the Governmental Accounting Standards Board. It is an organization that establishes accounting and financial reporting standards for U.S. state and local governments. It is entirely unrelated to technical security configuration or system automation.

D. CMDB:
A Configuration Management Database (CMDB) is a centralized repository that stores information about the hardware and software components (Configuration Items or CIs) within an organization and the relationships between them. Its primary purpose is service management and visibility.

While a CMDB might store information about the baseline configuration of a system, it is not an automation tool. It cannot apply configurations to a newly created machine. It is used for tracking, auditing, and understanding dependencies, not for enforcement.

Reference:
This concept falls under Domain 2.0: Security Architecture and Domain 4.0: Security Operations of the CAS-005 exam objectives. It specifically addresses:

Implementing secure designs across different domains (2.2)

Automating security operations (4.3) through tools like configuration management (Ansible, Puppet, Chef, SaltStack) to ensure consistency and enforce security baselines.

The core concept here is using automation and configuration management tools to enforce state, which is a fundamental principle of modern secure architecture.

An organization wants to create a threat model to identify vulnerabilities in its infrastructure. Which of the following should be prioritized first?

A. External-facing Infrastructure with known exploited vulnerabilities

B. Internal infrastructure with high-severity and known exploited vulnerabilities

C. External-facing infrastructure with a low risk score and no known exploited vulnerabilities

D. External-facing infrastructure with a high risk score that can only be exploited with local access to the resource

A.   External-facing Infrastructure with known exploited vulnerabilities

Explanation:
The core principle of prioritization in vulnerability management and threat modeling is risk. Risk is a function of Threat, Vulnerability, and Impact. When prioritizing remediation efforts, the highest priority should be given to vulnerabilities that are:

Being actively exploited in the wild (High Threat):
The "known exploited" factor means attackers are currently using this vulnerability to compromise systems. This is no longer a theoretical risk; it is an active one.

On externally facing assets (High Impact Potential):
External-facing systems are accessible from the internet, making them low-hanging fruit for a vast number of attackers around the globe. They have a much larger attack surface than internal systems.

Option A combines these two critical factors:
it is external-facing and has known exploited vulnerabilities. This represents an immediate and severe danger to the organization, as it is highly likely to be targeted and successfully breached with minimal effort by an attacker. Therefore, it must be the absolute highest priority in any threat modeling or remediation effort.
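The ordering logic above can be stated as a simple sort key: known-exploited status dominates, then internet exposure. The field names and sample records below are illustrative only:

```python
# Prioritization sketch: KEV status first, external exposure second.
vulns = [
    {"id": "internal-kev",        "external": False, "kev": True},
    {"id": "external-kev",        "external": True,  "kev": True},
    {"id": "external-local-only", "external": True,  "kev": False},
]

# False sorts before True, so "not kev" puts exploited findings first.
ranked = sorted(vulns, key=lambda v: (not v["kev"], not v["external"]))
print([v["id"] for v in ranked])  # external-kev ranks highest
```

This mirrors the question's answer: the external-facing, known-exploited finding outranks both the internal KEV item and the external item that needs local access.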

Analysis of Incorrect Options:

B. Internal infrastructure with high-severity and known exploited vulnerabilities:
While this is a high-priority item due to the "known exploited" factor, it is still less urgent than an external-facing system with the same flaw. Internal systems typically reside behind network security controls (firewalls, segmentation), which act as a compensating control and reduce the immediate threat. An attacker would first need to breach the perimeter to reach this system. An external system is the perimeter; breaching it is often the attacker's primary goal.

C. External-facing infrastructure with a low risk score and no known exploited vulnerabilities:
The "low risk score" and "no known exploited vulnerabilities" indicate that, while the system is external, the specific vulnerabilities are not currently a prime target for attackers. It should be remediated according to a standard patch cycle, but it does not demand immediate, emergency attention like Option A.

D. External-facing infrastructure with a high risk score that can only be exploited with local access to the resource:
This is a contradiction. If a vulnerability requires local access to exploit, its risk is dramatically lowered on an external-facing system. For an external attacker to exploit it, they would first need to gain local access through some other means (e.g., exploiting a different vulnerability). This makes it a much more complex attack chain. The "high risk score" is likely based on the CVSS impact metrics, but the local access requirement (often a high CVSS Attack Complexity or Privileges Required metric) is the key mitigating factor that reduces its real-world urgency compared to a remotely exploitable, known-exploited vulnerability.

Reference:
This prioritization aligns directly with guidance from leading cybersecurity authorities:

CISA's Known Exploited Vulnerabilities (KEV) Catalog:
CISA mandates federal agencies to prioritize and remediate vulnerabilities listed on its KEV catalog, emphasizing that these should be the highest priority due to their active exploitation.

NIST SP 800-40 Rev. 4 (Guide to Enterprise Patch Management Technologies):
Recommends prioritizing patches based on the severity of the vulnerability and whether it is being actively exploited.

MITRE ATT&CK Framework:
Understanding common attack vectors shows that externally facing services are the most frequent initial access points for attackers.

This logic falls under Domain 1.0:
Governance, Risk, and Compliance and Domain 3.0: Security Engineering of the CAS-005 exam, focusing on risk analysis, vulnerability management, and secure system design.

A global manufacturing company has an internal application that is critical to making products. This application cannot be updated and must be available in the production area. A security architect is implementing security for the application. Which of the following best describes the action the architect should take?

A. Disallow wireless access to the application.

B. Deploy Intrusion detection capabilities using a network tap

C. Create an acceptable use policy for the use of the application

D. Create a separate network for users who need access to the application

D.   Create a separate network for users who need access to the application

Explanation:
The scenario describes a legacy, business-critical system that is fragile ("cannot be updated") and operates in a sensitive industrial environment ("production area"). The core security challenge is protecting a vulnerable asset that cannot be patched.

Network Segmentation is the most effective security control in this situation. By creating a separate, isolated network (often called a VLAN or a purpose-built network segment) exclusively for this application and its users, the architect significantly reduces its attack surface.

Isolation:
This segmented network can be firewalled off from the corporate network, the internet, and other non-essential systems. This prevents threats from spreading to it and contains any potential compromise of the application itself. It ensures that only authorized users and systems can communicate with it, which is crucial for availability.

Compensating Control:
Since the application itself cannot be hardened (updated), security must be implemented around it. Network segmentation acts as a powerful compensating control to mitigate the risk posed by its unpatched vulnerabilities.
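A minimal sketch of what enforcing that segment boundary can look like, assuming hypothetical addresses (operators on 10.20.5.0/24, the legacy application at 10.30.0.15 on port 8080) and an IOS-style ACL at the segment edge:

```text
! Hypothetical IOS-style ACL: only the operators' subnet reaches the legacy app
ip access-list extended LEGACY-APP-IN
 permit tcp 10.20.5.0 0.0.0.255 host 10.30.0.15 eq 8080
 deny   ip any host 10.30.0.15 log
```

Everything outside the authorized subnet is dropped and logged, so unpatched vulnerabilities in the application are unreachable from the wider corporate network.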

Analysis of Incorrect Options:

A. Disallow wireless access to the application.
While this might be a component of the overall strategy (wireless networks can introduce risk), it is not a complete solution. The primary threat to an unpatchable application isn't just wireless; it's any network-based threat from the larger corporate network, such as malware or unauthorized access attempts. This option is too narrow and does not address the broader network isolation requirement.

B. Deploy Intrusion detection capabilities using a network tap.
An Intrusion Detection System (IDS) is a monitoring and alerting tool. While valuable for visibility, it is a passive control. It can tell you an attack is happening or has happened, but it does not prevent the attack from reaching the vulnerable application. For a system that cannot be patched, prevention is far more critical than detection.

C. Create an acceptable use policy for the use of the application.
Policies are important for defining rules and user expectations, but they are administrative controls. They are not technical enforcement mechanisms. A policy cannot technically prevent a malware infection from exploiting an unpatched vulnerability in the application. It relies on users following the rules and is unenforceable against malicious software or attackers.

Reference:
This approach is a fundamental principle of the Defense-in-Depth strategy and aligns with best practices from frameworks like:

NIST Cybersecurity Framework (CSF):
Specifically the "Protect" function (PR.AC-5: Network integrity is protected, incorporating network segregation where appropriate).

NIST SP 800-82:
Guide to Industrial Control Systems (ICS) Security, which heavily emphasizes segmenting critical OT (Operational Technology) networks from corporate IT networks.

MITRE ATT&CK Mitigations:
Network Segmentation (M1030) is a primary mitigation tactic to contain and isolate adversary movement.

This question falls under Domain 3.0: Security Engineering of the CAS-005 exam, focusing on implementing secure infrastructure designs and segmentation strategies to protect critical assets.
