Think You're Ready?

Your Final Exam Before the Final Exam.
Dare to Take It?

A systems engineer is configuring a system baseline for servers that will provide email services. As part of the architecture design, the engineer needs to improve performance of the systems by using an access vector cache, facilitating mandatory access control and protecting against:

• Unauthorized reading and modification of data and programs

• Bypassing application security mechanisms

• Privilege escalation

• interference with other processes

Which of the following is the most appropriate for the engineer to deploy?

A. SELinux

B. Privileged access management

C. Self-encrypting disks

D. NIPS

A.   SELinux

Explanation:

The requirements specify the need for:

Improving performance using an access vector cache:
This is a feature of Security-Enhanced Linux (SELinux) that caches access decisions to reduce overhead.

Facilitating mandatory access control (MAC):
SELinux implements MAC, which enforces security policies based on labels (e.g., types, roles) beyond traditional discretionary access control (DAC).

Protecting against:

Unauthorized reading/modification of data and programs:
SELinux confines processes to least privilege, preventing unauthorized access.

Bypassing application security mechanisms:
SELinux policies restrict applications to their intended behavior.

Privilege escalation:
SELinux limits the ability of processes to gain higher privileges.

Interference with other processes:
Isolation via SELinux domains prevents processes from affecting each other.

Why SELinux (A) is the most appropriate:

SELinux directly provides all the required features:
MAC, access vector cache (for performance), and protection against the listed threats through its policy enforcement.

Why other options are incorrect:

B) Privileged access management (PAM):
PAM focuses on managing and monitoring privileged accounts (e.g., sudo, admin logins) but does not provide system-wide MAC or an access vector cache.

C) Self-encrypting disks (SED):
SED protects data at rest via encryption but does not enforce process isolation, prevent privilege escalation, or use an access vector cache.

D) Network intrusion prevention system (NIPS):
NIPS monitors network traffic for threats but operates at the network layer, not the system level. It cannot enforce MAC or protect against local process interference.

Reference:
This aligns with Domain 1.0: Security Architecture (system hardening) and Domain 3.0: Security Engineering and Cryptography (access controls). SELinux is a standard for enforcing least privilege and MAC on Linux systems, making it ideal for securing email servers.

Emails that the marketing department is sending to customers are pomp to the customers' spam folders. The security team is investigating the issue and discovers that the certificates used by the email server were reissued, but DNS records had not been updated. Which of the following should the security team update in order to fix this issue? (Select three.)

A. DMARC

B. SPF

C. DKIM

D. DNSSEC

E. SASC

F. SAN

G. SOA

H. MX

A.   DMARC
B.   SPF
C.   DKIM

Explanation:
The issue is that marketing emails are being marked as spam due to a certificate reissue and outdated DNS records. This strongly indicates a problem with email authentication mechanisms that rely on DNS records. The core protocols for email authentication are:

SPF (Sender Policy Framework):
Uses a DNS TXT record to list all IP addresses authorized to send email for a domain. If the email server's IP changed or the record is incorrect, SPF validation will fail.

DKIM (DomainKeys Identified Mail):
Uses a DNS TXT record to publish a public key for verifying an email's digital signature. If the email server's DKIM signing key was reissued (e.g., a new certificate/key pair generated), the corresponding DKIM DNS record must be updated with the new public key. This is the most likely direct cause given the certificate reissue.

DMARC (Domain-based Message Authentication, Reporting, and Conformance):
Uses a DNS TXT record to specify how receivers should handle emails that fail SPF or DKIM (e.g., quarantine or reject). It also relies on the correct configuration of SPF and DKIM. Updating DMARC policies might be necessary if the failure is due to a strict policy (e.g., p=reject).

Why these three?
The certificate reissue likely affected the DKIM signing key. If the DKIM DNS record wasn't updated with the new public key, emails will fail DKIM validation. This, in turn, may cause DMARC failure if the policy requires DKIM alignment. SPF might also need updating if the mail server's IP or hostname changed.

Why the others are incorrect:

D. DNSSEC:
Used to cryptographically sign DNS records for authenticity, but it is not directly related to email authentication. It wouldn't cause emails to go to spam if disabled or misconfigured.

E. SASC:
Not a standard DNS or email protocol (likely a distractor).

F. SAN (Subject Alternative Name):
Part of an X.509 certificate, not a DNS record. The certificate was reissued, but the question focuses on updating DNS records.

G. SOA (Start of Authority):
A DNS record with administrative information about the zone (e.g., primary nameserver, serial number). Updating it wouldn't fix email authentication.

H. MX (Mail Exchanger):
Directs email to the correct mail server. If this was wrong, emails wouldn't be delivered at all (not just to spam).

Reference:
This falls under Domain 3.0: Security Engineering and Cryptography (email security) and Domain 2.0: Security Operations (troubleshooting). Proper configuration of SPF, DKIM, and DMARC in DNS is critical for email deliverability and preventing spam classification.

A developer needs to improve the cryptographic strength of a password-storage component in a web application without completely replacing the crypto-module. Which of the following is the most appropriate technique?

A. Key splitting

B. Key escrow

C. Key rotation

D. Key encryption

E. Key stretching

E.   Key stretching

Explanation:

Why E is Correct:
Key stretching is a technique specifically designed to strengthen weak passwords, such as those entered by users. It works by taking a password and passing it through a computationally intensive algorithm (like PBKDF2, bcrypt, or Argon2) that requires a significant amount of time and resources to compute. This dramatically increases the effort required for an attacker to perform a brute-force or dictionary attack, as each guess must go through the same slow process. This can be implemented on top of the existing hashing mechanism (e.g., moving from a single SHA-256 hash to PBKDF2 with SHA-256 and a high iteration count) without necessarily replacing the entire underlying cryptographic module.

Why A is Incorrect:
Key splitting involves dividing a cryptographic key into multiple parts (shards) that are distributed to different entities. This is used for securing keys and enforcing control, not for strengthening the cryptographic process of password derivation.

Why B is Incorrect:
Key escrow is the process of depositing a cryptographic key with a trusted third party to be stored for emergency access (e.g., by law enforcement). This is a governance and recovery mechanism, not a technique for improving cryptographic strength.

Why C is Incorrect:
Key rotation is the practice of retiring an encryption key and replacing it with a new one at regular intervals. This is a vital practice for limiting the blast radius of a potential key compromise but does not inherently make the algorithm used to derive a key from a password any stronger. The password-to-key process could still be weak and vulnerable to attack.

Why D is Incorrect:
Key encryption (or key wrapping) is the process of encrypting one key with another key. This is used for secure key storage and transmission. While the stored password hashes should be encrypted at rest, this is a separate control. The core weakness of simple password hashing is the speed of the hashing operation, which key encryption does not address.

Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It specifically addresses cryptographic techniques and their appropriate application, focusing on secure password storage mechanisms as outlined in best practices and standards like NIST SP 800-63B.

A security engineer performed a code scan that resulted in many false positives. The security engineer must find a solution that improves the quality of scanning results before application deployment. Which of the following is the best solution?

A. Limiting the tool to a specific coding language and tuning the rule set

B. Configuring branch protection rules and dependency checks

C. Using an application vulnerability scanner to identify coding flaws in production

D. Performing updates on code libraries before code development

A.   Limiting the tool to a specific coding language and tuning the rule set

Explanation:

Why A is Correct:
This is the most direct and effective solution to the specific problem of "many false positives" from a code scan. Static Application Security Testing (SAST) tools are notorious for generating false positives, which can overwhelm developers and lead to real issues being ignored.

Limiting to a specific language:
SAST tools perform best when they are optimized for a particular language's syntax and common pitfalls. Running a tool configured for multiple languages against a codebase written primarily in one language can trigger irrelevant rules and generate false positives.

Tuning the rule set:
This is the critical step for reducing false positives. It involves customizing the tool's rules to match the application's specific framework, libraries, and architecture. This can include:

Disabling rules that are not relevant to the project.

Adjusting the severity of certain findings.

Creating custom rules to ignore known benign patterns specific to the codebase.

Providing the tool with paths to custom libraries so it can accurately track data flow.

Tuning transforms a generic scanner into a precise tool tailored to the environment, dramatically improving the signal-to-noise ratio.

Why B is Incorrect:
Configuring branch protection rules (e.g., requiring pull requests and approvals before merging) and dependency checks (SCA - Software Composition Analysis) are excellent DevOps security practices. However, they address different problems. Branch protection enforces process, and dependency checks find vulnerabilities in third-party libraries. Neither practice directly reduces the false positive rate of a SAST tool scanning custom code for flaws.

Why C is Incorrect:
Using an application vulnerability scanner (DAST - Dynamic Application Security Testing) in production is a reactive measure. It finds vulnerabilities in a running application after it has been deployed. The question is about improving the scan results before deployment. Furthermore, running a DAST tool does not fix the root cause of the poor results from the SAST (code scan) tool; it simply uses a different, later-stage tool to find a different class of issues.

Why D is Incorrect:
Updating code libraries is a crucial maintenance activity for patching known vulnerabilities in dependencies (addressed by SCA tools). However, it has no bearing on the accuracy of a SAST tool scanning the company's own custom code for logical flaws and coding errors. The false positives are generated by the tool's analysis of the code structure, not by the version of the libraries used during development.

Reference:
This question falls under Domain 2.0: Security Operations, specifically concerning security testing in the development lifecycle and the integration and management of tools like SAST to improve software security. It also touches on the analytical skill of selecting the correct mitigation for a given problem.

Audit findings indicate several user endpoints are not utilizing full disk encryption During me remediation process, a compliance analyst reviews the testing details for the endpoints and notes the endpoint device configuration does not support full disk encryption Which of the following is the most likely reason me device must be replaced'

A. The HSM is outdated and no longer supported by the manufacturer

B. The vTPM was not properly initialized and is corrupt.

C. The HSM is vulnerable to common exploits and a firmware upgrade is needed

D. The motherboard was not configured with a TPM from the OEM supplier.

E. The HSM does not support sealing storage

D.   The motherboard was not configured with a TPM from the OEM supplier.

Explanation:

Why D is Correct:
Full disk encryption (FDE) solutions like BitLocker (Windows) or FileVault (macOS) have a strict hardware requirement: a Trusted Platform Module (TPM). A TPM is a dedicated cryptographic processor chip soldered onto the computer's motherboard.

If the audit finding states that the device configuration "does not support full disk encryption," the most fundamental and common reason is that the motherboard lacks this specific hardware component entirely.
Older computers or some very low-cost models were manufactured and sold without a TPM chip. Since the TPM is a physical hardware requirement, it cannot be added via software. The only remediation for such a device is to replace it with hardware that meets the compliance requirement (i.e., a motherboard with a TPM).

Why A, C, and E are Incorrect (HSM):
These options incorrectly refer to an HSM (Hardware Security Module). An HSM is a high-performance, external, or PCIe-based network device used to manage and protect cryptographic keys for servers, certificate authorities, and critical infrastructure. HSMs are not used for standard endpoint full-disk encryption. Endpoints use a TPM, which is a much smaller, cheaper, and less powerful cryptographic co-processor designed specifically for this purpose. Confusing TPM and HSM is a common distractor in exam questions.

Why B is Incorrect (vTPM):
A vTPM (virtual TPM) is a software-based implementation of a TPM used in virtual machines to provide the same functionality. The question is about physical "user endpoints" (e.g., laptops, desktops). A vTPM is not relevant to the physical hardware of an endpoint device. Furthermore, if a vTPM were corrupt, it could potentially be re-initialized or re-provisioned through software or hypervisor management, not necessarily requiring a full hardware replacement.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 4.0: Governance, Risk, and Compliance. It tests knowledge of hardware security capabilities (TPM vs. HSM) and the practical implications of enforcing compliance policies that have specific hardware requirements. Understanding the "why" behind a control is crucial for a CASP+.

Which of the following AI concerns is most adequately addressed by input sanitation?

A. Model inversion

B. Prompt Injection

C. Data poisoning

D. Non-explainable model

B.   Prompt Injection

Explanation:

Why B is Correct:
Prompt injection is a vulnerability specific to AI systems that use text-based prompts, particularly Large Language Models (LLMs). It occurs when an attacker crafts a malicious input (a "prompt") that tricks the model into ignoring its original instructions, bypassing safety filters, or revealing sensitive information. Input sanitation is a primary defense against this attack. It involves rigorously validating, filtering, and escaping all user-provided input before it is passed to the AI model. This helps to neutralize or render ineffective any malicious instructions embedded within the user's input, thereby preventing the model from being hijacked.

Why A is Incorrect:
Model inversion is an attack where an adversary uses the model's outputs (e.g., API responses) to reverse-engineer and infer sensitive details about the training data. This is addressed by controls on the output side (e.g., differential privacy, output filtering, limiting API response details) and model design, not by sanitizing the input prompts.

Why C is Incorrect:
Data poisoning is an attack on the training phase of an AI model. An attacker injects malicious or corrupted data into the training set to compromise the model's performance, integrity, or behavior after deployment. Defending against this requires securing the data collection and curation pipeline, using robust training techniques, and validating training data—measures that are completely separate from sanitizing runtime user input.

Why D is Incorrect:
A non-explainable model (often called a "black box" model) is a characteristic of certain complex AI algorithms where it is difficult for humans to understand why a specific decision was made. This is an inherent challenge of the model's architecture (e.g., deep neural networks) and is addressed by the field of Explainable AI (XAI), which involves using different models, tools, and techniques to interpret them. Input sanitation has no bearing on making a model's decisions more explainable.

Reference:
This question falls under the intersection of Domain 1.0: Security Architecture and emerging technologies. It tests the understanding of specific threats to AI systems and the appropriate security controls to mitigate them. Input validation/sanitation is a classic application security control that finds a new critical application in protecting AI systems from prompt injection attacks.

A security architect for a global organization with a distributed workforce recently received funding lo deploy a CASB solution Which of the following most likely explains the choice to use a proxy-based CASB?

A. The capability to block unapproved applications and services is possible

B. Privacy compliance obligations are bypassed when using a user-based deployment.

C. Protecting and regularly rotating API secret keys requires a significant time commitment

D. Corporate devices cannot receive certificates when not connected to on-premises devices

A.   The capability to block unapproved applications and services is possible

Explanation:
A Cloud Access Security Broker (CASB) is a security policy enforcement point that sits between users and cloud service providers. There are two primary deployment modes: API-based and proxy-based.

Why A is Correct:
A proxy-based CASB operates in-line, intercepting traffic in real-time between the user and the cloud application. This allows it to enforce granular access controls and policies immediately. Specifically, it can:

Block unapproved applications and services in real-time by denying connections to unauthorized cloud services.

Inspect and control data transfers (e.g., prevent uploads to personal cloud storage).

Enforce encryption and data loss prevention (DLP) policies on the fly.

This real-time blocking capability is a key advantage of proxy-based CASBs over API-based solutions, which are more focused on post-hoc monitoring and remediation.

Why B is Incorrect:
Privacy compliance obligations (e.g., GDPR, CCPA) are never "bypassed" by any deployment model. In fact, a CASB helps enforce compliance. User-based deployments (e.g., forward proxy) still must comply with privacy laws, and the deployment choice does not negate these obligations.

Why C is Incorrect:
While managing API keys for an API-based CASB can be administratively burdensome, this is not the primary reason for choosing a proxy-based CASB. The key differentiator is the need for real-time enforcement (like blocking) rather than just visibility and retrospective controls.

Why D is Incorrect:
Certificates for authentication (e.g., for SSL inspection) can be deployed to corporate devices remotely using mobile device management (MDM) or similar tools, regardless of whether they are connected on-premises. This is not a significant barrier and is not the main driver for selecting a proxy-based CASB.

Reference:
This question falls under Domain 1.0: Security Architecture. It tests the understanding of CASB deployment modes and their respective strengths. Proxy-based CASBs are chosen when real-time control and blocking are required, which aligns with the need to enforce policies for a distributed workforce accessing cloud services.

A security engineer is given the following requirements:

• An endpoint must only execute Internally signed applications

• Administrator accounts cannot install unauthorized software.

• Attempts to run unauthorized software must be logged

Which of the following best meets these requirements?

A. Maintaining appropriate account access through directory management and controls

B. Implementing a CSPM platform to monitor updates being pushed to applications

C. Deploying an EDR solution to monitor and respond to software installation attempts

D. Configuring application control with blocked hashes and enterprise-trusted root certificates

D.   Configuring application control with blocked hashes and enterprise-trusted root certificates

Explanation:

The requirements are:

Only execute internally signed applications:
This requires whitelisting based on code signing.

Prevent administrator accounts from installing unauthorized software: This requires enforcement that overrides even admin privileges.

Log attempts to run unauthorized software:
This requires detailed auditing of execution attempts.

Option D best meets all these requirements:
Application control (e.g., Windows AppLocker or SRP) can be configured to:

Allow only applications signed with enterprise-trusted root certificates (e.g., your organization's internal code signing certificate). This ensures only internally signed software runs.

Block hashes of specific unauthorized applications if needed.

Enforce policies that apply to all users, including administrators, preventing them from running unauthorized installers or executables.

Log all attempts to execute blocked software for auditing and alerting.

Why the other options are incorrect:

A) Maintaining account access through directory management:
While directory controls (e.g., limiting admin privileges) can help, they are not foolproof. Administrators may still have privileges, and this approach does not directly enforce code signing or log execution attempts.

B) Implementing a CSPM (Cloud Security Posture Management):
CSPM is for securing cloud infrastructure (e.g., misconfigurations in AWS/Azure). It does not control endpoint software execution or logging.

C) Deploying an EDR (Endpoint Detection and Response):
EDR is great for monitoring and responding to threats, but it is primarily detective rather than preventive. It might log installation attempts but cannot inherently prevent administrators from running unauthorized software or enforce code signing policies. Application control (option D) is the preventive measure.

Reference:
This aligns with Domain 1.0: Security Architecture (endpoint security) and Domain 2.0: Security Operations (policy enforcement). Application control with code signing is a best practice for locking down endpoints and meeting strict compliance requirements.

A security analyst discovered requests associated with IP addresses known for born legitimate 3nd bot-related traffic. Which of the following should the analyst use to determine whether the requests are malicious?

A. User-agent string

B. Byte length of the request

C. Web application headers

D. HTML encoding field

A.   User-agent string

Explanation:
The security analyst has identified requests from IP addresses known for bot or illegitimate traffic. To determine if these requests are malicious, the analyst needs to inspect elements that can reveal the nature of the client making the request.

Why A is Correct:
The User-Agent string is a header in HTTP requests that identifies the client software (browser, bot, script, etc.) making the request. Malicious bots often use: Generic or spoofed User-Agent strings (e.g., "python-requests/2.28.1" for a script).

Outdated browsers (indicating automation).

Strings known to be associated with scraping tools or vulnerability scanners.

By analyzing the User-Agent, the analyst can distinguish between legitimate traffic (e.g., known browsers) and malicious automation (e.g., bots, scanners).

Why the other options are less effective:

B) Byte length of the request:
While unusual request lengths might indicate anomalies (e.g., buffer overflow attempts), they are not a reliable indicator of bot traffic. Legitimate requests can vary in length, and malicious requests might mimic normal sizes.

C) Web application headers:
Headers like Accept-Language or Referer can be manipulated by bots and are less definitive than the User-Agent for identifying automation.

D) HTML encoding field:
HTML encoding (e.g., Content-Encoding) relates to how data is formatted for transmission and is not typically used to distinguish malicious bots. It is more relevant for data processing than threat detection.

Reference:
This falls under Domain 2.0: Security Operations (threat detection). Analyzing User-Agent strings is a common technique for identifying bot traffic and automated attacks in web logs.

A security team is responding to malicious activity and needs to determine the scope of impact the malicious activity appears to affect certain version of an application used by the organization Which of the following actions best enables the team to determine the scope of Impact?

A. Performing a port scan

B. Inspecting egress network traffic

C. Reviewing the asset inventory

D. Analyzing user behavior

C.   Reviewing the asset inventory

Explanation:
The security team knows that the malicious activity affects certain versions of an application. To determine the scope of impact, they need to quickly identify all systems within the organization that are running those vulnerable versions.

Why C is Correct:
A comprehensive and accurate asset inventory is a centralized database that tracks:

All hardware and software assets in the organization.

Software versions installed on each system.

Ownership and location of assets.

By querying the asset inventory, the team can instantly generate a list of all devices running the affected application versions. This directly answers the question: "Where is this vulnerable software deployed, and how many systems are at risk?"

Why the other options are incorrect:

A) Performing a port scan:
A port scan identifies open ports and services on network devices. It might reveal that a service is running, but it cannot reliably determine the specific version of an application (especially for custom or non-standard services). It is too slow and imprecise for this task.

B) Inspecting egress network traffic:
This helps identify data exfiltration or command-and-control communication from already compromised systems. It is useful for understanding what an attacker is doing but does not help in proactively identifying all potentially vulnerable systems that might not yet be compromised.

D) Analyzing user behavior:
This is used to detect anomalies like insider threats or compromised accounts. It does not help in mapping the deployment of a specific vulnerable application version across the enterprise.

Reference:
This aligns with Domain 2.0: Security Operations (incident response) and Domain 4.0: Governance, Risk, and Compliance (asset management). During an incident, an accurate asset inventory is critical for impact assessment and containment. Tools like CMDBs (Configuration Management Databases) are essential for this purpose.

Page 5 out of 33 Pages