CompTIA CAS-005 Practice Test

Prepare smarter and boost your chances of success with our CompTIA CAS-005 practice test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms show that individuals who use a CAS-005 practice exam are 40–50% more likely to pass on their first attempt.

Start practicing today and take the fast track to becoming CompTIA CAS-005 certified.

Updated On : 11-Sep-2025
103 Questions


A security administrator is performing a gap assessment against a specific OS benchmark. The benchmark requires the following configurations be applied to endpoints:

• Full disk encryption

• Host-based firewall

• Time synchronization

• Password policies

• Application allow listing

• Zero Trust application access

Which of the following solutions best addresses the requirements? (Select two).

A. CASB

B. SBoM

C. SCAP

D. SASE

E. HIDS

C.   SCAP
D.   SASE

Explanation:
The question asks for solutions that can help an administrator apply and check for a wide range of specific endpoint configuration requirements. The correct answers are the two technologies designed to automate and enforce these types of technical security controls.

C. SCAP (Security Content Automation Protocol):

Explanation:
SCAP is a suite of specifications that standardize how security software products communicate information about software flaws and security configurations. It is specifically designed for automating compliance checking against benchmarks (like the one mentioned in the question: CIS Benchmarks, DISA STIGs, etc.). An administrator can use an SCAP-compliant tool to scan all endpoints and automatically verify their compliance with each requirement (disk encryption, firewall status, password policies, etc.). It directly addresses the "gap assessment" part of the task.

Reference:
SCAP relates to the CAS-005 objectives covering compliance checking and secure configuration management.
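In practice, a SCAP-compliant scanner emits XCCDF results containing a pass/fail verdict for each benchmark rule. The minimal sketch below shows how those verdicts translate into a gap list; the XML fragment is a hypothetical, namespace-stripped illustration (real XCCDF results use a NIST namespace and carry far more detail):

```python
# Simplified sketch of parsing SCAP (XCCDF-style) scan results to find gaps.
# The rule IDs and XML shape below are illustrative assumptions.
import xml.etree.ElementTree as ET

results_xml = """
<TestResult>
  <rule-result idref="xccdf_rule_full_disk_encryption"><result>pass</result></rule-result>
  <rule-result idref="xccdf_rule_host_firewall"><result>fail</result></rule-result>
  <rule-result idref="xccdf_rule_time_sync"><result>pass</result></rule-result>
  <rule-result idref="xccdf_rule_password_policy"><result>fail</result></rule-result>
</TestResult>
"""

root = ET.fromstring(results_xml)
# A "gap" is any benchmark rule the endpoint failed.
gaps = [rr.get("idref") for rr in root.iter("rule-result")
        if rr.findtext("result") == "fail"]
print(gaps)
```

An SCAP tool runs this kind of evaluation across every endpoint, which is exactly what makes it suitable for the gap assessment in the question.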

D. SASE (Secure Access Service Edge):

Explanation:
SASE is a cloud-based security model that combines comprehensive network security functions (like FWaaS and CASB) with wide-area networking (SD-WAN). It directly addresses several requirements from the list:

Zero Trust application access:
This is a core principle of SASE. It verifies user and device identity and context before granting access to applications, regardless of network location.

Host-based firewall:
While SASE provides a cloud-delivered firewall, it can enforce policies that protect the endpoint.

Application allow listing:
SASE can control which cloud and web applications users are allowed to access.

Reference:
SASE is a modern security framework that integrates multiple technologies to secure access in a cloud-centric world.

Why Not the Others:

A. CASB (Cloud Access Security Broker):

Why it's incorrect:
A CASB is a security policy enforcement point that sits between cloud service users and cloud applications. It is excellent for securing cloud app usage (data loss prevention, cloud application discovery) but is not designed to enforce OS-level configurations like full disk encryption, host-based firewalls, or time synchronization on an endpoint itself. It is a component that can be part of a SASE solution.

B. SBOM (Software Bill of Materials):

Why it's incorrect:
An SBOM is a nested inventory of all components, libraries, and dependencies used in a software application. It is used for software supply chain security, vulnerability management, and license compliance. It has no functionality for deploying or checking the security configurations of an operating system on an endpoint.

E. HIDS (Host-Based Intrusion Detection System):

Why it's incorrect:
A HIDS monitors a single host for malicious activity by analyzing system calls, log files, and other host-specific data. While it can detect changes or attacks based on configurations, its primary purpose is detection, not enforcement or assessment. It does not apply disk encryption, set password policies, or configure time synchronization. An SCAP scanner would be used to check if those settings are correct.

A company wants to implement a three-tier approach to separate the web, database, and application servers. A security administrator must harden the environment. Which of the following is the best solution?

A. Deploying a VPN to prevent remote locations from accessing server VLANs

B. Configuring a SASE solution to restrict users to server communication

C. Implementing microsegmentation on the server VLANs

D. Installing a firewall and making it the network core

C.   Implementing microsegmentation on the server VLANs

Explanation:
The core requirement is to "harden" a segmented, three-tier architecture (web, app, database). Hardening means reducing the attack surface by restricting unnecessary access.

Microsegmentation is a security technique that creates secure zones within a data center or cloud deployment. It allows you to isolate workloads (like these servers) from one another and secure them individually.

In a three-tier model, the principle of least privilege is critical. For example:

Web servers should only need to communicate with application servers on specific ports.

Application servers should only need to communicate with database servers on specific ports.

Database servers should not be directly accessible from the web.

Microsegmentation enforces this by applying fine-grained security policies (e.g., firewall rules) between each server, even though they might be on the same VLAN or network segment. This containment prevents a breach in one tier (e.g., the web server) from easily spreading to the others (e.g., the database).
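The least-privilege flows described above can be sketched as a default-deny policy table. Tier names and ports here are illustrative assumptions, not any vendor's policy format:

```python
# Minimal sketch of a default-deny microsegmentation policy for a three-tier
# architecture. Only explicitly allowed tier-to-tier flows on specific ports pass.
ALLOWED_FLOWS = {
    ("web", "app", 8443),  # web tier -> app tier on an assumed API port
    ("app", "db", 5432),   # app tier -> database tier (e.g., PostgreSQL)
}

def is_allowed(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default deny: anything not on the allow list is blocked."""
    return (src_tier, dst_tier, port) in ALLOWED_FLOWS

print(is_allowed("web", "app", 8443))  # permitted flow
print(is_allowed("web", "db", 5432))   # web may never reach the database directly
```

A compromised web server attempting to reach the database directly is denied, which is the containment property microsegmentation provides.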

Reference:
This aligns with the core security concepts of least privilege and network segmentation, which are fundamental to securing complex architectures.

Why Not the Others:

A. Deploying a VPN to prevent remote locations from accessing server VLANs

Why it's incorrect:
This solution only addresses external access from remote locations. It does nothing to control traffic between the servers themselves within the data center. If an attacker compromises the web server, a VPN does not prevent them from attacking the database server.

B. Configuring a SASE solution to restrict users to server communication

Why it's incorrect:
SASE (Secure Access Service Edge) is primarily focused on securing remote user access to applications and services, typically from outside the corporate network. It is not the right tool for hardening and controlling east-west traffic (server-to-server communication) within a data center environment.

D. Installing a firewall and making it the network core

Why it's incorrect:
While a traditional network firewall at the core can filter north-south traffic (traffic entering and leaving the network), it is not well-suited for managing east-west traffic between servers. It lacks the granularity and agility to create the fine-grained policies needed between specific servers in a three-tier application. Microsegmentation is a more modern and effective approach for this internal security requirement.

A company detects suspicious activity associated with external connections. Security detection tools are unable to categorize this activity. Which of the following is the best solution to help the company overcome this challenge?

A. Implement an Interactive honeypot

B. Map network traffic to known IoCs

C. Monitor the dark web

D. Implement UEBA

D.   Implement UEBA

Explanation:
The company's core challenge is a classic example of a modern threat detection gap. Traditional security tools—like Intrusion Detection Systems (IDS), antivirus, and SIEMs that rely solely on signatures—are failing. They operate by comparing network traffic and system activity against a database of known threats, called Indicators of Compromise (IOCs).

When these tools "are unable to categorize the activity," it strongly implies that the suspicious behavior does not match any known signature or pattern in their databases. This is characteristic of advanced threats, such as:

Zero-day exploits:
Attacks that exploit a previously unknown vulnerability.

Polymorphic malware:
Malware that changes its code to avoid signature detection.

Advanced Persistent Threats (APTs):
Stealthy, continuous hacking processes often orchestrated by nation-states or highly skilled actors who use custom, unique tools.

Insider threats:
Malicious activity by a legitimate user, which wouldn't have a known malicious signature.

UEBA is the optimal solution to this problem because it operates on a fundamentally different principle: behavioral analytics.

Instead of asking "Does this activity match a known bad pattern?" (signature-based), UEBA asks "Does this activity deviate from this user's or system's normal, established behavior?" (anomaly-based).

Here’s how UEBA works and why it directly addresses the challenge:

Establishes a Baseline:
UEBA systems use machine learning and statistical analysis to observe the network over a period of time (e.g., 30-90 days). They learn the typical "patterns of life" for every user, server, and network device. This includes:

For users:
Normal login times, locations, devices used, data access patterns, and typical network destinations.

For servers:
Normal processes, expected network connections, and typical traffic volume.

Detects Anomalies:
Once the baseline is established, UEBA continuously monitors activity. It flags actions that statistically deviate from the norm. For example:

A user account accessing a sensitive database at 3 AM for the first time ever.

An application server initiating connections to an unknown external IP address in a foreign country.

A massive volume of data being exfiltrated from a database that normally has only small, internal queries.

Focuses on the "Unknown":
Since UEBA does not rely on a pre-existing list of known-bad indicators, it is exceptionally well-suited for identifying the very types of novel, sophisticated, and uncategorized threats described in the question. It can detect the subtle, suspicious activity that signature-based tools miss.

Provides Context and Categorization:
A key strength of UEBA is its ability to correlate low-risk events from multiple sources into a high-fidelity alert. A single failed login might be nothing. A failed login, followed minutes later by a successful login from a different country, combined with unusual file access, is a high-confidence indicator of compromise. This provides the categorization that the company's current tools lack.

In summary, the company is facing an unknown, behavioral threat. UEBA is specifically designed to analyze behavior, establish a baseline of normalcy, and flag deviations, making it the best solution to overcome the challenge of categorizing and detecting advanced, suspicious activity.

Detailed Analysis of Other Options (Why They Are Incorrect):

A. Implement an Interactive Honeypot

Explanation:
An interactive honeypot is a decoy system designed to mimic a real production asset (like a server or network) to attract and engage attackers. Its primary purposes are:

Diversion:
To lure attackers away from valuable real assets.

Intelligence Gathering:
To study an attacker's methods, tools, and motivations in a controlled environment.

Why it's not the best solution:
A honeypot is a proactive and research-focused tool. It might help you learn about new attack techniques in the future, but it does nothing to analyze, categorize, or respond to the existing suspicious activity that is already happening on the live network. It is not a detection mechanism for in-progress attacks on production systems.

B. Map Network Traffic to Known IOCs

Explanation:
IOCs (Indicators of Compromise) are forensic artifacts from known cyberattacks. They are essentially the "fingerprints" of a threat. Common IOCs include malicious IP addresses, file hashes (MD5, SHA-256), domain names, and known malicious URL patterns.

Why it's not the best solution:
This is precisely what the company's current security tools are already doing—and failing at. The question explicitly states that these tools "are unable to categorize this activity." This means the activity does not match any known IOCs. Therefore, attempting to map the traffic to IOCs again would be futile and would yield the same negative result. This approach is ineffective against novel, unknown threats.

C. Monitor the Dark Web

Explanation:
The dark web is a part of the internet that requires specific software to access and is often used for anonymous, and sometimes illicit, activities. Companies monitor it for threats, such as:

Finding their stolen corporate data or intellectual property for sale.

Discovering discussions or plans to attack their organization.

Identifying leaked employee credentials.

Why it's not the best solution:
While dark web monitoring is a valuable component of threat intelligence, it is a strategic, long-term activity. It is not a tactical solution for investigating active, ongoing suspicious network traffic. The intelligence gathered might be useful for future defense planning or post-breach analysis, but it will not help in real-time to categorize the specific uncategorized connections happening at that moment. It is too slow and indirect for this immediate challenge.

Conclusion:
The company needs a tool that moves beyond outdated signature-based detection. UEBA represents this modern, analytics-driven approach, making it the only choice capable of understanding and categorizing the sophisticated, unknown threat described in the scenario.

Users must accept the terms presented in a captive portal when connecting to a guest network. Recently, users have reported that they are unable to access the Internet after joining the network. A network engineer observes the following:

• Users should be redirected to the captive portal.

• The captive portal runs TLS 1.2

• Newer browser versions encounter security errors that cannot be bypassed

• Certain websites cause unexpected redirects

Which of the following most likely explains this behavior?

A. The TLS ciphers supported by the captive portal are deprecated

B. Employment of the HSTS setting is proliferating rapidly.

C. Allowed traffic rules are causing the NIPS to drop legitimate traffic

D. An attacker is redirecting supplicants to an evil twin WLAN.

B.   Employment of the HSTS setting is proliferating rapidly.

Explanation:
To understand why this is the correct answer, we need to break down the problem and the role of HSTS.

1. The Mechanics of a Captive Portal:
A captive portal works by intercepting a user's initial web request (usually an HTTP request) and redirecting it to the portal's login page. This interception is a form of a Man-in-The-Middle (MiTM) attack, but it's a benevolent one performed by the network for a specific purpose. Traditionally, this was done using HTTP because it's easy to intercept and redirect.

2. The Rise of HTTPS and HSTS:
To make the web more secure, there has been a massive push towards encrypting all web traffic using HTTPS (HTTP over TLS). A critical security mechanism that enforces this is HSTS (HTTP Strict Transport Security).

What HSTS Does:
When a website uses HSTS, it tells a user's browser: "For a specified period of time, you may only connect to me using HTTPS. Never use HTTP."

How it Works:
The first time a user visits example.com over HTTPS, the server sends an HSTS header. The browser stores this instruction (this is called the "HSTS preload list" for major sites). On all subsequent visits, even if the user types http://example.com, the browser will automatically change the request to https://example.com before sending it. This prevents downgrade attacks.

3. The Conflict with Captive Portals:

The problem described in the question arises directly from this conflict:
A user joins the guest Wi-Fi and tries to visit a common website like google.com.

The user's browser has HSTS preloaded for Google (and thousands of other sites). It automatically converts http://google.com to https://google.com.

The user's device sends an encrypted HTTPS request to Google.

The network's captive portal, which sits between the user and the internet, tries to intercept this request to redirect it to the portal page.

However, it cannot decrypt the HTTPS traffic because it does not possess the private key for google.com. Any attempt to interfere with this encrypted connection (e.g., by trying to redirect it or present a different certificate) is correctly flagged by the browser as a major security violation—a "man-in-the-middle attack."

The browser, obeying the HSTS policy, blocks the user from proceeding to the captive portal. This results in a hard, "cannot be bypassed" security error. The user is stuck.

4. Why This Matches All the Observed Symptoms:

"Newer browser versions encounter security errors that cannot be bypassed":
Newer browsers have more extensive HSTS preload lists and are stricter about enforcing HTTPS and TLS policies.

"Certain websites cause unexpected redirects":
This likely refers to sites that do not use HSTS. Their HTTP traffic can still be intercepted and redirected to the captive portal successfully. This creates an inconsistent experience for the user: some sites (non-HSTS) trigger the portal, while others (HSTS) cause a hard error.

"The captive portal runs TLS 1.2":
This is a red herring. While TLS 1.2 is still widely supported and secure, the core issue is not the TLS version but the fundamental conflict between HSTS and portal interception.

Conclusion for Option B:
The rapid proliferation of HSTS across the web is the root cause. More and more websites are implementing HSTS and are included in browser preload lists, making the traditional HTTP-intercepting captive portal method increasingly broken and unreliable. This explains all the specific symptoms described.

Detailed Analysis of Other Options (Why They Are Incorrect):

A. The TLS ciphers supported by the captive portal are deprecated.

Why it's incorrect:
While deprecated ciphers can cause TLS handshake failures and errors, the symptoms described don't align perfectly. A cipher mismatch would likely prevent the captive portal page from loading at all, for all users, regardless of the website they try to visit. It wouldn't explain why the problem is specific to "newer browser versions" and "certain websites." Newer browsers might reject bad ciphers, but the primary issue described is a security error about the connection being intercepted, not an error about a failed cipher negotiation.

C. Allowed traffic rules are causing the NIPS to drop legitimate traffic.

Why it's incorrect:
A Network-based Intrusion Prevention System (NIPS) filters traffic for malicious patterns. If misconfigured, it could block traffic. However, this would not generate the specific browser-based security errors mentioned. The errors would likely be generic timeouts or "connection reset" messages, not TLS/HTTPS interception warnings. Furthermore, the problem's correlation with specific websites and browser versions points to an application-layer protocol issue (HTTP/HTTPS), not a network-layer block.

D. An attacker is redirecting supplicants to an evil twin WLAN.

Why it's incorrect:
An evil twin WLAN is a malicious rogue access point set up to mimic a legitimate one. While it could indeed cause redirects and security errors, it is not the "most likely" explanation in this context.

The engineer's observation that "Users should be redirected to the captive portal" implies they are on the correct, intended network.

The symptoms are widespread and consistent with known technological changes (HSTS), not the isolated, potentially fraudulent activity of an evil twin.

An evil twin would not cause problems that are specifically tied to "newer browser versions" and "certain websites" in the way HSTS does. Its effect would be more universal for all users connecting to the fake network.

Final Conclusion:
The evidence overwhelmingly points to a fundamental architectural conflict between web security (HSTS) and the traditional method of operating captive portals. The proliferation of HSTS is a well-documented industry-wide issue that network administrators must now address by implementing newer, more secure captive portal methods that work with HTTPS instead of breaking it. Therefore, Option B is the most logical and likely explanation.

A security review revealed that not all of the client proxy traffic is being captured. Which of the following architectural changes best enables the capture of traffic for analysis?

A. Adding an additional proxy server to each segmented VLAN

B. Setting up a reverse proxy for client logging at the gateway

C. Configuring a span port on the perimeter firewall to ingest logs

D. Enabling client device logging and system event auditing

C.   Configuring a span port on the perimeter firewall to ingest logs

Explanation:
The core problem is that the existing proxy infrastructure is missing some client traffic. This means the chosen solution must be able to see all network traffic regardless of whether clients are correctly configured to use the proxy or not. The solution must be passive and universal.

A SPAN port (Switched Port Analyzer), also known as port mirroring, is the ideal architectural solution for this challenge. Here's why:

What it is:
A SPAN port is a configured port on a network switch or firewall that copies all network traffic passing through a specific source port or VLAN and sends it to a destination port (where an analysis tool is connected).

How it Solves the Problem:

Captures All Traffic:
By mirroring the traffic from the port connected to the perimeter firewall (or the entire client VLAN), the SPAN port captures every packet that enters or leaves the network segment. This includes:

Traffic that is correctly sent to the forward proxy.

Traffic that bypasses the proxy (e.g., if a client is misconfigured, uses a VPN, or is infected with malware that communicates directly).

Non-HTTP/S traffic that a proxy might not handle.

Passive and Transparent:
The SPAN port operates invisibly. It does not interfere with the flow of network traffic, add latency, or require any configuration changes on client devices. It simply creates a perfect copy for analysis.

Feeds Analysis Tools:
The copied traffic from the SPAN port can be ingested by a dedicated packet capture appliance, a network detection and response (NDR) system, or a security information and event management (SIEM) system for deep analysis, thus fulfilling the requirement to "capture traffic for analysis."

Why it's the "Architectural Change":
Configuring a SPAN port is a change to the network's infrastructure itself—it's a core function of the network hardware. It provides a holistic, network-level view that application-level solutions like proxies cannot.

In summary, a SPAN port acts as a comprehensive net, catching all traffic for analysis and eliminating the blind spots created by relying solely on clients to correctly send their traffic to a proxy.
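The blind-spot argument can be illustrated with toy flow records. The hosts, destinations, and flags below are invented for the example:

```python
# Illustrative comparison: a forward proxy only logs traffic sent to it,
# while a mirrored (SPAN) capture copies every flow on the link.
all_flows = [
    {"src": "pc-01", "dst": "proxy", "via_proxy": True},
    {"src": "pc-02", "dst": "203.0.113.7", "via_proxy": False},   # misconfigured client
    {"src": "pc-03", "dst": "proxy", "via_proxy": True},
    {"src": "pc-04", "dst": "198.51.100.9", "via_proxy": False},  # malware beaconing out
]

proxy_log = [f for f in all_flows if f["via_proxy"]]
span_capture = list(all_flows)  # port mirroring sees everything on the link

missed_by_proxy = [f["src"] for f in all_flows if f not in proxy_log]
print(len(proxy_log), len(span_capture))
print(missed_by_proxy)
```

The misconfigured client and the beaconing host never appear in the proxy log, but both are present in the mirrored capture, which is the visibility gap the SPAN port closes.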

Detailed Analysis of Other Options (Why They Are Incorrect):

A. Adding an additional proxy server to each segmented VLAN

Why it's incorrect:
This solution still relies on the same flawed principle: it requires client devices to be explicitly configured to use the proxy. If traffic is not being captured now, adding more proxies does not solve the root cause. Malicious actors, misconfigured devices, or software using direct connections will continue to bypass any proxy server. This adds complexity and cost without solving the visibility gap. It addresses the symptom (proxy capacity/placement) but not the disease (incomplete traffic capture).

B. Setting up a reverse proxy for client logging at the gateway

Why it's incorrect:
A reverse proxy sits in front of servers (e.g., web servers, application servers) to manage incoming traffic from clients. Its primary functions are load balancing, SSL termination, and protecting backend servers. It is not designed to capture or log outbound traffic originating from internal clients. A reverse proxy has zero visibility into the traffic being discussed—the traffic leaving the company from employee workstations. This solution is architecturally backwards for the problem described.

D. Enabling client device logging and system event auditing

Why it's incorrect:
While enabling logging on endpoints is a crucial security practice, it is not the best solution for this specific problem of capturing network traffic.

Scale and Management:
Collecting and analyzing logs from every single client device is immensely complex, resource-intensive, and creates a massive data management problem.

Different Data Type:
Client logs show what the operating system and applications did (process execution, registry changes, file access). They do not provide the full, raw packet data of network traffic needed for deep analysis of communication patterns, protocols, and payloads.

Easily Bypassed:
Malware or a sophisticated user can often disable local logging or delete logs, obscuring their activity. A network-based SPAN port is outside their control and provides an immutable record.

Conclusion:
The question identifies a gap in visibility due to the inherent limitation of forward proxies: they only see traffic sent to them. The only way to guarantee complete capture of all north-south traffic (traffic leaving the network) for analysis is to implement a network-level solution that is independent of client configuration. Configuring a SPAN port on a key network chokepoint, like the perimeter firewall, is the most reliable, efficient, and comprehensive architectural change to achieve this goal. It provides a single point of data collection for all traffic, ensuring nothing is missed.

An organization wants to implement a platform to better identify which specific assets are affected by a given vulnerability. Which of the following components provides the best foundation to achieve this goal?

A. SASE

B. CMDB

C. SBoM

D. SLM

B.   CMDB

Explanation:
The core goal is vulnerability impact analysis: "Which specific assets are affected by a given vulnerability?" An "asset" in this context is a hardware device, server, virtual machine, or software instance within the organization's IT environment.

A CMDB (Configuration Management Database) is the fundamental system of record designed specifically for this purpose. Here’s why it is the best foundation:

Centralized Inventory of Assets:
A CMDB is a centralized repository that stores information about all the critical IT assets in an organization, often referred to as Configuration Items (CIs). This includes servers, workstations, network devices, and the software installed on them.

Relationship Mapping:
The true power of a CMDB lies not just in listing assets, but in tracking the relationships between them. For example, a CMDB knows:

Which specific software components (e.g., Apache Tomcat v8.5.1) are installed on which servers.

Which servers host a particular business application.

How a virtual machine relates to its underlying physical host.

Which network switch connects to a specific group of servers.

How It Enables Vulnerability Identification:
When a new vulnerability is announced (e.g., a flaw in a specific version of a software library), the organization can query its CMDB.

The query is simple: "Show me all assets (CIs) where the software component 'X' with version 'Y' is installed."
Because the CMDB maintains an accurate and detailed inventory with relationships, it can instantly return a precise list of every affected server, workstation, and application.

This allows security and IT teams to quickly prioritize patching based on the criticality of the affected assets, dramatically reducing the mean time to remediate (MTTR).

In essence, the CMDB provides the essential context—the "what" and "where"—that transforms a generic vulnerability advisory into an actionable list of specific business assets that need attention. It is the cornerstone of IT Service Management (ITSM) and IT Asset Management (ITAM), making it the best foundational component for this goal.
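The query described above can be sketched against a toy CMDB. The CI records, component names, and versions are invented for illustration; real CMDBs expose this through query APIs rather than in-memory lists:

```python
# Sketch of a CMDB-style query: given a vulnerable component and version,
# return every asset (configuration item) that runs it.
cmdb = [
    {"ci": "web-01", "software": {"apache-tomcat": "8.5.1", "openssl": "3.0.8"}},
    {"ci": "app-02", "software": {"apache-tomcat": "9.0.70"}},
    {"ci": "db-01",  "software": {"postgresql": "14.2", "openssl": "3.0.8"}},
]

def affected_assets(component: str, version: str) -> list:
    """List the CIs whose software inventory matches the vulnerable version."""
    return [record["ci"] for record in cmdb
            if record["software"].get(component) == version]

print(affected_assets("openssl", "3.0.8"))       # every host with the flawed build
print(affected_assets("apache-tomcat", "8.5.1"))
```

A vulnerability advisory naming a component and version becomes, in one lookup, a concrete list of assets to patch.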

Detailed Analysis of Other Options (Why They Are Incorrect):

A. SASE (Secure Access Service Edge)

Why it's incorrect:
SASE is a security and networking architecture that combines wide-area networking (SD-WAN) with cloud-native security functions (like FWaaS, CASB, ZTNA). Its primary goal is to provide secure and fast access to applications for users, regardless of their location. While SASE solutions may have some security features, they are not designed to be an inventory or asset management system. They do not track detailed software versions on endpoints to determine vulnerability impact. SASE is about enforcing policy and securing access, not inventorying assets.

C. SBoM (Software Bill of Materials)

Why it's incorrect:
An SBoM is a nested inventory, a formal record that lists the components, libraries, and dependencies that make up a piece of software. It is a component-level manifest for a single software application.

The Gap:
An SBoM tells you that Application v2.0 contains Vulnerable Library v1.5. This is incredibly valuable for the software developer or consumer to know a vulnerability exists.

However, an SBoM does not tell you where that application is installed across your entire enterprise. It lacks the asset and relationship context that a CMDB provides. The SBoM is the "what," but the CMDB provides the "where." You need both for complete visibility, but the CMDB is the foundational platform for identifying affected assets.

D. SLM (Service Level Management or Service Level Manager)

Why it's incorrect:
SLM is a process, not a platform or database. It is the practice of defining, monitoring, and managing agreements (SLAs - Service Level Agreements) between IT service providers and their customers. SLM is focused on ensuring services meet predefined performance, availability, and quality standards (e.g., "99.9% uptime").

While understanding the impact of a vulnerability on service levels is crucial for prioritization, SLM itself does not provide the technical inventory data needed to identify which assets are vulnerable. It relies on data from other sources, like a CMDB, to perform its function.

Conclusion:
The question asks for the best foundation to identify assets affected by a vulnerability. This is a core function of asset and configuration management.

SASE is a network security framework.

SBoM is a component manifest for a single software product.

SLM is a process for managing service quality.

Only the CMDB serves as the central source of truth for what assets exist, what software they run, and how they are connected. It is the fundamental database that vulnerability scanners and IT service management platforms integrate with to provide precise, actionable impact analysis. Therefore, it is the correct and best foundation for achieving the stated goal.

An audit finding reveals that a legacy platform has not retained logs for more than 30 days. The platform has been segmented due to its interoperability with newer technology. As a temporary solution, the IT department changed the log retention to 120 days. Which of the following should the security engineer do to ensure the logs are being properly retained?

A. Configure a scheduled task nightly to save the logs

B. Configure event-based triggers to export the logs at a threshold.

C. Configure the SIEM to aggregate the logs

D. Configure a Python script to move the logs into a SQL database.

C.   Configure the SIEM to aggregate the logs

Explanation:
The core problem is ensuring long-term log retention on a legacy, segmented system. The temporary local fix (increasing retention to 120 days) is unreliable for several reasons: the local hard drive could fill up, the legacy system might crash and lose data, or the setting might accidentally be reverted.

The only way to ensure logs are properly retained is to get them off the legacy system entirely and onto a dedicated, secure, and managed logging platform.

A SIEM (Security Information and Event Management) system is specifically designed for this purpose. Here’s why it is the best and most professional solution:

Centralized Aggregation:
The primary function of a SIEM is to aggregate logs from thousands of different sources across an entire organization (servers, network devices, applications, etc.) into a single, centralized platform.

Guaranteed Retention:
Once the logs are in the SIEM, they are subject to the SIEM's own robust retention policies. The SIEM is built on scalable storage (often in a secure, modern environment) designed to hold massive amounts of log data for exactly the periods required by compliance and security policy (e.g., 120 days, or even years). This eliminates the risk of log loss due to issues on the legacy device.

Additional Security Value:
Beyond simple retention, aggregating logs into a SIEM provides immense security benefits:

Correlation:
The logs from the legacy system can be correlated with logs from other systems to detect complex, multi-stage attacks.

Analysis:
Security analysts can search and investigate activity across the entire enterprise from one interface.

Alerting:
The SIEM can generate alerts based on specific suspicious events from the legacy system.

Addresses the Segmentation:
The legacy system is segmented, likely meaning it has restricted network communication. The security engineer would need to work with the network team to create a secure, limited firewall rule allowing the legacy system to send its logs only to the SIEM's ingestion port. This is a common and accepted practice that maintains security while solving the retention problem.

In summary, configuring the SIEM to pull or receive logs from the legacy system is the most reliable, scalable, and secure method to ensure proper long-term retention and also adds significant security visibility.
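As an illustration of one common ingestion path, the sketch below uses Python's standard logging library to forward records from a host to a SIEM's syslog listener. The hostname and port are placeholders; UDP syslog on port 514 is a common default, but a real SIEM deployment may require TCP or TLS ingestion instead, and the legacy platform would more likely use a native syslog agent than Python.

```python
import logging
import logging.handlers

def build_siem_forwarder(siem_host: str, siem_port: int = 514) -> logging.Logger:
    """Return a logger that forwards its records to a SIEM syslog listener.

    siem_host and siem_port are assumptions for this sketch; confirm the
    SIEM's actual ingestion endpoint and protocol before use.
    """
    logger = logging.getLogger("legacy-platform")
    logger.setLevel(logging.INFO)
    # SysLogHandler with a (host, port) tuple sends RFC 3164-style messages
    # over UDP by default.
    handler = logging.handlers.SysLogHandler(address=(siem_host, siem_port))
    handler.setFormatter(logging.Formatter("legacy-platform: %(message)s"))
    logger.addHandler(handler)
    return logger

# Usage (address is hypothetical):
log = build_siem_forwarder("127.0.0.1")
log.info("audit: user login succeeded")
```

Because the records leave the host as soon as they are generated, retention no longer depends on the legacy system's local disk or configuration.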

Detailed Analysis of Other Options (Why They Are Incorrect):

A. Configure a scheduled task nightly to save the logs

D. Configure a Python script to move the logs into a SQL database.

Why they are incorrect:
These are both forms of custom, local scripting. While they might work, they are fragile, inefficient, and do not represent industry best practices, especially for a critical requirement like audit compliance.

Single Point of Failure:
Both the script and the scheduled task run on the legacy system itself. If the system fails, the script fails, and logs are lost.

Lack of Integrity:
A simple "save" or "move" script could be tampered with or could fail silently.

Management Overhead:
Who maintains the Python script or the scheduled task? What happens if the legacy OS is updated (unlikely but possible)? This solution creates technical debt.

SQL Database Misuse:
A relational database like SQL is not the ideal tool for storing massive volumes of sequential log data. It is inefficient for write-heavy operations and makes searching and analysis difficult compared to a tool built for logs, like a SIEM.

B. Configure event-based triggers to export the logs at a threshold.

Why it's incorrect:
This solution is reactive and incomplete.

Not Proactive:
An event-based trigger (e.g., "export logs when disk space is 80% full") risks missing data. What if the disk fills up rapidly due to an attack? The trigger might not fire in time, and logs could be overwritten.

Incomplete Export:
It does not guarantee a continuous, complete export of all logs. It only exports when a specific event occurs, which could lead to gaps in the log record.

Where to Export?
This option doesn't specify a secure destination. It might just export to another location on the same vulnerable legacy system, solving nothing.

Conclusion:
The temporary local fix is a good first step but is not robust. The security engineer must implement a solution that moves the logs to a secure, centralized, and professionally managed platform. A SIEM is the industry-standard tool for exactly this purpose. It ensures compliance with the 120-day retention policy, provides enhanced security analysis capabilities, and is a more reliable and sustainable solution than any custom-built script or local task. Therefore, Option C is the correct and most effective choice.

A developer needs to improve the cryptographic strength of a password-storage component in a web application without completely replacing the crypto-module. Which of the following is the most appropriate technique?

A. Key splitting

B. Key escrow

C. Key rotation

D. Key encryption

E. Key stretching

E.   Key stretching

Explanation:

Why E is Correct:
Key stretching is a technique specifically designed to strengthen weak passwords, such as those entered by users. It works by taking a password and passing it through a computationally intensive algorithm (like PBKDF2, bcrypt, or Argon2) that requires a significant amount of time and resources to compute. This dramatically increases the effort required for an attacker to perform a brute-force or dictionary attack, as each guess must go through the same slow process. This can be implemented on top of the existing hashing mechanism (e.g., moving from a single SHA-256 hash to PBKDF2 with SHA-256 and a high iteration count) without necessarily replacing the entire underlying cryptographic module.

Why A is Incorrect:
Key splitting involves dividing a cryptographic key into multiple parts (shards) that are distributed to different entities. This is used for securing keys and enforcing control, not for strengthening the cryptographic process of password derivation.

Why B is Incorrect:
Key escrow is the process of depositing a cryptographic key with a trusted third party to be stored for emergency access (e.g., by law enforcement). This is a governance and recovery mechanism, not a technique for improving cryptographic strength.

Why C is Incorrect:
Key rotation is the practice of retiring an encryption key and replacing it with a new one at regular intervals. This is a vital practice for limiting the blast radius of a potential key compromise but does not inherently make the algorithm used to derive a key from a password any stronger. The password-to-key process could still be weak and vulnerable to attack.

Why D is Incorrect:
Key encryption (or key wrapping) is the process of encrypting one key with another key. This is used for secure key storage and transmission. While the stored password hashes should be encrypted at rest, this is a separate control. The core weakness of simple password hashing is the speed of the hashing operation, which key encryption does not address.

Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It specifically addresses cryptographic techniques and their appropriate application, focusing on secure password storage mechanisms as outlined in best practices and standards like NIST SP 800-63B.

A security engineer performed a code scan that resulted in many false positives. The security engineer must find a solution that improves the quality of scanning results before application deployment. Which of the following is the best solution?

A. Limiting the tool to a specific coding language and tuning the rule set

B. Configuring branch protection rules and dependency checks

C. Using an application vulnerability scanner to identify coding flaws in production

D. Performing updates on code libraries before code development

A.   Limiting the tool to a specific coding language and tuning the rule set

Explanation:

Why A is Correct:
This is the most direct and effective solution to the specific problem of "many false positives" from a code scan. Static Application Security Testing (SAST) tools are notorious for generating false positives, which can overwhelm developers and lead to real issues being ignored.

Limiting to a specific language:
SAST tools perform best when they are optimized for a particular language's syntax and common pitfalls. Running a tool configured for multiple languages against a codebase written primarily in one language can trigger irrelevant rules and generate false positives.

Tuning the rule set:
This is the critical step for reducing false positives. It involves customizing the tool's rules to match the application's specific framework, libraries, and architecture. This can include:

Disabling rules that are not relevant to the project.

Adjusting the severity of certain findings.

Creating custom rules to ignore known benign patterns specific to the codebase.

Providing the tool with paths to custom libraries so it can accurately track data flow.

Tuning transforms a generic scanner into a precise tool tailored to the environment, dramatically improving the signal-to-noise ratio.
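The tuning steps above can be sketched as a simple post-processing filter over scan findings. Everything here is hypothetical: the `Finding` structure, the rule IDs, and the choice of Java as the primary language stand in for whatever the real SAST tool and codebase use. In practice this logic would live in the scanner's own rule-set configuration rather than in a wrapper script.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    language: str
    severity: str
    path: str

# Hypothetical tuning policy: scan only the project's primary language and
# suppress rule IDs that were reviewed and confirmed benign in this codebase.
PRIMARY_LANGUAGE = "java"
SUPPRESSED_RULES = {"JAVA-XSS-102"}

def tune(findings: list[Finding]) -> list[Finding]:
    """Drop findings for other languages and findings from suppressed rules."""
    return [f for f in findings
            if f.language == PRIMARY_LANGUAGE
            and f.rule_id not in SUPPRESSED_RULES]
```

The effect is exactly the signal-to-noise improvement described above: developers see only findings that are relevant to their language and not already triaged as benign.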

Why B is Incorrect:
Configuring branch protection rules (e.g., requiring pull requests and approvals before merging) and dependency checks (SCA - Software Composition Analysis) are excellent DevOps security practices. However, they address different problems. Branch protection enforces process, and dependency checks find vulnerabilities in third-party libraries. Neither practice directly reduces the false positive rate of a SAST tool scanning custom code for flaws.

Why C is Incorrect:
Using an application vulnerability scanner (DAST - Dynamic Application Security Testing) in production is a reactive measure. It finds vulnerabilities in a running application after it has been deployed. The question is about improving the scan results before deployment. Furthermore, running a DAST tool does not fix the root cause of the poor results from the SAST (code scan) tool; it simply uses a different, later-stage tool to find a different class of issues.

Why D is Incorrect:
Updating code libraries is a crucial maintenance activity for patching known vulnerabilities in dependencies (addressed by SCA tools). However, it has no bearing on the accuracy of a SAST tool scanning the company's own custom code for logical flaws and coding errors. The false positives are generated by the tool's analysis of the code structure, not by the version of the libraries used during development.

Reference:
This question falls under Domain 2.0: Security Operations, specifically concerning security testing in the development lifecycle and the integration and management of tools like SAST to improve software security. It also touches on the analytical skill of selecting the correct mitigation for a given problem.

Audit findings indicate several user endpoints are not utilizing full disk encryption. During the remediation process, a compliance analyst reviews the testing details for the endpoints and notes the endpoint device configuration does not support full disk encryption. Which of the following is the most likely reason the device must be replaced?

A. The HSM is outdated and no longer supported by the manufacturer

B. The vTPM was not properly initialized and is corrupt.

C. The HSM is vulnerable to common exploits and a firmware upgrade is needed

D. The motherboard was not configured with a TPM from the OEM supplier

E. The HSM does not support sealing storage

D.   The motherboard was not configured with a TPM from the OEM supplier

Explanation:

Why D is Correct:
Full disk encryption (FDE) solutions such as BitLocker on Windows rely, in their standard enterprise configuration, on a Trusted Platform Module (TPM): a dedicated cryptographic processor chip soldered onto the computer's motherboard.

If the audit finding states that the device configuration "does not support full disk encryption," the most fundamental and common reason is that the motherboard lacks this specific hardware component entirely.

Older computers or some very low-cost models were manufactured and sold without a TPM chip. Since the TPM is a physical hardware requirement, it cannot be added via software. The only remediation for such a device is to replace it with hardware that meets the compliance requirement (i.e., a motherboard with a TPM).
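As a quick way to confirm the finding on a given machine, an analyst can check whether the OS exposes a TPM device at all. The sketch below assumes a Linux endpoint, where the kernel exposes a TPM as /dev/tpm0 (and a TPM 2.0 resource manager as /dev/tpmrm0); on Windows the equivalent check would be the Get-Tpm PowerShell cmdlet.

```python
import os

def has_tpm_device() -> bool:
    """Best-effort check for a TPM on a Linux endpoint.

    If neither device node exists, the board either shipped without a TPM
    or the TPM is disabled in firmware -- a firmware check is still needed
    to distinguish the two cases.
    """
    return any(os.path.exists(p) for p in ("/dev/tpm0", "/dev/tpmrm0"))
```

If the TPM is merely disabled, enabling it in UEFI settings resolves the finding; if the chip is genuinely absent, only hardware replacement does.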

Why A, C, and E are Incorrect (HSM):
These options incorrectly refer to an HSM (Hardware Security Module). An HSM is a high-performance, external, or PCIe-based network device used to manage and protect cryptographic keys for servers, certificate authorities, and critical infrastructure. HSMs are not used for standard endpoint full-disk encryption. Endpoints use a TPM, which is a much smaller, cheaper, and less powerful cryptographic co-processor designed specifically for this purpose. Confusing TPM and HSM is a common distractor in exam questions.

Why B is Incorrect (vTPM):
A vTPM (virtual TPM) is a software-based implementation of a TPM used in virtual machines to provide the same functionality. The question is about physical "user endpoints" (e.g., laptops, desktops). A vTPM is not relevant to the physical hardware of an endpoint device. Furthermore, if a vTPM were corrupt, it could potentially be re-initialized or re-provisioned through software or hypervisor management, not necessarily requiring a full hardware replacement.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 4.0: Governance, Risk, and Compliance. It tests knowledge of hardware security capabilities (TPM vs. HSM) and the practical implications of enforcing compliance policies that have specific hardware requirements.


This study tool turned my anxiety into confidence


As a cybersecurity professional with several years of experience, I thought I was well-prepared for the CompTIA CAS-005 exam. However, once I started studying, I quickly realized this was not just another certification test - it demanded deep, practical understanding of complex security architectures and risk management strategies. That's where these CAS-005 practice questions made all the difference.

What impressed me most was how the questions mirrored the exam's focus on real-world scenarios. Instead of simple recall questions, they presented multi-layered challenges that required me to analyze security frameworks, evaluate enterprise risks, and recommend comprehensive solutions - just like I would in my actual job. The explanations were incredibly thorough, helping me understand not just what the right answer was, but why it was correct and how it applied to different organizational contexts.

By working through these practice questions, I developed the critical thinking skills needed to approach the exam with confidence. They didn't just test my knowledge - they trained me to think like a security architect. When exam day came, I recognized the same style of complex, scenario-based questions I had been practicing with. This resource was absolutely essential in bridging the gap between my experience and what the certification demanded.

Michael R., Security Architect