CompTIA CAS-005 Practice Test

Prepare smarter and boost your chances of success with our CompTIA CAS-005 practice test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms show that individuals who use a CAS-005 practice exam are 40–50% more likely to pass on their first attempt.

Start practicing today and take the fast track to becoming CompTIA CAS-005 certified.

Updated on: 11-Sep-2025
103 Questions


A company wants to implement hardware security key authentication for accessing sensitive information systems. The goal is to prevent unauthorized users from gaining access with a stolen password. Which of the following models should the company implement to best solve this issue?

A. Rule based

B. Time-based

C. Role based

D. Context-based

B.   Time-based

Explanation:

Why B is Correct:
The question describes the implementation of hardware security keys (e.g., YubiKey, Google Titan) to prevent access with a stolen password. This is a classic description of multi-factor authentication (MFA) where the hardware key provides the "something you have" factor.

The most common protocol used by these hardware keys for generating the one-time passcode is the Time-based One-Time Password (TOTP) algorithm. This algorithm generates a code that is synchronized with the authentication server and changes every 30-60 seconds. Even if a password is stolen, an attacker cannot access the system without physically possessing the hardware key that generates the current, valid code. Therefore, the company is implementing a time-based authentication model.
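To make the mechanism concrete, below is a minimal TOTP sketch in Python following RFC 6238 (HMAC-SHA1, a 30-second step, and 6-digit codes); the base32 secret shown is a placeholder example, not a real key.

```python
# A minimal TOTP sketch per RFC 6238: HMAC-SHA1, 30-second step, 6 digits.
# The base32 secret below is a placeholder example, not a real key.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                # time-based moving factor
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # code valid only for the current 30-second window
```

Because the code depends on the current time window, a stolen password alone is useless: without the token holding the shared secret, an attacker cannot produce the current value.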

Why A is Incorrect:
Rule-based access control involves making access decisions based on a set of predefined rules or filters (e.g., "Allow access if the request comes from the HR network segment"). It is a type of access control model, not an authentication factor model. It does not describe how the one-time code from a hardware key is generated.

Why C is Incorrect:
Role-based access control (RBAC) is an authorization model where access permissions are assigned to roles, and users are assigned to those roles. It governs what a user can do after they are authenticated. The question is specifically about the authentication process (proving identity), not authorization (assigning permissions).

Why D is Incorrect:
Context-based authentication is a more advanced form of MFA that considers additional contextual factors (e.g., geographic location, time of day, network reputation, device posture) when making an authentication decision. While a hardware key could be part of a context-based system, the core functionality described—using a hardware token to generate a one-time code—is fundamentally time-based. Context-based would be a broader, more adaptive model that might use time-based codes as one input.

Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It tests the understanding of authentication protocols and factors, specifically the operation of hardware security tokens and the underlying time-based model that makes them secure.

A systems administrator wants to use existing resources to automate reporting from disparate security appliances that do not currently communicate. Which of the following is the best way to meet this objective?

A. Configuring an API Integration to aggregate the different data sets

B. Combining back-end application storage into a single, relational database

C. Purchasing and deploying commercial off-the-shelf aggregation software

D. Migrating application usage logs to on-premises storage

A.   Configuring an API Integration to aggregate the different data sets

Explanation:

Why A is Correct:
The core requirements are to automate reporting from disparate security appliances that do not currently communicate, using existing resources. APIs (Application Programming Interfaces) are the standard method for enabling different software systems to communicate and share data. Most modern security appliances (firewalls, IDS/IPS, EDR, etc.) have APIs designed specifically for this purpose—to extract logs, alerts, and configuration data.

Automation:
By writing scripts (e.g., in Python) that call these APIs, the systems administrator can automatically pull data from each disparate appliance on a scheduled basis without manual intervention.

Aggregation:
The data collected from these various APIs can then be parsed, normalized, and aggregated into a single format for reporting (e.g., fed into a dashboard, a SIEM, or a custom database). This approach directly leverages existing appliance capabilities (their APIs) and can often be implemented with existing scripting skills and resources.
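As an illustration, the following Python sketch pulls alerts from two hypothetical appliance REST endpoints and normalizes them into one schema; the URLs, bearer-token authentication, and field names are assumptions, not any specific vendor's API.

```python
# A hedged aggregation sketch: each appliance is assumed to expose a REST
# endpoint returning JSON records. The URLs, bearer-token auth, and field
# names are hypothetical placeholders, not a specific vendor's API.
import requests

APPLIANCES = {
    "firewall": "https://fw.example.internal/api/v1/alerts",
    "ids": "https://ids.example.internal/api/v1/events",
}

def pull_alerts(name: str, url: str, token: str) -> list[dict]:
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    # Normalize each record into one common schema for unified reporting.
    return [
        {
            "source": name,
            "timestamp": record.get("timestamp"),
            "severity": record.get("severity"),
            "summary": record.get("message"),
        }
        for record in resp.json()
    ]

def aggregate(token: str) -> list[dict]:
    combined = []
    for name, url in APPLIANCES.items():
        combined.extend(pull_alerts(name, url, token))
    return sorted(combined, key=lambda r: r["timestamp"] or "")
```

Run on a schedule (e.g., via cron) and fed into a dashboard or SIEM, a script like this delivers automated reporting using only existing resources.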

Why B is Incorrect:
Combining back-end application storage into a single relational database is often not feasible. The appliances likely use different, proprietary storage formats and databases. Directly combining these back-ends would require deep access to each system, risk corruption, and is not a standard or supported method for integration. APIs are the intended, supported way to access this data.

Why C is Incorrect:
Purchasing commercial off-the-shelf (COTS) aggregation software (like a SIEM or a dedicated log management tool) is a very common and effective solution. However, the question specifies the administrator wants to use existing resources. Purchasing new software contradicts this requirement, as it involves acquiring new resources (budget, software, and potentially hardware).

Why D is Incorrect:
Migrating logs to on-premises storage is a data consolidation step, but it does not solve the communication or automation problem. You would still have logs in different formats from different systems sitting in the same storage location. Without a way to parse, normalize, and aggregate them (a function an API integration or a SIEM performs), you cannot automate reporting from them. This is just moving the data, not making it usable for automated reporting.

Reference:
This question falls under Domain 2.0: Security Operations. It tests the practical knowledge of how to integrate security tools and automate processes, a key skill for security analysts and engineers. Using APIs is the modern, scalable, and resource-efficient method for achieving this integration.

A cloud engineer needs to identify appropriate solutions to:

• Provide secure access to internal and external cloud resources.

• Eliminate split-tunnel traffic flows.

• Enable identity and access management capabilities.

Which of the following solutions are the most appropriate? (Select two).

A. Federation

B. Microsegmentation

C. CASB

D. PAM

E. SD-WAN

F. SASE

A.   Federation
F.   SASE

Explanation:
Let's break down the requirements and see which solutions best address them:

Provide secure access to internal and external cloud resources:
This requires a solution that can securely connect users to applications, whether they are in a corporate data center, a public cloud (IaaS/PaaS), or a SaaS application (like Office 365).

Eliminate split-tunnel traffic flows:
Split tunneling allows some user traffic to go directly to the internet while other traffic goes through the corporate network. To eliminate this, all user traffic must be routed through a central security checkpoint for inspection and enforcement.

Enable identity and access management capabilities:
The solution must integrate strongly with identity systems to enforce access policies based on user identity, group, and other context.

Why F is Correct (SASE):
Secure Access Service Edge (SASE) is the overarching architecture that perfectly meets all three requirements.

It provides secure, identity-driven access to all resources (internal and cloud-based) from anywhere.

A core principle of SASE is to funnel all user traffic through a cloud-based security stack (SWG, CASB, ZTNA, FWaaS), which eliminates split tunneling by ensuring all traffic is inspected.

It has identity and access management as a foundational component, using user identity as the key for applying security policies.

Why A is Correct (Federation):
Federation (e.g., using SAML, OIDC) is a critical identity capability that integrates with a SASE solution to fulfill the IAM requirement.

It allows users to authenticate once with a central identity provider (like Azure AD) and gain seamless access to multiple cloud services and applications without needing separate passwords.

This provides the strong identity and access management foundation that a SASE platform uses to make access decisions. SASE relies on federated identity to know who the user is before applying policy.
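For illustration, a minimal OIDC token-validation sketch using the PyJWT library is shown below; the issuer URL and client ID are hypothetical, and the identity provider is assumed to publish standard OIDC discovery metadata.

```python
# A minimal OIDC ID-token validation sketch using PyJWT.
# ISSUER and CLIENT_ID are hypothetical; the IdP is assumed to publish
# standard OIDC discovery metadata at the well-known endpoint.
import jwt  # pip install pyjwt[crypto]
import requests

ISSUER = "https://idp.example.com"
CLIENT_ID = "sase-portal"

def validate_id_token(id_token: str) -> dict:
    meta = requests.get(f"{ISSUER}/.well-known/openid-configuration", timeout=10).json()
    jwks_client = jwt.PyJWKClient(meta["jwks_uri"])
    signing_key = jwks_client.get_signing_key_from_jwt(id_token)
    # One call verifies the signature, issuer, audience, and expiry.
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )
```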

Why the Other Options Are Incorrect:

B. Microsegmentation:
This is for controlling east-west traffic between workloads within a data center or cloud network. It does not address secure user access to resources or internet-bound traffic flows.

C. CASB (Cloud Access Security Broker):
A CASB is a component that can be part of a SASE solution. It secures access to SaaS applications and provides data security for cloud services. However, by itself, it does not eliminate split tunneling for all internet traffic or provide secure access to internal resources—it's focused on cloud services. SASE is the broader architecture that incorporates CASB functionality.

D. PAM (Privileged Access Management):
PAM is used to secure, manage, and monitor access for privileged accounts (e.g., administrators). It is a critical security solution but is focused on a specific set of users and systems, not the general workforce's secure access to all cloud resources.

E. SD-WAN (Software-Defined Wide Area Network):
SD-WAN is a technology for intelligently routing traffic between branch offices and data centers. It optimizes network performance but is not a security solution. In fact, traditional SD-WAN can create split tunnels. SASE often incorporates SD-WAN capabilities but adds the crucial security and identity layer.

Reference:
This question falls under Domain 1.0: Security Architecture. It tests the understanding of modern secure access architectures, specifically how SASE converges networking and security functions with identity to address the challenges of cloud-centric and remote work environments. Federation is the key identity component that enables this.

A company detects suspicious activity associated with external connections. Security detection tools are unable to categorize this activity. Which of the following is the best solution to help the company overcome this challenge?

A. Implement an Interactive honeypot

B. Map network traffic to known IoCs.

C. Monitor the dark web

D. Implement UEBA

D.    Implement UEBA

Explanation:

Why D is Correct:
The core challenge is that "security detection tools are unable to categorize" the "suspicious activity." This indicates that the activity does not match any known signatures, patterns, or Indicators of Compromise (IoCs). This is a classic scenario for User and Entity Behavior Analytics (UEBA).

UEBA uses machine learning and advanced analytics to establish a baseline of normal behavior for users, hosts, and network entities.

It then detects anomalies that deviate from this baseline, without relying on known threat signatures.

This makes it exceptionally effective at identifying novel attacks, insider threats, and suspicious activity that evades traditional, signature-based detection tools. It can categorize unknown activity based on its anomalous nature.
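A toy sketch of the core UEBA idea follows: learn a per-user baseline, then flag large deviations. The single "logins per hour" feature and the three-standard-deviation threshold are illustrative assumptions; real UEBA products model many features with machine learning.

```python
# A toy illustration of the UEBA idea: learn a per-user baseline, then
# flag large deviations. The single "logins per hour" feature and the
# 3-standard-deviation threshold are illustrative assumptions only.
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, mean: float, stdev: float, z: float = 3.0) -> bool:
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z  # no signature needed, only deviation

logins_per_hour = [4, 5, 3, 6, 4, 5, 4]   # observed normal behavior
mean, stdev = build_baseline(logins_per_hour)
print(is_anomalous(42, mean, stdev))       # True: far outside the baseline
```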

Why A is Incorrect:
An interactive honeypot is a decoy system designed to attract and engage attackers to study their techniques. While it can provide valuable intelligence on new attack methods, it is a proactive research tool, not a direct solution for detecting and categorizing ongoing, suspicious activity on the production network. The suspicious activity is already happening; a honeypot wouldn't help analyze it.

Why B is Incorrect:
Mapping network traffic to known IoCs is the function of traditional signature-based tools like IDS/IPS and many SIEM rules. The problem states that these tools have already failed to categorize the activity, meaning it does not match any known IoCs. Therefore, this approach will not help overcome the challenge.

Why C is Incorrect:
Monitoring the dark web is a strategic intelligence-gathering activity. It is used to find stolen credentials, learn about upcoming attacks, or discover if company data is for sale. It is not a tactical solution for analyzing and categorizing specific, ongoing suspicious network activity within the company's environment.

Reference:
This question falls under Domain 2.0: Security Operations. It tests the knowledge of advanced security analytics tools and their appropriate application. UEBA is specifically designed to address the limitation of traditional tools by using behavioral analysis to detect unknown threats and anomalous activity.

A network engineer must ensure that always-on VPN access is enabled but restricted to company assets. Which of the following best describes what the engineer needs to do?

A. Generate device certificates using the specific template settings needed

B. Modify signing certificates in order to support IKE version 2

C. Create a wildcard certificate for connections from public networks

D. Add the VPN hostname as a SAN entry on the root certificate

A.    Generate device certificates using the specific template settings needed

Explanation:

Why A is Correct:
The requirement has two key parts:

Always-on VPN:
This means the VPN connection is established automatically, typically at device startup or user logon, without user interaction.

Restricted to company assets:
This means only devices that are owned and managed by the company should be able to connect.

The best way to meet both requirements is through device certificate authentication. In this model:

Each company-issued device is provisioned with a unique device certificate issued by the company's own internal public key infrastructure (PKI).

The VPN gateway is configured to only accept connection attempts that present a valid certificate from this specific PKI.

The "always-on" feature can be configured to use this certificate for automatic authentication without requiring user input.

This effectively restricts access to devices that possess this certificate (i.e., company assets). Non-company devices will lack the required certificate and be unable to connect.

The network engineer would need to ensure the certificate templates in the PKI are configured correctly to issue certificates with the necessary properties (e.g., client authentication EKU) for this purpose.
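A minimal sketch of that issuance step, using Python's cryptography library, is shown below. The inline CA is a throwaway stand-in for the company's real PKI, and the device name and lifetimes are examples; the key detail is the client-authentication EKU.

```python
# A sketch of issuing a device certificate with the client-authentication
# EKU using the "cryptography" library. The inline CA is a throwaway
# stand-in for the company's real PKI; names and lifetimes are examples.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

# Throwaway issuing CA (a real deployment uses the org's existing PKI).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Device CA")])
now = datetime.datetime.utcnow()
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Unique per-device certificate; the client-auth EKU is what the VPN
# gateway checks before accepting the connection.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "LAPTOP-0042.corp.example")]))
    .issuer_name(ca_cert.subject)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]), critical=False)
    .sign(ca_key, hashes.SHA256())
)
print(device_cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value)
```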

Why B is Incorrect:
Modifying signing certificates for IKEv2 relates to the cryptographic negotiation of the VPN tunnel itself. While IKEv2 is a common protocol that supports certificate authentication, this option does not address the core requirement of restricting access to company assets. It is a step in configuring the protocol, not the access control method.

Why C is Incorrect:
A wildcard certificate is used to secure multiple subdomains under a single domain name (e.g., *.example.com). It is used for TLS/SSL encryption for web services, not for client device authentication. Using a wildcard certificate for VPN clients would be a major security anti-pattern, as the same certificate would be on every device, making it impossible to distinguish or revoke individual devices. It violates the principle of unique device identity.

Why D is Incorrect:
Adding the VPN hostname as a Subject Alternative Name (SAN) on the root certificate is incorrect and nonsensical. The root certificate is the top-level, trusted anchor of a PKI hierarchy and should be kept offline and secure. Server certificates (not root certificates) for the VPN gateway itself contain the SAN field to list the DNS names they are valid for (e.g., vpn.company.com). This is important for ensuring clients are connecting to the legitimate server but does nothing to authenticate or restrict the client devices that are connecting.

Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It tests the practical application of PKI and certificate-based authentication to achieve specific security goals like device compliance and automated access in a zero-trust framework.

A security analyst received a notification from a cloud service provider regarding an attack detected on a web server. The cloud service provider shared the following information about the attack:

• The attack came from inside the network.

• The attacking source IP was from the internal vulnerability scanners.

• The scanner is not configured to target the cloud servers.

Which of the following actions should the security analyst take first?

A. Create an allow list for the vulnerability scanner IPs in order to avoid false positives

B. Configure the scan policy to avoid targeting an out-of-scope host

C. Set network behavior analysis rules

D. Quarantine the scanner sensor to perform a forensic analysis

D.   Quarantine the scanner sensor to perform a forensic analysis

Explanation:

Why D is Correct:
The scenario describes a highly anomalous and potentially severe situation. The key clues are:

The attack came from an internal IP address assigned to a vulnerability scanner.

The scanner is not configured to target the cloud servers.

This indicates the scanner itself is likely compromised. An attacker has likely gained control of the vulnerability scanner and is using its capabilities, permissions, and internal network position to launch attacks against other systems (in this case, cloud servers).

The first and most critical action is to contain the threat. Quarantining the scanner sensor immediately isolates it from the network, preventing it from causing further damage or being used to pivot to other systems. After containment, a forensic analysis is required to determine how it was compromised, what the attacker did, and what data might have been accessed. This is an incident response priority.

Why A is Incorrect:
Creating an allow list for the scanner's IP would be a disastrous action. It would effectively tell the security systems to ignore all malicious activity originating from the compromised scanner, allowing the attacker to operate with impunity. This is the opposite of what should be done.

Why B is Incorrect:
Reconfiguring the scan policy is a corrective action for a misconfiguration. The problem is not a misconfiguration; the problem is that the scanner itself is behaving maliciously against its configuration. This implies the scanner is under external control, making reconfiguration irrelevant until the device itself is investigated and secured.

Why C is Incorrect:
Setting network behavior analysis rules is a good proactive measure for detecting anomalies in the future. However, the attack has already been detected. This is a reactive incident response scenario, and the immediate priority is to stop the active attack, not to create new detection rules. This can be done after the compromised system is contained.

Reference:
This question falls under Domain 2.0: Security Operations, specifically focusing on incident response procedures. It tests the understanding of the incident response lifecycle, where the first steps are always to contain and then eradicate a threat. The anomalous behavior of a trusted security tool is a major red flag that indicates a compromise, requiring immediate isolation.

A company isolated its OT systems from other areas of the corporate network. These systems are required to report usage information over the internet to the vendor. Which of the following best reduces the risk of compromise or sabotage? (Select two).

A. Implementing allow lists

B. Monitoring network behavior

C. Encrypting data at rest

D. Performing boot Integrity checks

E. Executing daily health checks

F. Implementing a site-to-site IPSec VPN

A.   Implementing allow lists
F.    Implementing a site-to-site IPSec VPN

Explanation:
The scenario involves Operational Technology (OT) systems (e.g., industrial control systems, SCADA) that are isolated from the corporate network but must send usage data to an external vendor over the internet. The goal is to reduce the risk of compromise or sabotage.

Why A is Correct (Implementing allow lists):
For OT systems, which often have known and fixed behavior, allow lists (whitelisting) are a highly effective security control.

Network Allow Lists:
At the firewall, configure rules to only allow the OT systems to communicate with the specific vendor IP addresses and ports required for reporting. Block all other outbound and inbound traffic. This drastically reduces the attack surface.

Application/Execution Allow Lists:
On the OT systems themselves, use application allow listing to prevent unauthorized software from executing, which is a key defense against malware.
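As a simple illustration of the network allow-list logic, the Python sketch below permits the OT segment to reach only a hypothetical vendor endpoint and denies everything else; the network 203.0.113.0/28 and port 443 are placeholders.

```python
# A tiny egress allow-list sketch mirroring the firewall logic: the OT
# segment may reach only the vendor's reporting endpoint. The network
# 203.0.113.0/28 and port 443 are hypothetical placeholders.
import ipaddress

ALLOWED_FLOWS = {
    (ipaddress.ip_network("203.0.113.0/28"), 443),  # vendor reporting API only
}

def is_permitted(dst_ip: str, dst_port: int) -> bool:
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net and dst_port == port for net, port in ALLOWED_FLOWS)

print(is_permitted("203.0.113.5", 443))    # True: the vendor endpoint
print(is_permitted("198.51.100.7", 443))   # False: deny by default
```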

Why F is Correct (Implementing a site-to-site IPSec VPN):
The requirement is to send data "over the internet." Transmitting this data in cleartext would expose it to interception and potentially allow for sabotage (e.g., malicious injection of false commands or data). An IPSec VPN creates an encrypted tunnel between the OT network and the vendor's network.

This ensures the confidentiality and integrity of the data in transit, protecting it from eavesdropping or modification.

It can also provide mutual authentication, ensuring the OT systems are only talking to the legitimate vendor and not an impersonator.

Why the Other Options Are Incorrect:

B. Monitoring network behavior:
While important, this is a detective control, not a preventive one. It can help you discover an attack in progress but does nothing to reduce the risk of the initial compromise or sabotage. Prevention (allow lists, encryption) is prioritized over pure detection in this context.

C. Encrypting data at rest:
This protects data stored on the OT systems. The primary risk described is related to data being transmitted over the internet to the vendor. Data at rest encryption does not address the network transmission risk.

D. Performing boot integrity checks:
This (e.g., using UEFI Secure Boot) ensures that a system boots using only trusted software. It's a great control for preventing persistent low-level malware. However, it does not secure the network pathway to the vendor, which is the explicit vulnerability in the scenario.

E. Executing daily health checks:
This is an operational maintenance task. Like monitoring, it can help identify problems but is not a direct security control that mitigates the risk of external network-based compromise or sabotage during data transmission.

Reference:
This question falls under Domain 1.0: Security Architecture, specifically covering secure network design for specialized environments like OT/ICS. It tests the knowledge of applying fundamental security principles (least privilege via allow lists, securing data in transit via VPNs) to a high-stakes scenario.

Company A acquired Company B and needs to determine how the acquisition will impact the attack surface of the organization as a whole. Which of the following is the best way to achieve this goal? (Select two).

Implementing DLP controls preventing sensitive data from leaving Company B's network

A. Documenting third-party connections used by Company B

B. Reviewing the privacy policies currently adopted by Company B

C. Requiring data sensitivity labeling for all files shared with Company B

D. Forcing a password reset requiring more stringent passwords for users on Company B's network

E. Performing an architectural review of Company B's network

A.   Documenting third-party connections used by Company B
E.   Performing an architectural review of Company B's network

Explanation:
The goal is to understand how the acquisition impacts the overall attack surface. The attack surface is the sum of all potential vulnerabilities and entry points an attacker could exploit. Company A needs to discover and assess all the new components Company B is bringing into the organization.

Why E is Correct (Performing an architectural review of Company B's network):
This is the most comprehensive and direct method to understand the new attack surface. An architectural review would involve mapping:

Network segments and trust relationships.

Internet-facing assets (web servers, VPN gateways).

Internal critical servers and databases.

Security control chokepoints (firewalls, IDS/IPS).

Cloud environments and SaaS applications used.

This review provides a complete picture of the technical attack surface being acquired.

Why A is Correct (Documenting third-party connections used by Company B):
Third-party connections (e.g., vendor VPNs, API integrations, supply chain links) are a major and often overlooked part of an organization's attack surface. A breach at a third party can easily become a breach at Company B, and now Company A. Documenting these connections is crucial for understanding:

What external entities have access to the network.

The scope of that access.

The security posture of those third parties.

This reveals the external supply chain and partnership aspect of the attack surface.

Why the Other Options Are Incorrect:

Implementing DLP controls...
This is a remediation or risk mitigation action, not an assessment action. The question asks for the best way to determine the impact on the attack surface (i.e., to assess and discover), not to immediately fix it. You must first understand the surface before you can protect it.

B. Reviewing the privacy policies...
While important for GDPR/CCPA compliance and understanding data handling practices, privacy policies are high-level documents. They do not provide the technical details needed to map specific vulnerabilities, entry points, or network connections that constitute an attack surface.

C. Requiring data sensitivity labeling...
This is another remediation control for data governance and protection (likely to be done after the assessment). It does not help in discovering what the attack surface is; it helps in protecting the data once the landscape is understood.

D. Forcing a password reset...
This is a specific hardening technique for credential security. It addresses one very specific potential vulnerability but does nothing to reveal the entirety of the new network architecture, applications, and third-party connections that Company A is now responsible for.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 4.0: Governance, Risk, and Compliance. It tests the process of security due diligence during a merger & acquisition (M&A), focusing on the critical steps of discovery and assessment to understand risk exposure. The first steps are always to assess the architecture and document connections.

Users must accept the terms presented in a captive portal when connecting to a guest network. Recently, users have reported that they are unable to access the internet after joining the network. A network engineer observes the following:

• Users should be redirected to the captive portal.

• The captive portal runs TLS 1.2.

• Newer browser versions encounter security errors that cannot be bypassed.

• Certain websites cause unexpected redirects.

Which of the following most likely explains this behavior?

A. The TLS ciphers supported by the captive portal are deprecated

B. Employment of the HSTS setting is proliferating rapidly.

C. Allowed traffic rules are causing the NIPS to drop legitimate traffic

D. An attacker is redirecting supplicants to an evil twin WLAN.

B.   Employment of the HSTS setting is proliferating rapidly.

Explanation:
The symptoms point directly to a problem with the captive portal's security configuration interacting with modern browser security features:

The Problem:
Users can't access the internet because they aren't reaching the captive portal. Newer browsers show security errors that cannot be bypassed.

The Key Clue:
The captive portal runs TLS 1.2. This is a secure protocol, but the issue isn't the protocol version itself.

The Root Cause:
HTTP Strict Transport Security (HSTS) is a web security policy mechanism that forces a web browser to interact with a website only over secure HTTPS connections. Crucially, it tells browsers to never allow a user to bypass certificate warnings.

A captive portal works by intercepting the user's web requests and redirecting them to the portal page. For HTTPS traffic, this interception requires presenting a self-signed or non-public certificate, which browsers with HSTS preloads or policies for common sites will reject outright, with no option for the user to proceed.

The observation that "certain websites cause unexpected redirects" aligns with HSTS; a browser that has an HSTS policy for example.com will refuse to connect to a captive portal that is trying to intercept and redirect that specific traffic because it cannot verify the portal's certificate authority.
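One way to observe an HSTS policy in practice is to inspect the response header a site returns; a short sketch (assuming outbound HTTPS access, with an example domain) is below.

```python
# Inspect the HSTS header a site returns (assumes outbound HTTPS access;
# the domain is an example). A preloaded or cached policy like this is
# what forbids the browser from accepting the portal's certificate.
import requests

resp = requests.get("https://example.com", timeout=10)
print(resp.headers.get("Strict-Transport-Security"))
# A typical value looks like: "max-age=31536000; includeSubDomains; preload"
```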

Why B is Correct:
The widespread adoption and preloading of HSTS by major websites (its "proliferation") is the most likely reason that a previously working captive portal is now failing. Modern browsers are becoming increasingly strict about enforcing HSTS policies, making traditional captive portal techniques obsolete.

Why the Other Options Are Incorrect:

A. The TLS ciphers supported by the captive portal are deprecated:
While deprecated ciphers can cause errors, these errors are usually more descriptive and often can be bypassed by the user. The fact that the errors cannot be bypassed is the critical detail that points to HSTS enforcement, not a weak cipher.

C. Allowed traffic rules are causing the NIPS to drop legitimate traffic:
A Network Intrusion Prevention System (NIPS) dropping traffic could prevent access, but it would not cause security errors in the browser. The browser error indicates a TLS/SSL handshake or certificate trust issue between the client and the portal, not a silent packet drop by a network device.

D. An attacker is redirecting supplicants to an evil twin WLAN:
An evil twin attack could explain redirects and lack of access. However, it would not explain the specific symptom of security errors that cannot be bypassed in newer browsers. An evil twin would likely present a login page that mimics the real one, not a browser-level security error that blocks the page from loading entirely.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 3.0: Security Engineering and Cryptography. It tests the understanding of web security mechanisms (HSTS) and their real-world impact on network services like captive portals, requiring architects to adapt designs to evolving security standards.

A security review revealed that not all of the client proxy traffic is being captured. Which of the following architectural changes best enables the capture of traffic for analysis?

A. Adding an additional proxy server to each segmented VLAN

B. Setting up a reverse proxy for client logging at the gateway

C. Configuring a span port on the perimeter firewall to ingest logs

D. Enabling client device logging and system event auditing

C.   Configuring a span port on the perimeter firewall to ingest logs

Explanation:

Why C is Correct:
The goal is to capture client proxy traffic that is currently being missed. The most efficient and comprehensive way to capture network traffic for analysis is at a central chokepoint through which all traffic flows.

The perimeter firewall is such a chokepoint, as all traffic between the internal network and the internet must pass through it.

A SPAN port (Switch Port Analyzer) or mirror port on a network device (like a firewall or core switch) is specifically designed for this purpose. It copies all network packets seen on a source port (or entire VLAN) and sends them to a destination port where a monitoring tool (like a packet analyzer, IDS, or SIEM) is connected.

By configuring a SPAN port on the perimeter firewall, the security team can get a complete copy of all inbound and outbound traffic, ensuring no client proxy traffic is missed, regardless of which proxy server it was supposed to go to.
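As an illustration, the sketch below uses the Scapy library to summarize packets arriving on the interface cabled to the SPAN/mirror port; the interface name eth1 is an assumption.

```python
# A minimal capture sketch using the Scapy library on the interface that
# is cabled to the SPAN/mirror port; the name "eth1" is an assumption.
from scapy.all import IP, TCP, sniff

def summarize(pkt):
    # Print a one-line summary of each mirrored TCP packet for analysis.
    if IP in pkt and TCP in pkt:
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport}")

# Capture 100 packets from the mirrored traffic without storing them.
sniff(iface="eth1", prn=summarize, count=100, store=False)
```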

Why A is Incorrect:
Adding more proxy servers increases the points of failure and management complexity. If traffic isn't being captured now, it's likely because clients are bypassing the proxy or there's a misconfiguration. Adding more proxies doesn't guarantee all traffic will be forced through them. A SPAN port captures traffic regardless of whether it goes through a proxy or not.

Why B is Incorrect:
A reverse proxy is placed in front of servers (e.g., web servers) to handle incoming requests for them. It is used for load balancing, SSL termination, and security for servers. It is not used for logging outbound client traffic to the internet, which is the function of a forward proxy. This solution is aimed at the wrong direction of traffic flow.

Why D is Incorrect:
Enabling logging on client devices is a host-based solution. While it can provide valuable data, it is:

Highly inefficient:
It requires configuring and collecting logs from every single endpoint.

Less reliable:
Logs can be tampered with if a device is compromised.

Not comprehensive:
It may not capture the full network traffic data needed for deep analysis.

This approach is cumbersome and does not scale as well as a network-based solution like a SPAN port.

Reference:
This question falls under Domain 2.0: Security Operations and Domain 1.0: Security Architecture. It tests the knowledge of network monitoring techniques and the appropriate architectural solutions for gaining visibility into network traffic. Using SPAN ports for packet capture is a fundamental method for traffic analysis and intrusion detection.
