CompTIA CAS-005 Practice Test
Prepare smarter and boost your chances of success with our CompTIA CAS-005 practice test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms suggest that candidates who use a CAS-005 practice exam are 40–50% more likely to pass on their first attempt.
Start practicing today and take the fast track to becoming CompTIA CAS-005 certified.
Updated on: 11-Sep-2025 | 103 Questions
An organization wants to implement a platform to better identify which specific assets are affected by a given vulnerability. Which of the following components provides the best foundation to achieve this goal?
A. SASE
B. CMDB
C. SBoM
D. SLM
Explanation:
Why B is Correct:
A CMDB (Configuration Management Database) is a fundamental component of IT Service Management (ITSM) and security operations. It is a centralized repository that stores information about all the organization's hardware and software assets (CIs - Configuration Items) and the relationships between them.
The primary purpose of a CMDB is to provide a single source of truth for "what we have" and "how it's connected."
When a new vulnerability (e.g., a CVE) is announced, security teams can query the CMDB to quickly and accurately identify all assets that have the vulnerable software version installed or that possess the vulnerable hardware component. This allows for precise impact assessment and targeted remediation.
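To make that lookup concrete, here is a minimal sketch of the query pattern a CMDB enables, using an illustrative SQLite schema (the table, columns, and version values are hypothetical; production CMDBs such as ServiceNow expose far richer models):

```python
import sqlite3

# Toy CMDB: one table mapping assets to installed software versions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE assets (hostname TEXT, software TEXT, version TEXT)")
con.executemany(
    "INSERT INTO assets VALUES (?, ?, ?)",
    [
        ("web-01", "log4j", "2.14.1"),
        ("app-02", "log4j", "2.17.1"),
        ("db-03", "openssl", "3.0.1"),
    ],
)

# A CVE is announced for log4j versions below 2.15.0: which assets are hit?
# (String comparison works for this toy data; real tooling parses versions.)
affected = con.execute(
    "SELECT hostname FROM assets WHERE software = ? AND version < ?",
    ("log4j", "2.15.0"),
).fetchall()
print(affected)  # [('web-01',)]
```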
Why A is Incorrect:
SASE (Secure Access Service Edge) is a network architecture that combines networking and security functions (like SWG, CASB, ZTNA, FWaaS) into a single, cloud-delivered service. Its primary goal is to provide secure access to applications and data for users anywhere, not to maintain an inventory of assets for vulnerability impact analysis.
Why C is Incorrect:
An SBOM (Software Bill of Materials) is a nested inventory list that details all components, libraries, and dependencies that make up a specific piece of software. It is incredibly valuable for identifying vulnerabilities within a specific application (e.g., finding a vulnerable version of Log4j inside an app). However, an SBOM is tied to a software product, not to the organization's entire asset inventory. A CMDB would contain or reference SBOMs for the software installed on its assets, making the CMDB the more comprehensive foundation for this goal.
Why D is Incorrect:
SLM (Service Level Management) or Service Level Agreement Management is a process for defining, measuring, and managing the quality of IT services against agreed-upon targets with customers (e.g., 99.9% uptime). It is a business and operational process, not a technical component or database used for asset inventory and vulnerability mapping.
Reference:
This question falls under Domain 2.0: Security Operations, specifically focusing on security tooling and technologies that support incident response and vulnerability management. It also touches on the IT Infrastructure Library (ITIL) framework, where the CMDB is a core concept. The ability to quickly identify affected assets is critical for reducing mean time to respond (MTTR) to security threats.
A systems administrator wants to reduce the number of failed patch deployments in an organization. The administrator discovers that system owners modify systems or applications in an ad hoc manner. Which of the following is the best way to reduce the number of failed patch deployments?
A. Compliance tracking
B. Situational awareness
C. Change management
D. Quality assurance
Explanation:
Why C is Correct:
The core problem identified is that "system owners modify systems or applications in an ad hoc manner." This is a classic lack of a formal change management process.
A change management process provides a structured approach for requesting, approving, implementing, and reviewing changes to IT systems.
This process ensures that all changes are documented, tested, approved, and communicated before they are made.
By implementing change management, the systems administrator would have a complete record of all modifications. When a new patch is ready for deployment, the team can review the change history to understand the current state of the system and anticipate any potential conflicts. This prevents situations where an unknown, ad-hoc modification causes a patch to fail.
Change management is the foundational IT practice designed specifically to prevent the chaos caused by unauthorized or unreported changes.
Why A is Incorrect:
Compliance tracking is about measuring and reporting on whether systems adhere to internal policies or external regulations (e.g., PCI DSS, HIPAA). While it might eventually detect that systems are out of compliance due to failed patches, it is a reactive, auditing function. It does not actively prevent the ad-hoc changes that are causing the deployment failures in the first place.
Why B is Incorrect:
Situational awareness is the ability to identify, process, and comprehend the critical information about what is happening in an environment. It is an outcome of good security practices and monitoring, not a specific process or control that prevents unauthorized changes. You need tools and processes (like change management) to achieve situational awareness.
Why D is Incorrect:
Quality assurance (QA) is a process for verifying that a product or service meets specified requirements. In software development, QA involves testing. While testing patches in a staging environment is a critical part of a change management process, QA alone is insufficient. QA does not control the process of making changes to production systems; it only tests the changes once they are proposed. The problem described is about the uncontrolled introduction of changes, not the quality of the changes themselves.
Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests the understanding of fundamental IT governance processes. Change management is a core component of ITIL and other frameworks and is directly aimed at bringing stability and predictability to IT environments by preventing unauthorized and unreported modifications.
A software development team requires valid data for internal tests. Company regulations, however, do not allow the use of this data in cleartext. Which of the following solutions best meets these requirements?
A. Configuring data hashing
B. Deploying tokenization
C. Replacing data with null records
D. Implementing data obfuscation
Explanation:
Why D is Correct:
Data obfuscation (or data masking) is a technique specifically designed for this purpose. It creates a functional substitute for real data that is structurally similar but contains inauthentic information. This allows developers and testers to work with realistic-looking data sets without exposing any real sensitive information. The data remains "valid" for testing application logic, workflows, and database schemas because it preserves the format, type, and length of the original data, while the actual content is scrambled or replaced. This directly meets the requirement of not having real cleartext data in a non-production environment.
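As a rough illustration of the format-preserving idea, the following sketch (an assumption of this write-up: Python, character-level substitution only) scrambles content while keeping type, case, and length intact:

```python
import random
import string

def mask(value: str) -> str:
    """Replace each character with a random one of the same class,
    preserving length, case, and separators so the result still
    passes format checks in test environments."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(random.choice(string.digits))
        elif ch.isalpha():
            repl = random.choice(string.ascii_letters)
            out.append(repl.upper() if ch.isupper() else repl.lower())
        else:
            out.append(ch)  # keep dashes, dots, @, and spaces as-is
    return "".join(out)

print(mask("Jane Doe"))             # e.g. "Xqru Fml": same shape, fake content
print(mask("4111-1111-1111-1111"))  # digits stay digits, dashes stay put
```

Production masking tools go further, typically adding referential consistency (the same input always masks to the same output) so that joins across tables still behave correctly.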
Why A is Incorrect:
Data hashing is a one-way cryptographic function. It is excellent for verifying data integrity (e.g., checking passwords) but is useless for providing valid test data. Hashed data loses all its original format and meaning. A developer cannot run meaningful tests on a database where every field is a hash value, as the application logic would fail. For example, a hashed first name field no longer contains letters and cannot be used to test a "search by name" feature.
Why B is Incorrect:
Tokenization is the process of replacing sensitive data with a non-sensitive equivalent, called a token, which has no extrinsic or exploitable meaning. The token is a random value that can be mapped back to the original data in a secure vault. While excellent for protecting data in production (e.g., credit card numbers), tokens are not "valid data" for testing. Like hashed data, a token is just a random string and does not preserve the format or logic of the original data, making it unsuitable for application testing.
Why C is Incorrect:
Replacing data with null records destroys the utility of the data set. A database full of null values is not "valid data for internal tests." It would be impossible to test most application features, as there would be no data to display, sort, filter, or manipulate. This solution fails the primary requirement of providing usable test data.
Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography and Domain 4.0: Governance, Risk, and Compliance. It addresses data protection methods and their appropriate use cases, specifically focusing on securing non-production environments to comply with data protection policies while maintaining development agility. Data obfuscation is a standard practice for creating safe, useful test environments.
A security engineer is building a solution to disable weak CBC cipher configurations for remote access connections to Linux systems. Which of the following should the security engineer modify?
A. The /etc/openssl.conf file, updating the virtual site parameter
B. The /etc/nsswitch.conf file, updating the name server
C. The /etc/hosts file, updating the IP parameter
D. The /etc/ssh/sshd_config file, updating the ciphers
Explanation:
Why D is Correct:
The question specifies the goal is to "disable weak CBC configuration for remote access connections to Linux systems." The most common method for remote access to Linux systems is SSH (Secure Shell).
The configuration file for the SSH daemon (the service that accepts incoming SSH connections) is typically /etc/ssh/sshd_config.
Within this file, the Ciphers directive is used to specify which encryption algorithms (ciphers) the server will accept for a connection.
Cipher Block Chaining (CBC) mode ciphers (e.g., aes128-cbc, aes256-cbc) are considered weak and vulnerable to attacks like "SSH CBC information disclosure." To disable them, the security engineer would modify the sshd_config file to explicitly list only strong ciphers (e.g., Counter Mode ciphers like aes128-ctr, aes256-ctr, or modern algorithms like chacha20-poly1305@openssh.com), thereby removing any CBC-based options.
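For example, a hardened Ciphers directive in /etc/ssh/sshd_config might look like the following (algorithm availability depends on the OpenSSH version in use):

```
# /etc/ssh/sshd_config: allow only CTR and AEAD ciphers, removing all CBC modes
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr
```

The change takes effect once the daemon is reloaded (e.g., systemctl restart sshd), and the effective cipher list can be verified with sshd -T | grep -i ciphers.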
Why A is Incorrect:
The /etc/openssl.conf file (or similar OpenSSL configuration files) is used to configure the OpenSSL library itself, which provides cryptographic functions for many applications. However, it does not directly control the specific cipher suites offered by the SSH daemon. Modifying this would have a broad, system-wide impact and is not the precise tool for configuring SSH-specific access.
Why B is Incorrect:
The /etc/nsswitch.conf file (Name Service Switch configuration) controls how the system resolves various types of information like hostnames, users, and groups (e.g., using /etc/hosts, DNS, or LDAP). It has absolutely nothing to do with configuring encryption algorithms or remote access protocols.
Why C is Incorrect:
The /etc/hosts file is a simple static table for mapping hostnames to IP addresses. It is used for local name resolution and is unrelated to the encryption protocols or cipher suites used for network connections.
Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It tests the practical knowledge of hardening specific services (SSH) by modifying their configuration files to use only strong cryptographic settings, which is a core responsibility of a security engineer.
Which of the following AI concerns is most adequately addressed by input sanitation?
A. Model inversion
B. Prompt injection
C. Data poisoning
D. Non-explainable model
Explanation:
Why B is Correct:
Prompt injection is a vulnerability specific to AI systems that use text-based prompts, particularly Large Language Models (LLMs). It occurs when an attacker crafts a malicious input (a "prompt") that tricks the model into ignoring its original instructions, bypassing safety filters, or revealing sensitive information. Input sanitation is a primary defense against this attack. It involves rigorously validating, filtering, and escaping all user-provided input before it is passed to the AI model. This helps to neutralize or render ineffective any malicious instructions embedded within the user's input, thereby preventing the model from being hijacked.
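The sketch below shows the flavor of this control in Python; the deny-list patterns and the reject-then-escape strategy are illustrative assumptions, not a complete defense, and real deployments layer additional controls such as privilege separation and output filtering:

```python
import re

# Hypothetical deny-list patterns; attackers rephrase, so pattern matching
# is only one layer of a prompt-injection defense.
SUSPICIOUS = [
    re.compile(r"ignore (all |previous |the )*instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def sanitize(user_input: str) -> str:
    for pattern in SUSPICIOUS:
        if pattern.search(user_input):
            raise ValueError("input rejected: possible prompt injection")
    # Escape the delimiter this application uses to fence user input.
    return user_input.replace('"""', "'''")

prompt = "Summarize this review: " + sanitize("Great product, fast shipping.")
```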
Why A is Incorrect:
Model inversion is an attack where an adversary uses the model's outputs (e.g., API responses) to reverse-engineer and infer sensitive details about the training data. This is addressed by controls on the output side (e.g., differential privacy, output filtering, limiting API response details) and model design, not by sanitizing the input prompts.
Why C is Incorrect:
Data poisoning is an attack on the training phase of an AI model. An attacker injects malicious or corrupted data into the training set to compromise the model's performance, integrity, or behavior after deployment. Defending against this requires securing the data collection and curation pipeline, using robust training techniques, and validating training data—measures that are completely separate from sanitizing runtime user input.
Why D is Incorrect:
A non-explainable model (often called a "black box" model) is a characteristic of certain complex AI algorithms where it is difficult for humans to understand why a specific decision was made. This is an inherent challenge of the model's architecture (e.g., deep neural networks) and is addressed by the field of Explainable AI (XAI), which involves using different models, tools, and techniques to interpret them. Input sanitation has no bearing on making a model's decisions more explainable.
Reference:
This question falls under the intersection of Domain 1.0: Security Architecture and emerging technologies. It tests the understanding of specific threats to AI systems and the appropriate security controls to mitigate them. Input validation/sanitation is a classic application security control that finds a new critical application in protecting AI systems from prompt injection attacks.
Which of the following best explains the business requirement a healthcare provider fulfills by encrypting patient data at rest?
A. Securing data transfer between hospitals
B. Providing for non-repudiation of data
C. Reducing liability from identity theft
D. Protecting privacy while supporting portability
Explanation:
Why D is Correct:
This option most accurately and completely captures the core business and regulatory requirements for a healthcare provider.
Protecting Privacy:
This is the primary driver. Regulations like HIPAA (Health Insurance Portability and Accountability Act) in the United States mandate the protection of patient Protected Health Information (PHI). Encryption of data at rest is a key safeguard to ensure confidentiality and privacy, preventing unauthorized access if devices are lost, stolen, or improperly accessed. It directly addresses the fundamental ethical and legal duty to keep patient information private.
Supporting Portability:
This is a critical business enabler. Healthcare data needs to be portable—it must be stored on laptops, mobile devices, USB drives, and in cloud data centers to facilitate modern healthcare delivery, backups, and research. Encryption is the technology that makes this portability secure. It allows data to be moved and stored flexibly without incurring the high risk of a data breach. The "portability" in HIPAA's name hints at this need for data movement in a secure manner.
Why A is Incorrect:
Encrypting data at rest protects data while it is stored on a device (e.g., a database, hard drive). Securing data transfer between hospitals is the role of encrypting data in transit (e.g., using TLS for network transmission). This is an important requirement, but it is not the one fulfilled by encryption at rest.
Why B is Incorrect:
Non-repudiation provides proof of the origin of data and prevents a sender from denying having sent it. This is a security service achieved through digital signatures and cryptographic hashing, not through encryption at rest. Encryption ensures confidentiality, not non-repudiation.
Why C is Incorrect:
While reducing liability from identity theft is a positive outcome of encrypting data, it is not the best explanation of the direct business requirement. The requirement is driven by proactive compliance with privacy laws (like HIPAA) and the duty of care to protect patients. Reducing liability is a beneficial consequence of meeting that primary requirement, not the requirement itself. Option D is a more precise and comprehensive description of the core business and regulatory need.
Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests the ability to map a technical control (encryption at rest) back to the fundamental business and legal requirements that mandate its use, particularly in a heavily regulated industry like healthcare. Understanding the "why" behind a control is crucial for a CASP+.
Material findings from a recent compliance audit indicate that a company has an issue with excessive permissions. The findings show that employees changing roles or departments results in privilege creep. Which of the following solutions are the best ways to mitigate this issue? (Select two.)
A. Implementing a role-based access policy
B. Designing a least-needed privilege policy
C. Establishing a mandatory vacation policy
D. Performing periodic access reviews
E. Requiring periodic job rotation
Explanation:
The core problem identified is privilege creep due to employees changing roles. This means users accumulate permissions over time because old access rights are not removed when they are no longer needed for their new position. The solutions must directly address this accumulation and ensure permissions align with current job functions.
Why A is Correct (Implementing a role-based access policy):
Role-Based Access Control (RBAC) is a fundamental solution to this exact problem. Instead of assigning permissions directly to users, permissions are assigned to roles (e.g., "Accountant," "Marketing Manager"). Users are then assigned to these roles. When an employee changes departments, their old role is simply removed, and their new role is assigned. This automatically revokes the old permissions and grants the new, appropriate ones, effectively preventing privilege creep by design.
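A minimal sketch of the idea, with hypothetical roles and permissions, shows why the role swap prevents accumulation:

```python
# Permissions attach to roles, never directly to users, so changing a
# user's role replaces the whole permission set in one step.
ROLE_PERMISSIONS = {
    "accountant": {"read_ledger", "post_journal_entries"},
    "marketing_manager": {"read_campaigns", "approve_ad_spend"},
}

user_roles = {"jdoe": "accountant"}

def permissions_for(user: str) -> set[str]:
    return ROLE_PERMISSIONS[user_roles[user]]

print(permissions_for("jdoe"))            # accountant permissions only
user_roles["jdoe"] = "marketing_manager"  # department transfer
print(permissions_for("jdoe"))            # old permissions gone automatically
```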
Why D is Correct (Performing periodic access reviews):
Even with RBAC in place, processes can break down. Periodic user access reviews (also known as recertification) are a critical administrative control to catch and correct privilege creep. In these reviews, managers or system owners periodically attest to whether their employees' current access levels are still appropriate for their job functions. This process proactively identifies and removes excessive permissions that may have been missed during a role transition.
Why the Other Options Are Incorrect:
B. Designing a least-needed privilege policy:
While the principle of least privilege is the ultimate goal, this option describes a concept or principle, not an actionable solution to the problem of privilege creep. Implementing RBAC (Option A) is how you operationalize and enforce a least privilege policy. Therefore, A is a more direct and specific solution.
C. Establishing a mandatory vacation policy:
This is a detective control primarily used to uncover fraud (e.g., requiring an employee to take vacation forces someone else to perform their duties, potentially revealing fraudulent activity). It does not directly address the procedural issue of permissions not being removed during role changes.
E. Requiring periodic job rotation:
Job rotation is a security practice used to reduce the risk of fraud and collusion and to cross-train employees. It would actually exacerbate the problem of privilege creep, as more employees changing roles would lead to even more accumulated permissions if a proper process (like RBAC and access reviews) is not in place to manage the transitions.
Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests knowledge of identity and access management (IAM) best practices, specifically the controls used to implement and maintain the principle of least privilege and prevent authorization vulnerabilities like privilege creep. RBAC and access recertification are cornerstone practices for any mature IAM program.
Third parties notified a company's security team about vulnerabilities in the company's application. The security team determined these vulnerabilities were previously disclosed in third-party libraries. Which of the following solutions best addresses the reported vulnerabilities?
A. Using IaC to include the newest dependencies
B. Creating a bug bounty program
C. Implementing a continuous security assessment program
D. Integrating a SAST tool as part of the pipeline
Explanation:
Why A is Correct:
The root cause of the vulnerabilities is that the application uses third-party libraries with known, publicly disclosed vulnerabilities. The most direct and effective solution is to update these dependencies to their latest, patched versions. Infrastructure as Code (IaC) is the best practice for automating and managing this process.
IaC tools (like Terraform, Ansible, or cloud-specific templates) allow developers to define the application's infrastructure and dependencies in code files.
These definitions can specify the exact versions of libraries to be used. To remediate, a team can update the version number in the IaC script and redeploy. This ensures consistency, repeatability, and speed in pushing the patched libraries across all environments (dev, test, prod).
This approach directly fixes the reported problem by replacing the vulnerable component with a secure one.
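As a hypothetical illustration (the base image, package, and version below are examples, not recommendations), a dependency pinned in an IaC-managed build definition is patched by bumping one line and redeploying:

```
# Hypothetical build definition managed as code: every environment rebuilt
# from this file gets the same, patched dependency version.
FROM python:3.12-slim
# Bump this pin to the fixed release and redeploy to remediate.
RUN pip install --no-cache-dir "requests==2.32.3"
```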
Why B is Incorrect:
A bug bounty program is a crowdsourced initiative to incentivize external security researchers to find and report unknown vulnerabilities. The vulnerabilities in this scenario are already known and were reported by third parties. A bug bounty might help find future unknown issues, but it does nothing to fix the current, known problem with the libraries.
Why C is Incorrect:
Implementing a continuous security assessment program (which might include SAST, DAST, etc.) is a broad and valuable practice for finding vulnerabilities. However, like a bug bounty, it is a detective control. It would help identify that the vulnerable libraries are present, but the team already knows this because they've been notified. The requirement is to address or fix the vulnerability, not just to find it again. The fix is to update the library.
Why D is Incorrect:
Integrating a SAST (Static Application Security Testing) tool into the pipeline is also a detective control. It scans source code for patterns that indicate vulnerabilities. While it could potentially detect the use of a vulnerable library if its rules are tuned for that, its primary function is to find flaws in custom code. More importantly, it identifies problems but does not remediate them. The remediation is still the action of updating the dependency, which is best managed through IaC.
In summary:
While options B, C, and D are all valuable parts of a mature application security program, they are focused on finding vulnerabilities. The problem stated is that vulnerabilities have already been found. The necessary action is to patch them. Using IaC to automate dependency management and deployment is the most effective way to execute that patch quickly and consistently.
Reference:
This question falls under Domain 2.0: Security Operations and Domain 1.0: Security Architecture. It addresses vulnerability management and the practical application of DevOps practices (like IaC) to ensure secure and consistent configurations across environments.
A security administrator is performing a gap assessment against a specific OS benchmark. The benchmark requires the following configurations be applied to endpoints:
• Full disk encryption
• Host-based firewall
• Time synchronization
• Password policies
• Application allow listing
• Zero Trust application access
Which of the following solutions best addresses the requirements? (Select two).
A. CASB
B. SBoM
C. SCAP
D. SASE
E. HIDS
Explanation:
The question requires selecting solutions that best help an administrator apply and enforce a specific set of OS security configurations (like disk encryption, firewall settings, etc.) across endpoints. The goal is to close the gap between the current state and the desired benchmark.
Why C is Correct (SCAP):
The Security Content Automation Protocol (SCAP) is a suite of standards specifically designed for this exact task. It allows for:
Automated Compliance Checking:
SCAP-compliant tools can automatically scan an endpoint (using benchmarks like CIS or DISA STIGs) and check its configuration against hundreds of required settings (firewall rules, password policies, time sync, etc.).
Remediation:
Many SCAP tools can not only identify misconfigurations but also automatically remediate them to bring the system into compliance.
Standardized Benchmarks:
The requirements listed (firewall, time sync, password policies) are classic configuration items that are defined in SCAP benchmarks. SCAP is the industry standard for automating technical compliance and hardening.
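With the OpenSCAP tooling, for instance, a scan-and-remediate run might look like the following (the profile ID and datastream path vary by distribution and SCAP Security Guide version, so treat these values as examples):

```
# Evaluate the host against a CIS profile and write an HTML report
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --report /tmp/report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml

# Re-run with automatic remediation of failed rules
oscap xccdf eval --remediate \
  --profile xccdf_org.ssgproject.content_profile_cis \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
```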
Why D is Correct (SASE):
Secure Access Service Edge (SASE) is a cloud architecture that converges networking and security functions. It directly addresses two requirements from the list:
Zero Trust application access:
This is a core principle of SASE. It ensures users and devices are authenticated and authorized before granting access to applications, regardless of their location, which fulfills the "Zero Trust application access" requirement.
Host-based firewall (extension):
While SASE provides a cloud-delivered firewall, it can also help enforce security policies that complement, or reduce reliance on, a host-based firewall by applying consistent security at the network edge.
SASE provides a framework to enforce these policies consistently across all endpoints.
Why the Other Options Are Incorrect:
A. CASB (Cloud Access Security Broker):
A CASB is primarily focused on securing access to cloud applications (SaaS) and enforcing security policies between users and the cloud. It does not manage OS-level configurations on endpoints like disk encryption, host firewalls, or time synchronization.
B. SBoM (Software Bill of Materials):
An SBoM is an inventory of components in a software product. It is used for vulnerability management in the software supply chain (e.g., finding vulnerable libraries). It is completely unrelated to configuring operating system settings on an endpoint.
E. HIDS (Host-Based Intrusion Detection System):
A HIDS monitors a host for signs of malicious activity and policy violations. It is a detective control. While it might alert on a misconfiguration, it is not the tool used to apply the required configurations from a benchmark. SCAP is the tool for applying the configuration; a HIDS might monitor for changes to that configuration afterward.
Reference:
This question falls under Domain 2.0: Security Operations and Domain 1.0: Security Architecture. It tests the knowledge of specific security technologies and their appropriate application for system hardening, compliance automation (SCAP), and modern secure access principles (SASE).
A company wants to install a three-tier approach to separate the web, database, and application servers. A security administrator must harden the environment. Which of the following is the best solution?
A. Deploying a VPN to prevent remote locations from accessing server VLANs
B. Configuring a SASE solution to restrict user-to-server communication
C. Implementing microsegmentation on the server VLANs
D. installing a firewall and making it the network core
Explanation:
Why C is Correct:
The core requirement is to harden a three-tier architecture (web, app, database servers). The fundamental security principle for this architecture is to enforce strict communication paths:
Web servers should only talk to application servers.
Application servers should only talk to database servers.
Direct communication from web servers to database servers, or from external sources to app/database servers, should be blocked.
Microsegmentation is the ideal solution for this. It involves creating fine-grained, granular security policies (often at the workload or individual server level) to control east-west traffic (traffic between servers within the data center). This allows the administrator to create exact rules that only permit the necessary communication between the specific tiers and block everything else, drastically reducing the attack surface.
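Conceptually, the resulting policy reduces to a default-deny rule set like the sketch below (the subnets and ports are hypothetical, and real microsegmentation is usually enforced per workload by host firewalls or SDN policy rather than at a single gateway):

```
# Default-deny between tiers, then allow only the required flows
iptables -P FORWARD DROP
# Web tier (10.0.1.0/24) may reach the app tier (10.0.2.0/24) on the app port
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -p tcp --dport 8443 -j ACCEPT
# App tier may reach the database tier (10.0.3.0/24) on the DB port
iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.3.0/24 -p tcp --dport 5432 -j ACCEPT
# Anything else, including web-to-database, is dropped by the default policy
```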
Why A is Incorrect:
A VPN secures communication to the network from remote users or sites. It is designed for securing north-south traffic (traffic entering or leaving the data center). It does nothing to control the east-west traffic between the server tiers, which is the primary concern in hardening this architecture.
Why B is Incorrect:
A SASE (Secure Access Service Edge) solution is also primarily focused on north-south traffic. It provides secure, identity-driven access for users to applications and services, regardless of their location. It is not the right tool for controlling traffic between servers inside the data center.
Why D is Incorrect:
While installing a firewall is a good general practice, simply making it the "network core" is a vague and outdated concept. A traditional core firewall is often not granular enough to effectively segment traffic between tiers at a micro level. Modern data centers require more agile and granular controls that can be applied directly to the workloads, which is what microsegmentation provides (often using host-based firewalls or software-defined networking security policies).
Reference:
This question falls under Domain 1.0: Security Architecture. It tests the understanding of data center security design, specifically the best practices for securing a multi-tier application architecture by controlling east-west traffic through advanced segmentation techniques like microsegmentation.