CompTIA CV0-004 Practice Test

Prepare smarter and boost your chances of success with our CompTIA CV0-004 practice test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms suggest that candidates who use a CV0-004 practice exam are 40–50% more likely to pass on their first attempt.

Start practicing today and take the fast track to becoming CompTIA CV0-004 certified.

12560 already prepared
Updated on: 3-Nov-2025
256 Questions
4.8/5.0

Which of the following is a difference between a SAN and a NAS?

A. A SAN works only with fiber-based networks.

B. A SAN works with any Ethernet-based network.

C. A NAS uses a faster protocol than a SAN.

D. A NAS uses a slower protocol than a SAN.

B.   A SAN works with any Ethernet-based network.

Summary
The core difference between a Storage Area Network (SAN) and Network Attached Storage (NAS) lies in how they present storage to a system. A SAN provides block-level storage, appearing to the server as a raw, unformatted hard drive. In contrast, a NAS provides file-level storage, presenting itself as a network file share. This fundamental difference dictates the protocols and networks they use, not an inherent speed advantage of one over the other.

Correct Option

B. A SAN works with any Ethernet-based network.
This is the correct differentiating statement. Modern SANs are not limited to Fibre Channel networks. Technologies like iSCSI and FCoE (Fibre Channel over Ethernet) allow SANs to operate over standard Ethernet networks, making them more accessible and cost-effective.

This contrasts with the outdated notion that SANs are exclusively fiber-based, highlighting a key evolution and a valid difference in their network compatibility.
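
To make the point concrete, the following is a minimal sketch (not part of the exam content) showing how a SAN LUN can be attached over an ordinary Ethernet/IP network with the open-iscsi tools driven from Python; the portal address and target IQN are hypothetical placeholders.

    # Minimal sketch: attach SAN block storage over plain Ethernet/IP using open-iscsi.
    # The portal address and target IQN below are hypothetical placeholders.
    import subprocess

    PORTAL = "192.0.2.10"                        # hypothetical iSCSI portal reachable over the LAN
    TARGET = "iqn.2024-01.com.example:storage1"  # hypothetical target IQN

    # Discover the targets the portal exposes (block storage carried over TCP/IP).
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

    # Log in to the target; the LUN then appears to the OS as a raw block device (e.g., /dev/sdX).
    subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"], check=True)

Once the login succeeds, the operating system sees an unformatted block device, which is the defining SAN behavior, even though the transport is ordinary Ethernet rather than Fibre Channel.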

Incorrect Options

A. A SAN works only with fiber-based networks.
This is incorrect and outdated. While Fibre Channel is a high-performance, dedicated network for SANs, it is not the only option. iSCSI is a very common protocol that allows SAN block storage to be delivered over standard IP/Ethernet networks.

C. A NAS uses a faster protocol than a SAN and D. A NAS uses a slower protocol than a SAN.
Both of these are incorrect generalizations. Speed is not determined by whether a device is a SAN or NAS, but by the underlying hardware, network technology (e.g., 10Gb Ethernet vs. 16Gb Fibre Channel), and protocol efficiency. A high-end NAS on a fast network can outperform a low-end SAN, and vice versa. The primary difference is the type of storage (file vs. block), not its inherent speed.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 2.1 - Given a scenario, configure and deploy cloud storage solutions. This objective requires understanding different storage types, including block (SAN) and file (NAS) storage, and their respective network protocols and use cases. Knowing that SANs can operate over Ethernet via iSCSI is a key part of this knowledge.

A critical security patch is required on a network load balancer in a public cloud. The organization has a major sales conference next week, and the Chief Executive Officer does not want any interruptions during the demonstration of an application behind the load balancer. Which of the following approaches should the cloud security engineer take?

A. Ask the management team to delay the conference.

B. Apply the security patch after the event.

C. Ask the upper management team to approve an emergency patch window.

D. Apply the security patch immediately before the conference.

B.   Apply the security patch after the event.

Summary
The scenario presents a conflict between an urgent security requirement (applying a critical patch) and a critical business requirement (zero application interruption during a major sales conference). The security engineer must balance these competing needs by following a formal process that documents the risk, proposes a solution, and seeks approval from the appropriate business leadership, who are ultimately responsible for accepting any potential risk.

Correct Option

B. Apply the security patch after the event.
This is the most prudent course of action. A critical sales demonstration is a time-sensitive business event where stability is paramount.

Applying any patch, even a security one, carries an inherent risk of introducing instability or causing a service interruption. The consequence of a failed patch during the demo far outweighs the risk of a short, known delay in its application.

This approach follows the principle of risk management and change control by scheduling the maintenance for an approved time (after the event) to avoid impacting a critical business function.

Incorrect Options

A. Ask the management team to delay the conference.
This is not a realistic or reasonable request. A major sales conference involves significant logistics, client commitments, and revenue potential. Delaying it for a single patch is a disproportionate response that disregards business priorities.

C. Ask the upper management team to approve an emergency patch window.
While following a formal approval process is good, an "emergency" patch window implies immediate action is required. Given the CEO's explicit directive for no interruptions, requesting an emergency change before the event would likely be denied and shows poor judgment in prioritizing risks.

D. Apply the security patch immediately before the conference.
This is the highest-risk option. Applying a patch right before a critical event provides no time for validation or rollback if it causes issues. It directly violates the CEO's clear instruction and could lead to a demonstration failure, with severe business consequences.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 4.3 - Given a scenario, use the appropriate tools and processes to update systems in the cloud. This objective includes understanding change management processes and risk assessment. A core principle is evaluating the business impact of a change and scheduling it to avoid disrupting critical business operations, which is the definitive factor in this scenario.

A company's engineering department is conducting a month-long test on the scalability of an in-house-developed software that requires a cluster of 100 or more servers. Which of the following models is the best to use?

A. PaaS

B. SaaS

C. DBaaS

D. IaaS

D.   IaaS

Summary
The engineering department needs to test scalability on a large, temporary cluster of 100+ servers for in-house software. This requires maximum control over the operating system and software environment to install, configure, and run their custom application. The cloud model must provide raw, on-demand compute infrastructure without managing the underlying hardware, allowing for rapid provisioning and decommissioning of a large number of virtual machines.

Correct Option

D. IaaS (Infrastructure as a Service)
IaaS provides the fundamental building blocks of computing: virtual machines, storage, and networking. This gives the engineering team full control over the OS and the entire software stack.

It is ideal for temporary, large-scale projects like this scalability test because it allows the company to rapidly spin up 100+ servers on demand, run the test for a month, and then terminate the instances, paying only for the resources used.

This model offers the flexibility and environmental control needed to test in-house-developed software without the constraints of a pre-configured platform.
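
As a hedged illustration of the IaaS model (using AWS EC2 via boto3 purely as an example, since the question does not name a provider), the sketch below launches a batch of instances on demand and terminates them when the test ends; the AMI ID, instance type, region, and tag values are hypothetical.

    # Sketch: provision a temporary IaaS test cluster with boto3, then tear it down afterwards.
    # The AMI ID, instance type, region, and tag values are hypothetical.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launch 100 identical instances for the month-long scalability test.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical image with the in-house software installed
        InstanceType="c5.large",           # hypothetical size
        MinCount=100,
        MaxCount=100,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "scalability-test"}],
        }],
    )

    # When the test window closes, terminate every instance so billing stops.
    for instance in instances:
        instance.terminate()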

Incorrect Options

A. PaaS (Platform as a Service)
PaaS provides a platform for developing, running, and managing applications without the complexity of building and maintaining the underlying infrastructure. It abstracts away the OS and middleware. This limits the team's control and is designed for application deployment, not for infrastructure-level scalability testing of custom software.

B. SaaS (Software as a Service)
SaaS delivers a fully functional, end-user application over the internet (e.g., Gmail, Salesforce). The company has no control over the infrastructure, platform, or application code. It is completely unsuitable for running in-house developed software.

C. DBaaS (Database as a Service)
DBaaS is a specialized subset of PaaS that provides a managed database engine (e.g., Amazon RDS, Azure SQL Database). It only manages the database layer and cannot be used to host a custom application's general compute logic across 100 servers.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 1.1 - Given a scenario, analyze the attributes of a cloud model and solution to meet business requirements. This objective requires understanding the different cloud service models (IaaS, PaaS, SaaS). IaaS is specifically chosen for scenarios requiring granular control over the OS and network for temporary, custom, or scalable workloads.

A cloud service provider requires users to migrate to a new type of VM within three months. Which of the following is the best justification for this requirement?

A. Security flaws need to be patched.

B. Updates could affect the current state of the VMs.

C. The cloud provider will be performing maintenance of the infrastructure.

D. The equipment is reaching end of life and end of support.

D.   The equipment is reaching end of life and end of support.

The best justification for a cloud service provider requiring users to migrate to a new VM type within a set time frame is that the underlying equipment is reaching end of life and end of support (EOL/EOS). Once that point is reached, the older VM type no longer receives updates or support, including important security patches, so customers must move to newer VM types to maintain security and performance.

Reference
CompTIA Cloud+ Study Guide (Exam CV0-004) by Todd Montgomery and Stephen Olson.

Following a ransomware attack, the legal department at a company instructs the IT administrator to store the data from the affected virtual machines for a minimum of one year. Which of the following is this an example of?

A. Recoverability

B. Retention

C. Encryption

D. Integrity

B.   Retention

Summary
The legal department has issued a specific instruction to preserve data for a defined period (one year). This is not about protecting the data from alteration, making it recoverable, or encrypting it. This is a directive to keep the data in a secure and unaltered state for a specific duration, typically for legal, regulatory, or forensic investigation purposes following a security incident.

Correct Option

B. Retention
Data retention refers to the policies and practices for how long data must be kept and preserved. The one-year minimum timeframe specified by the legal department is a classic data retention requirement.

In the context of a ransomware attack, this is often done to preserve forensic evidence for legal proceedings, internal investigations, or to meet regulatory compliance obligations.

The instruction is explicitly about the duration of storage, which is the core of a retention policy.
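
As an illustrative sketch only (the question does not specify a storage service), if the preserved VM data were exported to an S3 bucket with Object Lock enabled, the one-year minimum could be enforced with a default retention rule like the one below; the bucket name is hypothetical.

    # Sketch: enforce a one-year minimum retention on a bucket holding the preserved VM data.
    # The bucket name is hypothetical; Object Lock must have been enabled when the bucket was created.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object_lock_configuration(
        Bucket="ransomware-evidence-archive",   # hypothetical bucket holding the VM exports
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {
                    "Mode": "COMPLIANCE",       # objects cannot be deleted or overwritten until retention expires
                    "Days": 365,                # the legal department's one-year minimum
                },
            },
        },
    )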

Incorrect Options

A. Recoverability
Recoverability involves the processes and capabilities to restore data and systems after an outage or attack (e.g., from backups). The legal instruction is about keeping the compromised data, not about restoring clean data to resume operations.

C. Encryption
Encryption is a security control used to protect the confidentiality of data by converting it into an unreadable format. The legal mandate does not concern how the data is protected during storage, only that it is stored for the required period.

D. Integrity
Integrity ensures that data is accurate, trustworthy, and has not been altered from its original state. The ransomware attack has already compromised the integrity of the data. The legal hold is about preserving the data in its current (altered) state as evidence, not guaranteeing its accuracy.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 5.2 - Given a scenario, apply security controls and compliance requirements to cloud resources. This objective includes implementing data retention policies and legal holds as part of compliance and incident response procedures. A legal instruction to preserve data after an attack is a direct example of enforcing a retention policy.

A company serves customers globally from its website hosted in North America. A cloud engineer recently deployed new instances of the website in the Europe region. Which of the following is the most likely reason?

A. To simplify workflow

B. To enhance security

C. To reduce latency

D. To decrease cost

C.   To reduce latency


Summary
The scenario involves a global customer base accessing a website from a single origin region (North America). The decision to deploy new instances in Europe is a strategic infrastructure change aimed at improving the experience for a specific geographical segment. The primary technical benefit of deploying resources closer to end-users is a reduction in the time it takes for data to make a round trip, which is a key performance metric for web services.

Correct Option

C. To reduce latency
Latency is the delay experienced when data travels over a network. Physical distance is a major contributor to latency.

By hosting instances in Europe, the company is placing its website infrastructure much closer to its European users. This significantly shortens the network path that data must travel, resulting in faster page load times and a more responsive experience for those users.

This is a standard practice known as geographic distribution or deploying to edge locations, and its primary goal is to reduce latency for a global audience.
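
A quick, provider-agnostic way to see the effect is to time a TCP connection to endpoints in different regions, as in the sketch below; the hostnames are hypothetical placeholders for the North America and Europe deployments.

    # Sketch: compare round-trip connection time to a near and a far regional endpoint.
    # The hostnames are hypothetical placeholders for the two regional deployments.
    import socket
    import time

    def connect_time_ms(host: str, port: int = 443) -> float:
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.perf_counter() - start) * 1000

    for endpoint in ("www-na.example.com", "www-eu.example.com"):
        print(f"{endpoint}: {connect_time_ms(endpoint):.1f} ms")

A user in Europe would normally see a noticeably lower figure for the European endpoint, which is exactly the improvement the new deployment targets.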

Incorrect Options

A. To simplify workflow
Deploying and managing infrastructure across multiple regions typically increases operational complexity. It introduces challenges like data synchronization, consistent configuration management, and cross-region networking, which complicate the workflow rather than simplify it.

B. To enhance security
While certain regional deployments can be motivated by data sovereignty laws, the question does not mention security or legal compliance as a driver. The core problem being solved is performance for a global audience, not a security flaw or requirement.

D. To decrease cost
Running duplicate infrastructure in a second region inherently increases costs due to additional data transfer fees and the recurring expense of the instances themselves. Even if serving European users locally trims some bandwidth-related charges, the company's overall cloud bill will increase, not decrease.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 1.5 - Given a scenario, analyze the solution design to meet business requirements. This objective includes analyzing requirements for performance and availability. Deploying resources in multiple regions to reduce latency for end-users is a fundamental design principle covered under performance optimization.

A developer at a small startup company deployed some code for a new feature to its public repository. A few days later, a data breach occurred. A security team investigated the incident and found that the database was hacked. Which of the following is the most likely cause of this breach?

A. Database core dump

B. Hard-coded credentials

C. Compromised deployment agent

D. Unpatched web servers

B.   Hard-coded credentials

Summary
The key detail is that code for a new feature was deployed to a public repository. Public repositories are accessible to anyone on the internet. If the developer accidentally included sensitive information like database passwords, connection strings, or API keys directly within the code (a common practice known as hard-coding), this information would be exposed to the entire world, providing a direct pathway for an attacker to access the database.

Correct Option

B. Hard-coded credentials
Hard-coding credentials involves embedding usernames, passwords, or API keys directly into the application's source code. This is a severe security anti-pattern. When this code is committed to a public repository, these secrets become permanently exposed and easily discoverable by automated bots that constantly scan public repos for such information.

An attacker can use these exposed credentials to directly connect to and compromise the associated database, which is the most likely cause given the sequence of events described.
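
The generic Python contrast below illustrates the problem: the first pattern embeds a secret that is exposed the moment the file is pushed to a public repository, while the second reads it from the environment at runtime; the variable names and connection string are hypothetical.

    # Illustration only: hard-coded secret versus a secret supplied at runtime.
    import os

    # Anti-pattern: a credential committed with the code is exposed in any public repository.
    DB_PASSWORD = "SuperSecret123!"          # hypothetical value; never commit real secrets

    # Safer pattern: read the secret from the environment (or a secrets manager) at runtime.
    db_password = os.environ["DB_PASSWORD"]  # injected by the deployment platform, not stored in code
    db_url = f"postgresql://app:{db_password}@db.internal.example:5432/cms"  # hypothetical connection string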

Incorrect Options

A. Database core dump
A core dump is a file containing a program's memory at a specific time, often generated after a crash. While it could contain sensitive data, it is not typically caused by deploying code to a repository. It is an operational artifact, not a direct result of a public code push.

C. Compromised deployment agent
A deployment agent is a tool or service that automates the process of deploying code. While it could be an attack vector, the scenario specifically points to the action of pushing code to a public repo as the preceding event, making exposed credentials in that code a more direct and likely cause.

D. Unpatched web servers
Unpatched web servers are a common cause of breaches, but the timeline and specific action in the question (deploying code to a public repo) do not directly point to a missing OS or web server patch. The link between the code deployment and the breach is much more strongly explained by exposed credentials.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 5.2 - Given a scenario, apply security controls and compliance requirements to cloud resources. This objective includes implementing secure development practices. A fundamental control is to never store secrets or credentials in plaintext within source code, especially code that may be exposed in a public repository, to prevent exactly this type of data breach.

A security team recently hired multiple interns who all need the same level of access. Which of the following controls should the security team implement to provide access to the cloud environment with the least amount of overhead?

A. MFA

B. Discretionary access

C. Local user access

D. Group-based access control

D.   Group-based access control

Summary
The requirement is to efficiently manage access for multiple users (interns) who all require the same permissions. The goal is to minimize administrative overhead, which refers to the time and effort required to assign, update, and revoke access. The optimal solution is to use a centralized access control method where permissions are assigned once to a collective entity, and users are then added to or removed from that entity.

Correct Option

D. Group-based access control
Group-based access control is the most efficient method for this scenario. The security team can create a single group (e.g., "Interns") and assign all the necessary permissions to this group.

Each intern's user account is then simply added as a member of this group. This provides a "one-to-many" relationship, drastically reducing the overhead of managing individual user accounts.

If permissions need to change, the security team updates the group policy once, and the change automatically applies to all members.
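
As a hedged sketch in AWS IAM terms (the question does not name a provider), the group is created and granted its policy once, and each intern is then added with a single call; the group name, policy ARN, and user names are hypothetical.

    # Sketch: group-based access control with AWS IAM via boto3.
    # The group name, policy ARN, and user names are hypothetical.
    import boto3

    iam = boto3.client("iam")

    # Create the group once and attach the permissions every intern needs.
    iam.create_group(GroupName="Interns")
    iam.attach_group_policy(
        GroupName="Interns",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # hypothetical policy choice
    )

    # Onboarding or offboarding an intern is a single membership change, not a permissions rework.
    for user in ("intern1", "intern2", "intern3"):
        iam.add_user_to_group(GroupName="Interns", UserName=user)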

Incorrect Options

A. MFA (Multi-Factor Authentication)
MFA is a critical security control that verifies a user's identity by requiring multiple forms of evidence. However, it is an authentication mechanism, not an authorization system. It confirms who the user is but does not determine what they are allowed to access. Implementing MFA does not, by itself, assign permissions to the interns.

B. Discretionary Access Control (DAC)
In a DAC model, access to resources is granted at the discretion of the data or resource owner. This is a decentralized and granular model that would require each resource owner to manually grant permissions to each intern. This would create significant administrative overhead, which is the opposite of what is required.

C. Local User Access
Creating local user accounts on individual systems or cloud resources is the least scalable and most overhead-intensive method. The team would have to create and manage separate credentials and permissions for each intern on every resource they need to access, which is highly inefficient and error-prone.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 2.4 - Given a scenario, apply the appropriate security configurations and compliance controls. This objective includes implementing Identity and Access Management (IAM) policies. Using groups (role-based or group-based access control) to assign permissions to users is a fundamental and recommended practice for efficient and scalable access management in a cloud environment.

A company's content management system (CMS) service runs on an IaaS cluster on a public cloud. The CMS service is frequently targeted by a malicious threat actor using DDoS. Which of the following should a cloud engineer monitor to identify attacks?

A. Network flow logs

B. Endpoint detection and response logs

C. Cloud provider event logs

D. Instance syslog

A.   Network flow logs

Summary
A Distributed Denial-of-Service (DDoS) attack aims to overwhelm a service's network capacity or resources with a flood of traffic from multiple sources. To identify this type of attack, the cloud engineer needs visibility into the volume, sources, and patterns of the network traffic entering and leaving the service. This requires monitoring data that shows all the network connections and data packets at the infrastructure level.

Correct Option

A. Network flow logs
Network flow logs (such as VPC Flow Logs in AWS or NSG Flow Logs in Azure) capture metadata about the IP traffic going to and from network interfaces in a VPC. They are the primary tool for identifying a DDoS attack because they show traffic patterns, including source/destination IPs, ports, packet counts, and—most importantly—traffic volume over time.

A sudden, massive spike in incoming traffic from thousands of disparate IPs would be clearly visible in these logs, allowing for rapid identification of the attack.
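
As a rough sketch (assuming flow logs exported to a text file in the default space-separated VPC Flow Logs format), the records can be aggregated to spot a flood, that is, a sudden jump in total traffic coming from a very large number of distinct source addresses; the file name, field positions, and threshold are assumptions.

    # Sketch: scan exported flow log lines for signs of a DDoS-style flood.
    # Assumes the default space-separated format (srcaddr = 4th field, bytes = 10th field);
    # the file name and threshold are hypothetical.
    from collections import Counter

    sources = Counter()
    total_bytes = 0

    with open("flowlogs.txt") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 10 or not fields[9].isdigit():   # skip headers and NODATA records
                continue
            sources[fields[3]] += 1            # srcaddr
            total_bytes += int(fields[9])      # bytes

    print(f"distinct source IPs: {len(sources)}, total bytes: {total_bytes}")
    if len(sources) > 10_000:                  # hypothetical threshold for 'many disparate sources'
        print("possible DDoS: traffic arriving from an unusually large number of sources")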

Incorrect Options

B. Endpoint detection and response logs
EDR logs focus on activity within an operating system or on a specific endpoint, such as process creation, file changes, and registry modifications. While crucial for detecting malware or intrusion, they are not designed to capture the broad network-level traffic patterns that characterize a DDoS attack.

C. Cloud provider event logs
These logs (like AWS CloudTrail or Azure Activity Log) record API calls and management actions made on your cloud account. They are essential for auditing and security, showing who did what to your resources, but they do not contain the network packet data needed to detect a traffic flood.

D. Instance syslog
The syslog is the general logging system within a specific virtual machine's operating system. It records OS and application events. A DDoS attack might eventually cause high CPU or errors visible here, but syslog provides a limited, internal view and is not the best source for initially detecting the external flood of network traffic itself.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 4.2 - Given a scenario, analyze monitoring metrics and alerts to ensure performance and availability. This objective includes using network monitoring tools to identify performance issues and security threats. Network flow logs are the standard cloud-native tool for gaining visibility into network traffic patterns and detecting anomalies like DDoS attacks.

A cloud engineer is running a latency-sensitive workload that must be resilient and highly available across multiple regions. Which of the following concepts best addresses these requirements?

A. Cloning

B. Clustering

C. Hardware passthrough

D. Stand-alone container

B.   Clustering

Summary
The workload has two critical requirements: high availability (resilience against failures) and low latency across multiple regions. The solution must create a single, unified system out of resources in different locations, allowing them to work together and automatically fail over if one component fails, all while maintaining fast response times for users in various geographic locations.

Correct Option

B. Clustering
Clustering involves grouping multiple servers (or nodes) across different availability zones or regions to work together as a single system.


This configuration provides high availability through automatic failover; if one node in the cluster fails, another node can immediately take over its workload, ensuring continuous service.

To address the latency-sensitive requirement, a global cluster can be deployed with nodes in multiple regions, allowing user traffic to be routed to the closest, lowest-latency node while still being part of the resilient, coordinated system.
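
The sketch below is a deliberately simplified, provider-agnostic illustration of the failover idea only: a client works through an ordered list of regional cluster nodes and moves on when the nearest one stops responding. Real clusters rely on dedicated membership and health-check protocols, and the endpoints shown are hypothetical.

    # Simplified sketch of failover across cluster nodes in different regions.
    # Node endpoints are hypothetical; production clusters use proper membership/health-check protocols.
    import socket

    NODES = ["node-eu.example.com", "node-us.example.com", "node-ap.example.com"]  # ordered by proximity

    def healthy(host: str, port: int = 443) -> bool:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False

    def pick_node() -> str:
        for node in NODES:
            if healthy(node):
                return node                    # closest responsive node wins (lowest latency)
        raise RuntimeError("no cluster node reachable")

    print("serving from:", pick_node())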

Incorrect Options

A. Cloning
Cloning creates an identical copy of a virtual machine or system. While this is useful for scaling or creating backups, a simple clone is an independent system. Without clustering software to manage them, clones do not automatically provide failover or work together as a single, highly available service.

C. Hardware passthrough
This is a feature that gives a VM direct access to physical hardware devices (like GPUs or NICs) to improve performance. It does not provide any inherent high availability or multi-region resilience. In fact, it can complicate failover because the VM becomes tied to specific physical hardware.

D. Stand-alone container
A single, stand-alone container is the opposite of a highly available solution. It represents a single point of failure. If the underlying host fails, the container and its workload become unavailable. It has no built-in mechanism for cross-region resilience.

Reference
CompTIA Cloud+ (CV0-004) Exam Objectives: 3.2 - Given a scenario, implement appropriate disaster recovery and high availability techniques. This objective covers implementing clustering to provide high availability and fault tolerance for critical workloads, which is the definitive solution for meeting the requirements of resilience and availability across multiple regions.

Page 2 out of 26 Pages
CV0-004 Practice Test