CompTIA SK0-005 Practice Test
Prepare smarter and boost your chances of success with our CompTIA SK0-005 practice test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms show that individuals who use an SK0-005 practice exam are 40–50% more likely to pass on their first attempt.
Start practicing today and take the fast track to becoming CompTIA SK0-005 certified.
14,930 already prepared
Updated On: 3-Nov-2025
493 Questions
4.8/5.0
Topic 1: Exam Set A
A systems administrator needs to configure a new server and external storage for a new production application environment. Based on end-user specifications, the new solution needs to adhere to the following basic requirements:
1. The OS must be installed in a separate disk partition and must not be affected by a hard drive failure.
2. Application data IOPS performance is a must.
3. Data availability is a high priority, even in the case of multiple hard drive failures.
Which of the following are the BEST options to comply with the user requirements? (Choose three.)
A. Install the OS on a RAID 0 array.
B. Install the OS on a RAID 1 array.
C. Configure RAID 1 for the application data.
D. Configure RAID 5 for the application data.
E. Use SSD hard drives for the application data array.
F. Use SATA hard drives for the application data array.
G. Use a single JBOD for OS and application data.
B. Install the OS on a RAID 1 array.
D. Configure RAID 5 for the application data.
E. Use SSD hard drives for the application data array.
Explanation:
1. OS must be installed in a separate disk partition — unaffected by hard drive failure
The OS must be isolated from application data disks.
Even if application data drives fail, the OS partition must remain intact.
This rules out any configuration where OS and data share the same disks (e.g., JBOD or single array).
Best choice: RAID 1 (mirroring) for the OS disks — provides fault tolerance with two mirrored drives.
→ B. Install the OS on a RAID 1 array ✓
Why not A?
A. RAID 0 (striping) has no redundancy — one drive failure = total data loss, including the OS. Fails requirement #1.
2. Application data IOPS performance is a must
IOPS = Input/Output Operations Per Second
SSDs deliver 10x–100x higher IOPS than traditional HDDs.
For performance-critical applications, SSD is mandatory.
→ E. Use SSD hard drives for the application data array ✓
Why not F?
F. SATA hard drives (typically spinning HDDs) have low IOPS (~100–200 IOPS vs. 10,000+ for SSD). Fails requirement #2.
3. Data availability is high priority — even with multiple hard drive failures
Surviving multiple drive failures would ideally require RAID 6 or a nested RAID level with extra redundancy.
RAID 5: Tolerates 1 drive failure.
RAID 6: Tolerates 2 drive failures (but is not an option here).
Among the data-array choices offered, RAID 5 provides the best balance of fault tolerance, capacity, and performance.
→ D. Configure RAID 5 for the application data ✓
(Best available choice under the given options)
Why not C?
C. RAID 1 for application data → Only 2 drives, survives 1 failure, but:
Wastes 50% of raw capacity
Cannot stripe data across multiple spindles for higher throughput the way RAID 5 can
Does not scale beyond 2 drives
→ Less capacity-efficient and less scalable than RAID 5 for application data.
Why not G?
G. JBOD = Just a Bunch Of Disks → No redundancy.
One failure = data loss. Fails requirement #3 completely.
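To make these trade-offs concrete, here is a minimal Python sketch (not part of the exam question; drive counts and sizes are illustrative assumptions) comparing usable capacity and fault tolerance for the RAID levels discussed above:

def raid_summary(level, drives, size_tb):
    # Returns (usable capacity in TB, drive failures tolerated).
    if level == 0:    # striping: full capacity, no redundancy
        return drives * size_tb, 0
    if level == 1:    # mirroring: half capacity, survives 1 failure
        return drives * size_tb / 2, 1
    if level == 5:    # single parity: loses one drive of capacity
        return (drives - 1) * size_tb, 1
    if level == 6:    # double parity: loses two drives of capacity
        return (drives - 2) * size_tb, 2
    raise ValueError("unsupported RAID level")

for level, drives in [(0, 4), (1, 2), (5, 4), (6, 4)]:
    usable, tolerated = raid_summary(level, drives, 2)
    print(f"RAID {level} with {drives} x 2 TB drives: {usable:.0f} TB usable, survives {tolerated} failure(s)")

Running it shows why RAID 5 keeps more usable capacity than RAID 1 while still tolerating a single drive failure.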
Reference:
CompTIA Server+ SK0-005 Exam Objectives
1.3: Compare and contrast RAID levels
2.1: Configure storage (RAID, drive types, performance)
SNIA Storage Fundamentals:
SSDs = high IOPS; RAID 5 = parity + performance; RAID 1 = mirror
Microsoft & VMware Best Practices:
OS on RAID 1, application data on RAID 5/6 with SSDs
A server technician is installing a new server OS on legacy server hardware. Which of the following should the technician do FIRST to ensure the OS will work as intended?
A. Consult the HCL to ensure everything is supported.
B. Migrate the physical server to a virtual server.
C. Low-level format the hard drives to ensure there is no old data remaining.
D. Make sure the case and the fans are free from dust to ensure proper cooling.
Explanation
When installing a new operating system (OS) on legacy (older) hardware, the single most critical first step is to verify compatibility. Legacy hardware may not have drivers for a newer OS, or the new OS may have dropped support for older hardware components.
The HCL (Hardware Compatibility List) is a document provided by the OS vendor (like Microsoft or a Linux distribution) that lists the specific hardware models, components, and device drivers that have been tested and certified to work with that version of the operating system.
By consulting the HCL first, the technician can:
Confirm that the server's motherboard chipset, network card, storage controller, and other critical components are supported.
Identify the need for specific or legacy device drivers that may need to be downloaded before the installation.
Avoid a situation where the OS installation fails midway or the system is unstable after installation due to incompatible hardware.
Performing this check first prevents wasted time and effort on an installation attempt that is destined to fail.
Why the Other Options Are Incorrect
B. Migrate the physical server to a virtual server:
This is a solution for a different problem, such as server consolidation or improving disaster recovery. It is not a step in installing a new OS on existing physical hardware and would be done after the OS is stable, not before.
C. Low-level format the hard drives to ensure there is no old data remaining:
A standard high-level format during the OS installation process is sufficient to prepare the drives. A low-level format is an outdated practice for modern drives and is unnecessary for ensuring OS functionality. It is also a very time-consuming process that should not be done until compatibility is confirmed.
D. Make sure the case and the fans are free from dust to ensure proper cooling:
While this is an important general maintenance task for any server, it is not the first step for ensuring the OS works as intended. If the hardware is not on the HCL, a clean server will still fail to install or run the OS properly. This is a preparatory step that should be done, but only after compatibility has been verified.
Reference
This question falls under the Server Administration domain, specifically addressing the planning and preparation stages of an OS deployment. A core part of server installation and maintenance is ensuring compatibility between hardware and software to guarantee stability and performance.
Key Takeaway:
Always start a new OS installation, especially on legacy hardware, by verifying compatibility with the vendor's HCL. This proactive step saves significant time and troubleshooting effort by preventing an installation on unsupported hardware.
Which of the following is a type of replication in which all files are replicated, all the time?
A. Constant
B. Application consistent
C. Synthetic full
D. Full
Explanation:
In data replication, the goal is to ensure that files or data sets are copied and synchronized between systems — such as between a primary server and a backup or disaster recovery site.
Different replication types determine how often and what portion of the data gets replicated.
Constant Replication
Constant replication (also known as real-time or continuous replication) means that all files are replicated continuously, all the time, as changes occur.
Every file change on the source system is immediately mirrored to the destination system.
This provides the lowest recovery point objective (RPO) — meaning virtually no data loss if the primary system fails.
Commonly used in high-availability and disaster recovery solutions.
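To illustrate the concept, here is a minimal Python sketch of constant replication. It is an illustrative simplification, not a real product: commercial tools hook into the filesystem or block layer instead of polling, and the paths below are placeholders.

import os, shutil, time

SOURCE, DEST = "/srv/data", "/mnt/replica"   # placeholder paths

def replicate_changes(seen):
    # Copy every file that is new or has changed since the last pass.
    for root, _dirs, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            mtime = os.stat(src).st_mtime
            if seen.get(src) != mtime:
                rel = os.path.relpath(src, SOURCE)
                dst = os.path.join(DEST, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)       # copy data and metadata
                seen[src] = mtime

seen = {}
while True:
    replicate_changes(seen)
    time.sleep(1)   # near-real-time; true constant replication is event-driven

Because every change is propagated as soon as it is detected, the RPO approaches zero, which is exactly the property the question describes.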
Why the Other Options Are Incorrect:
B. Application Consistent
Refers to replication or backups that capture data in a state consistent with the running application (e.g., databases or mail servers), ensuring data integrity.
It’s about data state quality, not replication frequency.
Does not mean “all files replicated all the time.”
C. Synthetic Full
A backup term, not replication.
A synthetic full backup combines previous incremental backups with an existing full backup to create a new full backup without reading all source data again.
Not continuous replication.
D. Full
A full replication or backup copies all files at one time, but only during that scheduled operation — not continuously.
Not "all the time"; it’s a periodic process.
Reference:
CompTIA Server+ SK0-005 Exam Objective:
3.3 – Summarize replication and backup methods and concepts.
Vendor Documentation:
Microsoft, VMware, and Veeam refer to “continuous data replication” or “real-time replication” as a constant replication model.
Summary:
The type of replication where all files are replicated continuously and in real time is known as constant replication.
Which of the following describes the installation of an OS contained entirely within another OS installation?
A. Host
B. Bridge
C. Hypervisor
D. Guest
Explanation:
A guest operating system is an OS that is installed and runs entirely within another operating system (the host). This is achieved through virtualization, where a hypervisor creates an isolated virtual machine (VM) environment. The guest OS believes it has full control of hardware, but it is actually using virtualized resources provided by the host OS (Type 2 hypervisor) or directly by the hypervisor on bare metal (Type 1).
Example:
Host OS: Windows 11
Hypervisor: VMware Workstation, VirtualBox, or Parallels (Type 2)
Guest OS: Ubuntu Linux running inside a VM on Windows 11
The entire guest OS installation (bootloader, kernel, file system, applications) resides in virtual disk files (e.g., .vmdk, .vdi) within the host file system.
Why the other options are incorrect:
A. Host
The host is the base operating system that runs the virtualization software. It is not the OS installed within another — it is the one containing the guest.
B. Bridge
Bridge refers to a networking mode in virtualization (bridged networking), where the VM gets its own IP on the physical network. It is not a term for an OS installation.
C. Hypervisor
The hypervisor is the software or firmware layer (e.g., VMware ESXi, Microsoft Hyper-V, KVM) that enables virtualization. It manages VMs but is not the guest OS itself.
Reference:
CompTIA Server+ SK0-005 Exam Objectives – Virtualization:
“Explain the purpose of virtualization including… host vs. guest, hypervisor types…”
VMware Official Glossary:
“Guest operating system: An operating system that runs inside a virtual machine.”
Microsoft Hyper-V Documentation:
“A virtual machine runs a guest operating system isolated from the host.”
A system administrator has been alerted to a zero-day vulnerability that is impacting a service enabled on a server OS. Which of the following would work BEST to limit an attacker from exploiting this vulnerability?
A. Installing the latest patches
B. Closing open ports
C. Enabling antivirus protection
D. Enabling a NIDS
Explanation
A "zero-day vulnerability" is a software flaw that is unknown to the vendor or for which no official patch or fix is available. Since no patch exists, traditional remediation steps are not an option at the moment.
The question specifies the vulnerability is "impacting a service enabled on a server OS." This means the vulnerable code is running as a network service (e.g., a web server, file server, or database) that listens on one or more specific network ports.
The most direct and immediate way to prevent an attacker from exploiting this vulnerability over the network is to stop the service from being accessible. This is achieved by closing the network port(s) that the service uses.
How it works:
By closing the port (e.g., via a firewall rule), you block all external communication attempts to the vulnerable service. An attacker can no longer reach the service to deliver the malicious payload that triggers the vulnerability. This effectively contains the threat until a permanent patch from the vendor becomes available.
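As a quick illustration, here is a minimal Python sketch for verifying that the block is effective; the host name and port are placeholders, not details from the question.

import socket

def port_is_open(host, port, timeout=3):
    # Attempt a TCP connection; failure means the port is closed or filtered.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_is_open("server1.example.com", 8080):
    print("Port still reachable -- the vulnerable service remains exposed")
else:
    print("Port closed or filtered -- exploit traffic cannot reach the service")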
Why the Other Options Are Incorrect
A. Installing the latest patches:
By definition, a patch for a zero-day vulnerability does not yet exist. It is "zero-day" because the vendor has had zero days to create, test, and release a fix since it became publicly known. This is the ideal long-term solution, but it is impossible as a first step.
C. Enabling antivirus protection:
Antivirus (AV) software is primarily focused on file-based threats like viruses, worms, and Trojans. It relies on signature databases to detect known malware. A zero-day exploit may not have a known signature, and if the attack happens over the network without writing a malicious file (e.g., a memory-based exploit), the AV may never see it. It is a good layer of defense but is not the best or most direct way to block this specific network-based attack.
D. Enabling a NIDS:
A Network Intrusion Detection System (NIDS) monitors network traffic for suspicious activity and known attack patterns. While it might be able to alert you to an exploitation attempt, it does not block the attack in real-time. An Intrusion Prevention System (IPS) can block, but a NIDS only detects. Furthermore, like AV, it relies on signatures or heuristics, which may not yet be tuned to detect a brand-new zero-day exploit.
Reference
This question falls under the Security and Disaster Recovery domain, specifically addressing:
Server hardening techniques, such as closing unneeded ports and disabling unneeded services.
Incident response and mitigation procedures when no patch is yet available.
Key Takeaway:
In an incident response scenario involving an active, unpatched vulnerability, the immediate goal is to contain the threat. The most effective containment for a network service vulnerability is to isolate it from the network by disabling the service or blocking its port at the firewall. This is a fundamental principle of mitigating threats when a direct patch is not available.
A server administrator wants to run a performance monitor for optimal system utilization. Which of the following metrics can the administrator use for monitoring? (Choose two.)
A. Memory
B. Page file
C. Services
D. Application
E. CPU
F. Heartbeat
A. Memory
E. CPU
Explanation:
When a server administrator wants to run a performance monitor to ensure optimal system utilization, the key is to track hardware resource usage metrics that directly affect performance — primarily CPU and memory.
Monitoring these two metrics helps identify bottlenecks, resource exhaustion, and capacity planning needs.
A. Memory
Memory (RAM) usage is a critical performance metric.
Monitoring it helps identify:
Memory leaks or inefficient processes consuming excess RAM.
When the system starts paging to disk (indicating low memory availability).
Consistent high memory usage can slow system performance and signal the need for more RAM or application optimization.
E. CPU
CPU utilization shows how much processing power is being used at any given time.
High CPU usage over time can indicate:
Overloaded processors
Inefficient applications or background tasks
The need for CPU upgrades or load balancing
Keeping CPU usage within an optimal range ensures the server can handle workloads efficiently.
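As a simple illustration, here is a minimal monitoring sketch in Python, assuming the third-party psutil library is installed (pip install psutil); the alert thresholds are illustrative, not official guidance.

import psutil

CPU_ALERT, MEM_ALERT = 85.0, 90.0         # percent; illustrative thresholds

for _ in range(5):                         # take five samples, one per second
    cpu = psutil.cpu_percent(interval=1)   # average CPU use over the interval
    mem = psutil.virtual_memory().percent  # share of RAM currently in use
    print(f"CPU: {cpu:5.1f}%  Memory: {mem:5.1f}%")
    if cpu > CPU_ALERT or mem > MEM_ALERT:
        print("  -> utilization above threshold, investigate")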
Why the Other Options Are Incorrect:
B. Page File
Page file activity is a symptom, not a core metric.
It can indicate memory pressure but isn’t typically a primary performance metric on its own.
It’s secondary to monitoring actual memory usage.
C. Services
Services refer to background processes.
Monitoring which services are running is useful for troubleshooting, but not for performance utilization metrics.
Not a standard performance counter.
D. Application
You can monitor specific applications, but that’s not a system-wide performance metric.
Performance monitoring focuses on hardware resources (CPU, memory, disk, network).
Too narrow in scope.
F. Heartbeat
Heartbeat monitoring checks whether a system or cluster node is alive or reachable, not how well it performs.
Used for availability, not performance.
Reference:
CompTIA Server+ SK0-005 Exam Objective:
3.1 – Given a scenario, use appropriate monitoring tools and techniques.
Microsoft Performance Monitor (PerfMon):
Common counters include % Processor Time, Available MBytes, Memory Pages/sec, and Disk Queue Length.
Summary:
To monitor optimal system utilization, administrators should track CPU and Memory metrics — the two most critical indicators of overall server performance.
A server administrator is exporting Windows system files before patching and saving them to the following location:
\\server1\ITDept\
Which of the following is a storage protocol that the administrator is MOST likely using to save this data?
A. eSATA
B. FCoE
C. CIFS
D. SAS
Explanation
The administrator is saving Windows system files to a network location specified using the Universal Naming Convention (UNC) path: \\server1\ITDept\. This format is the standard way Windows operating systems identify and access shared folders over a local network.
C. CIFS (Common Internet File System) (Correct):
CIFS is the name often used interchangeably with the Server Message Block (SMB) protocol, particularly in older contexts. SMB/CIFS is the native, application-layer protocol that Microsoft Windows uses for network file and print sharing. When an administrator uses a UNC path like \\server1\sharename, the client computer automatically initiates a connection using the SMB/CIFS protocol to authenticate and transfer files over the network. Since this involves exporting Windows system files to a standard Windows network share, CIFS/SMB is the protocol being used.
Why the Other Options are Incorrect
A. eSATA (External Serial Advanced Technology Attachment):
This is a local hardware interface used to connect external storage enclosures directly to a server. It provides high-speed, direct block-level access but is strictly a physical connection and not a network protocol used for file sharing.
B. FCoE (Fibre Channel over Ethernet):
This is a specialized network storage protocol used primarily in Storage Area Networks (SANs). It encapsulates Fibre Channel traffic inside standard Ethernet frames to provide block-level storage access over a network. This is used for mounting a large block of storage (like a LUN) to a server, not for simple file-level sharing to a UNC path.
D. SAS (Serial Attached SCSI):
This is a local hardware interface designed for high-performance connectivity between controllers and internal storage devices (HDDs, SSDs) within a server chassis. Like eSATA, it is an internal physical connection, not a network protocol.
In summary, the use of a UNC path is the definitive clue that a network file-sharing protocol is in use, and for a Windows environment, that protocol is SMB/CIFS.
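As an illustration, a Python script run on a Windows client can write to the share directly; the source file below is a placeholder, and Windows resolves the UNC path over SMB/CIFS transparently.

import shutil

source_file = r"C:\exports\system-config.bak"   # placeholder file to export
destination = r"\\server1\ITDept"               # the share from the question

shutil.copy2(source_file, destination)          # Windows carries this over SMB/CIFS
print("Copied to", destination, "via SMB/CIFS")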
Reference
This question relates to the Networking and Storage domains, specifically addressing the method of accessing file services. Understanding the distinction between network file sharing protocols (like CIFS/SMB, NFS, or FTP) and local storage interfaces (like SATA, SAS, or eSATA) is a core competency for server administrators.
After rack-mounting a server, a technician must install four network cables and two power cables for the server. Which of the following is the MOST appropriate way to complete this task?
A. Wire the four network cables and the two power cables through the cable management arm using appropriate-length cables.
B. Run the four network cables up the left side of the rack to the top of the rack switch. Run the two power cables down the right side of the rack toward the UPS.
C. Use the longest cables possible to allow for adjustment of the server rail within the rack.
D. Install an Ethernet patch panel and a PDU to accommodate the network and power cables.
Explanation:
After rack-mounting a server on sliding rails, the cable management arm (CMA) is the standard, industry-best practice for routing both power and network cables. The CMA is a foldable, articulated arm that attaches to the rear of the server and extends/retracts as the server slides in and out for service. It prevents cable strain, pinching, or disconnection during maintenance.
Key reasons this is the MOST appropriate method:
Serviceability – Technicians can fully extend the server without unplugging any cables.
Cable protection – Prevents cables from being crushed, stretched, or snagged.
Neatness & airflow – Keeps cables organized and out of the hot/cold aisle.
Appropriate-length cables – Use exact-length cables (not too long, not too short) to avoid loops or tension.
Example: Measure from NIC/PSU → switch/PDU with server fully extended → add 10–20% slack.
Real-world example (Dell PowerEdge with CMA):
4 × Cat6 cables from onboard NICs → rear of CMA → switch
2 × C13 power cables from dual PSUs → rear of CMA → PDU
All cables secured with Velcro ties every 6–8 inches inside the CMA.
Why the other options are incorrect:
B. Run the four network cables up the left side… power down the right
This is a valid vertical cable routing strategy for fixed equipment, but not for a sliding server. Without a CMA, extending the server will pull, stretch, or disconnect cables — violating serviceability and safety.
C. Use the longest cables possible…
Long, looping cables cause:
Signal degradation (especially >100m Cat6)
Clutter & airflow blockage
Trip/snag hazards
Increased EMI
→ Directly contradicts TIA-942 guidance and datacenter cabling best practices.
D. Install an Ethernet patch panel and a PDU…
While patch panels and PDUs are standard in racks:
They are infrastructure components, not a cable routing method for the server itself.
This does not address how to connect the server safely while allowing rail movement.
Still requires a CMA or proper strain relief.
Reference:
CompTIA Server+ SK0-005 Exam Objectives – Domain 1.0 Server Hardware, Objective 1.2:
“Given a scenario, install and maintain server hardware components including… cable management…”
Dell EMC PowerEdge Installation Guide:
“Always route power and data cables through the cable management arm (CMA) to support full extension of the slide rails.”
HPE Server User Guide:
“Use the CMA to manage cables and prevent damage during service.”
TIA-942 Datacenter Standard:
Recommends CMAs for rail-mounted servers.
A technician is trying to determine the reason why a Linux server is not communicating on a network. The returned network configuration is as follows:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 127.0.0.1 netmask 255.255.0.0 broadcast 127.0.0.1
Which of the following BEST describes what is happening?
A. The server is configured to use DHCP on a network that has multiple scope options
B. The server is configured to use DHCP, but the DHCP server is sending an incorrect subnet mask
C. The server is configured to use DHCP on a network that does not have a DHCP server
D. The server is configured to use DHCP, but the DHCP server is sending an incorrect MTU setting
Explanation
The key to diagnosing this problem lies in the IP address assigned to the eth0 interface: inet 127.0.0.1.
The IP Address 127.0.0.1:
This is the loopback address. It is a special address that a computer uses to communicate with itself. It is not a routable address on a physical network.
How DHCP Works:
When a client configured for DHCP cannot find a DHCP server on the network to obtain an IP address, it automatically assigns itself an Automatic Private IP Addressing (APIPA) address. On Linux systems, this fallback mechanism often results in an address in the 169.254.0.0/16 range.
What Happened Here:
The output shows the interface is UP and RUNNING, meaning it's active. However, it has been assigned the loopback address 127.0.0.1. This is a classic symptom of a system that was configured for DHCP but failed to contact a DHCP server. As a fallback or due to a misconfiguration during the failure, it has reverted to a purely local configuration, essentially making the physical network adapter behave like a loopback interface. It has not received a valid, routable IP address for the network.
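This triage logic can be sketched in a few lines of Python using the standard ipaddress module; the sample addresses are illustrative.

import ipaddress

def classify(addr):
    ip = ipaddress.ip_address(addr)
    if ip.is_loopback:
        return "loopback (127.0.0.0/8): not routable; DHCP likely failed"
    if ip.is_link_local:
        return "APIPA/link-local (169.254.0.0/16): no DHCP server answered"
    return "routable address: a DHCP lease or static config is in place"

for addr in ["127.0.0.1", "169.254.10.20", "192.168.1.10"]:
    print(addr, "->", classify(addr))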
Let's analyze the other options based on this understanding:
A. The server is configured to use DHCP on a network that has multiple scope options:
If there were multiple DHCP scopes, the server would still receive a valid IP address from one of them (e.g., 192.168.1.10). It would not get 127.0.0.1.
B. The server is configured to use DHCP, but the DHCP server is sending an incorrect subnet mask:
Even with an incorrect subnet mask, the server would still receive a valid IP address from the DHCP server (just with the wrong netmask). It would not be 127.0.0.1.
D. The server is configured to use DHCP, but the DHCP server is sending an incorrect MTU setting:
An incorrect MTU could cause performance issues or packet fragmentation, but the server would still get a valid IP address from the DHCP server. The MTU shown in the output is the standard 1500, and the issue is the IP address, not the MTU.
Reference
This question spans the Server Administration and Troubleshooting domains, specifically addressing:
Configuring servers to use network infrastructure services such as DHCP.
Troubleshooting network connectivity issues from command-line output.
It tests the fundamental knowledge of the DHCP process and how to interpret the symptoms of its failure using command-line output.
Conclusion:
The evidence clearly points to a complete failure to obtain any lease from a DHCP server, resulting in the interface being assigned a non-routable, self-referential address. This is best described by the scenario where the server is looking for a DHCP server that does not exist on the network.
Which of the following server types would benefit MOST from the use of a load balancer?
A. DNS server
B. File server
C. DHCP server
D. Web server
Explanation:
A load balancer is used to distribute incoming network or application traffic across multiple servers to ensure availability, reliability, and scalability.
The type of server that benefits the most from a load balancer is one that handles a high number of simultaneous client requests — such as a web server.
D. Web Server – Correct Answer
Web servers (e.g., Apache, Nginx, IIS) often handle thousands or millions of client requests for websites or applications.
A load balancer:
Distributes traffic evenly among multiple web servers.
Ensures high availability — if one server fails, another takes over automatically.
Provides scalability by allowing new servers to be added easily.
Improves performance by preventing any single server from becoming overloaded.
Examples:
Cloud environments (AWS, Azure, GCP) use load balancers to handle website traffic spikes.
Enterprises use Layer 7 (application-level) load balancing to manage HTTPS requests.
Most suitable for load balancing.
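To make the distribution mechanism concrete, here is a minimal Python sketch of the round-robin algorithm a basic load balancer applies to a web server pool; the server names are placeholders, and real load balancers add health checks, session persistence, and TLS termination.

import itertools

web_pool = ["web01", "web02", "web03"]    # placeholder back-end web servers
next_server = itertools.cycle(web_pool)   # endless round-robin rotation

for request_id in range(7):               # seven simulated client requests
    server = next(next_server)
    print(f"Request {request_id} -> {server}")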
Why the Other Options Are Incorrect:
A. DNS Server
DNS servers already use round-robin DNS or anycast routing to balance query loads.
These mechanisms are simpler and built into DNS itself — traditional load balancers aren’t typically needed.
Only benefits slightly, not the most.
B. File Server
File servers manage centralized file storage via SMB, NFS, etc.
They rely more on shared storage systems and redundancy, not load balancing.
Synchronizing multiple file servers introduces complexity.
Not ideal for load balancing.
C. DHCP Server
DHCP servers assign IP addresses dynamically.
Redundancy is provided via failover or split-scope configurations, not load balancing.
They process relatively low traffic volume compared to web servers.
Load balancing is unnecessary.
Reference:
CompTIA Server+ SK0-005 Exam Objective:
3.2 – Summarize server roles and their purpose.
3.4 – Explain high availability and load balancing concepts.
NIST SP 800-44 v2 (Web Security Guidelines):
Recommends load balancing for high-traffic web servers to improve performance and fault tolerance.
Summary:
Among all listed options, web servers handle the highest client load and benefit most directly from load balancing to ensure uptime and responsiveness.