CompTIA SK0-005 Practice Test

Prepare smarter and boost your chances of success with our CompTIA SK0-005 Practice Test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms show that individuals who use SK0-005 practice exams are 40–50% more likely to pass on their first attempt.

Start practicing today and take the fast track to becoming CompTIA SK0-005 certified.

14,930 already prepared
Updated on: 3-Nov-2025
493 Questions
4.8/5.0

Page 10 out of 50 Pages

Topic 1: Exam Set A

A technician wants to limit disk usage on a server. Which of the following should the technician implement?

A. Formatting

B. Compression

C. Disk quotas

D. Partitioning

C.   Disk quotas

Explanation:

The technician needs to restrict how much disk space users or groups can consume on the server. Disk quotas are the feature built for exactly that purpose. They let an administrator set a maximum amount of storage per user (or group) on a given volume, with options for hard limits (which block further writes once reached) or soft limits (which only warn).

In Windows Server, quotas are configured via File Server Resource Manager (FSRM) or directly on NTFS volumes.
In Linux, the quota package and tools like edquota or setquota enforce limits on filesystems such as ext4 or XFS.

This directly meets the requirement of limiting disk usage.
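
For example, on a Linux host, enabling and setting a per-user quota on an ext4 filesystem might look like the sketch below (the mount point /data, the user alice, and the limit values are illustrative; the quota tools must already be installed and usrquota added to the mount options in /etc/fstab):

mount -o remount /data                      # re-mount so the usrquota option takes effect
quotacheck -cug /data                       # build the quota database files on the filesystem
quotaon /data                               # turn quota enforcement on
setquota -u alice 512000 614400 0 0 /data   # ~500 MB soft / ~600 MB hard limit (values in 1 KiB blocks)
quota -u alice                              # verify the limits and current usage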

Why the other options are incorrect:

A. Formatting
Formatting creates a filesystem and prepares the disk for use, but it does not impose any usage restrictions on users.

B. Compression
Compression reduces the size of files to save space overall, but it does not prevent individual users from consuming more than their fair share of storage.

D. Partitioning
Partitioning divides a physical disk into separate logical sections for organization or isolation, but it does not control or limit per-user consumption within a partition.

Reference:
CompTIA Server+ (SK0-005) Exam Objectives
1.4 – Given a scenario, perform basic server configuration tasks
→ Covers storage management, including the use of disk quotas.

Microsoft Learn:
Quota Management

Linux:
man quota, man edquota – standard tools for user and group quota management.

Bottom Line:
To limit disk usage per user, implement disk quotas.

A server technician has received reports of database update errors. The technician checks the server logs and determines the database is experiencing synchronization errors. To attempt to correct the errors, the technician should FIRST ensure:

A. the correct firewall zone is active

B. the latest firmware was applied

C. NTP is running on the database system

D. the correct dependencies are installed

C.   NTP is running on the database system

Explanation

In a distributed database system or any system where multiple servers must coordinate database updates, time synchronization is critical.

Here's why:

Why C is Correct:
Database synchronization often relies on timestamps to determine the order of transactions and resolve conflicts. If the clocks on the different database servers are out of sync, one server may record a transaction with an earlier timestamp than a transaction it has already received from another server. This creates a logical conflict, leading to synchronization errors. The Network Time Protocol (NTP) ensures all servers in the cluster use a single, consistent source of time, which is a fundamental prerequisite for successful synchronization. Therefore, this is the most logical and impactful first step.
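
A quick first check on a Linux database node might look like the following (service and client names vary by distribution; chrony is assumed here):

timedatectl                    # "System clock synchronized: yes/no" and the active NTP service
systemctl status chronyd       # confirm the NTP daemon is running (ntpd on older systems)
chronyc tracking               # current offset from the reference clock and stratum
chronyc sources -v             # list the configured time sources and their reachability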

Why A is Incorrect:
While an incorrect firewall zone could potentially block communication between database nodes, the symptom described is specifically "synchronization errors," not a complete communication failure. The logs would typically indicate a connection timeout or refusal if a firewall were the primary issue. Checking NTP is a more targeted first step for this specific error.

Why B is Incorrect:
Applying the latest firmware is a general maintenance task and is unlikely to be the direct cause of or solution for a sudden onset of synchronization errors. Firmware updates are also a disruptive process that should be planned and tested; they are not a first step in emergency troubleshooting for a software-level synchronization issue.

Why D is Incorrect:
If the correct dependencies were not installed, the database service would likely fail to start altogether, rather than run with intermittent synchronization errors. This problem would have been apparent from the initial setup and not something that would start being reported suddenly unless a new software patch was applied (which is not indicated in the scenario).

Reference
This question falls under the CompTIA Server+ (SK0-005) exam objective 4.2 Given a scenario, use the appropriate hardware tools and software tools to maintain the server. More specifically, it relates to troubleshooting methodology where you must identify the problem (synchronization errors) and establish a theory of probable cause (time drift being a common cause for sync issues). The best practice is to always check the most fundamental requirements (like time sync) before moving to more complex potential causes.

Which of the following is an example of load balancing?

A. Round robin

B. Active-active

C. Active-passive

D. Failover

A.   Round robin

Explanation:

Load balancing is a method used in server environments to distribute workloads evenly across multiple systems or network links to ensure no single server becomes a bottleneck. The goal is to maximize performance, improve redundancy, and increase fault tolerance by efficiently sharing the traffic load.

Let’s review each option carefully:

A. Round robin

Round robin is one of the most common load balancing algorithms used in both DNS and network load balancers.
In this method, incoming requests are distributed to servers sequentially — for example, Server1 receives the first request, Server2 the next, Server3 the next, and then the cycle repeats.
This ensures a balanced workload across all servers, promoting high availability and resource efficiency.
Round robin is simple to implement and works best when all servers have similar capacity and performance levels.
It can be used in various environments such as web farms, application clusters, and database load distribution.
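
As a simple illustration, DNS round robin can be observed by querying a name that has multiple A records (the hostname and addresses below are hypothetical); many DNS servers rotate the order of the returned records between responses, so successive clients land on different servers:

dig +short www.app.example.internal
# example output (hypothetical):
# 10.0.10.11
# 10.0.10.12
# 10.0.10.13
# Running the query again typically returns the same records in a different order
# (for example, BIND configured with "rrset-order cyclic"), which is what spreads the load.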

B. Active-active

In an active-active configuration, multiple servers or nodes run simultaneously and share the workload.
While this setup can involve load balancing, the term “active-active” itself refers to a redundancy configuration type rather than a specific load balancing method.
It describes the operational state of servers rather than the algorithm used to distribute traffic.

C. Active-passive

Active-passive configurations are primarily used for failover and redundancy.
One server (active) handles requests while the other (passive) remains idle until the active server fails.
This setup ensures high availability but does not balance load between systems since only one node is active at a time.

D. Failover

Failover refers to the process of transferring workloads to a backup system if the primary system fails.
It is a fault-tolerance mechanism, not a method of distributing or balancing load under normal operation.
Failover systems are designed for continuity rather than equal load distribution.

Reference:
CompTIA Server+ (SK0-005) Official Exam Objectives, Domain 2.2:
“Summarize the concepts of high availability and load balancing.”

CompTIA Server+ Official Study Guide (Exam SK0-005), Chapter 8:
“High Availability and Load Balancing Techniques.”

Discusses algorithms such as round robin, least connections, and weighted balancing as core load balancing techniques.

NIST SP 800-215, “Guide to Information Technology Security Services”, provides context on redundancy, failover, and load balancing distinctions.

Summary:
Round robin is a clear example of a load balancing method because it actively distributes incoming requests among multiple servers in a predictable rotation. While active-active, active-passive, and failover focus on redundancy and fault tolerance, round robin directly addresses workload distribution, which is the core principle of load balancing.

A technician is unable to access a server’s package repository internally or externally. Which of the following are the MOST likely reasons? (Choose two.)

A. The server has an architecture mismatch

B. The system time is not synchronized

C. The technician does not have sufficient privileges

D. The external firewall is blocking access

E. The default gateway is incorrect

F. The local system log file is full

D.   The external firewall is blocking access
E.   The default gateway is incorrect

Explanation

Here is a breakdown of why these two options are the most likely causes for both internal and external access failure to a package repository:

D. The external firewall is blocking access:

External Access: This is a classic reason for external connection failure. A package repository typically resides on a server that needs to be accessible over the internet (or across network zones). If a firewall (network appliance) is configured to deny traffic on the ports used by the repository protocol (e.g., HTTP/HTTPS on port 80/443, or a custom port for the specific package manager like Yum, APT, etc.), then the connection will fail.

Internal Access: While the term "external firewall" usually refers to the perimeter firewall, sometimes an internal network may have internal firewalls (segmentation/zone firewalls) that could be misconfigured, or the issue could be a host-based firewall (running on the repository server itself) that is blocking traffic from both internal and external sources. A general firewall misconfiguration is a very common cause of connectivity failure.

E. The default gateway is incorrect:

External Access: The default gateway is the path out of the local network segment. If the server (or the client trying to reach it) has an incorrect default gateway, it cannot send traffic to any destination outside of its immediate local network segment, which includes reaching any external (internet) or remote internal package repository.

Internal Access: In larger networks, even communication with an "internal" repository that resides on a different subnet (which is common) requires the use of the default gateway to route the packets. An incorrect gateway would prevent the server from reaching repositories on other internal subnets.
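
A first pass at separating a routing problem from a firewall problem might look like this on the affected server (the repository hostname and gateway address are illustrative):

ip route show                              # the "default via ..." line should list the correct gateway
ping -c 3 192.0.2.1                        # substitute the gateway address from the routing table
curl -I https://repo.example.internal/     # attempt to reach the repository itself
# "Network is unreachable" or a wrong/missing default route points to a gateway problem;
# a clean route but a connection timeout or reset points to a firewall blocking the port.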

Why the Other Options are Less Likely

A. The server has an architecture mismatch:
This would cause the installation of a package to fail (e.g., trying to install an x86-64 package on an ARM server), but it would not prevent the server from connecting to and accessing the package repository's index/metadata files.

B. The system time is not synchronized:
Time synchronization issues can cause problems with SSL/TLS certificates (if the package repository uses HTTPS), leading to a connection refusal. While possible for external access, it's generally a secondary issue compared to a fundamental network routing or blocking problem, and it's less likely to stop all access (internal and external) unless the time is wildly wrong.

C. The technician does not have sufficient privileges:
This would prevent the technician from managing packages or changing system configurations, but it would not typically prevent the server itself (or another client) from reaching the package repository to download files. Package access itself usually relies on network connectivity, not user privileges.

F. The local system log file is full:
A full log file can prevent new entries from being written, but it has no direct mechanism to interfere with a server's ability to initiate a network connection and download package data.

Reference
This question aligns with the CompTIA Server+ (SK0-005) exam objectives:
4.2 Given a scenario, use the appropriate hardware tools and software tools to maintain the server. (Using tools to check network configuration and connectivity).
4.3 Given a scenario, troubleshoot common hardware, storage, network, and security issues. This is a classic network troubleshooting scenario where you must diagnose problems with connectivity (default gateway) and security policies (firewall). The methodology is to start at the network layer (checking IP configuration and routing) before moving to higher-level issues.

An administrator is investigating a physical server that will not boot into the OS. The server has three hard drives configured in a RAID 5 array. The server passes POST, but the OS does not load. The administrator verifies the CPU and RAM are both seated correctly and checks the dual power supplies. The administrator then verifies all the BIOS settings are correct and connects a bootable USB drive to the server, and the OS loads correctly. Which of the following is causing the issue?

A. The page file is too small.

B. The CPU has failed.

C. There are multiple failed hard drives.

D. There are mismatched RAM modules.

E. RAID 5 requires four drives

C.   There are multiple failed hard drives.

Explanation:

The server passes POST and boots successfully from a USB drive, which proves:

CPU, RAM, motherboard, and power supplies are functional.
BIOS/UEFI settings are correct.
The boot process itself works when given a valid boot device.

However, the internal OS does not load from the RAID 5 array.
RAID 5 can tolerate only one drive failure. If two or more drives in the three-drive array have failed:

The array enters a degraded or failed state.
The RAID controller may still detect the array during POST (so no beep codes or obvious POST errors appear), but it cannot reconstruct the data needed to boot.
The boot loader (e.g., GRUB, Windows Boot Manager) is missing or corrupted → OS fails to load.
Booting from USB bypasses the failed RAID array entirely, which explains why that works.
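
If this were Linux software RAID rather than a hardware controller, the array state could be confirmed from the USB-booted environment with something like the following (the md device name is illustrative; a hardware RAID array would instead be checked from the controller's firmware utility or vendor CLI):

cat /proc/mdstat           # failed members are flagged with (F); missing devices show as "_" in the [UU_] map
mdadm --detail /dev/md0    # check "State :" (clean vs. degraded vs. failed) and "Failed Devices :"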

Why the other options are incorrect:

A. The page file is too small.
A small page file causes performance issues or crashes after the OS loads — not a complete failure to boot into the OS.

B. The CPU has failed.
A failed CPU would prevent POST from completing. Here, POST passes and USB boot works → CPU is fine.

D. There are mismatched RAM modules.
Mismatched RAM may cause instability or POST errors (beeps, halts).
USB boot success confirms RAM is seated and functional enough for boot.

E. RAID 5 requires four drives.
False. RAID 5 requires a minimum of three drives. It works with 3, 4, 5, or more.
This is a common myth — RAID 5 is valid with three drives.

Reference: CompTIA Server+ (SK0-005) Exam Objectives
2.3 – Given a scenario, install and maintain server hardware components → Includes understanding RAID levels and failure tolerance.
RAID 5: usable capacity of (n-1) drives (one drive's worth of capacity used for parity); tolerates one drive failure.

RAID Failure Behavior:
1 failed drive → degraded mode (still boots).
2 failed drives → array failure, data inaccessible.

Bottom Line:
The RAID 5 array has two or more failed drives, making the boot volume inaccessible.

A server administrator is deploying a new server that has two hard drives on which to install the OS. Which of the following RAID configurations should be used to provide redundancy for the OS?

A. RAID 0

B. RAID 1

C. RAID 5

D. RAID 6

B.   RAID 1

Explanation

The goal is to install the operating system on two hard drives and ensure redundancy. Redundancy means that if one drive fails, the system can continue to operate without data loss or downtime.

Why B is Correct (RAID 1): RAID 1 is known as disk mirroring. In this configuration, data is written identically to both drives simultaneously, creating a perfect copy (a "mirror"). If one of the two drives fails, the other drive continues to function with a complete copy of all data, including the OS. This provides excellent redundancy and is a very common, simple, and effective configuration for boot drives.
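
For a sense of what this looks like in practice, a two-drive mirror created with Linux software RAID is sketched below (device names are illustrative; on many servers the OS mirror is instead defined in the hardware RAID controller or during OS installation):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # build the RAID 1 mirror
cat /proc/mdstat                                                          # watch the initial sync progress
mdadm --detail /dev/md0                                                   # "Raid Level : raid1" with two active devices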

Why the Other Options are Incorrect:

A. RAID 0 (Striping):
This configuration splits data across both drives to improve performance. However, it offers no redundancy. If either drive fails, all data is lost, and the server will crash. This is the opposite of what is required.

C. RAID 5 (Striping with Distributed Parity):
RAID 5 requires a minimum of three drives. It provides redundancy by using parity data distributed across all drives. Since the scenario only provides two drives, RAID 5 is not an option.

D. RAID 6 (Striping with Double Distributed Parity):
RAID 6 requires a minimum of four drives. It is similar to RAID 5 but can withstand the failure of two drives. It is also not possible with only two drives.

Reference
This question directly tests your knowledge of fundamental RAID levels, which is a key part of the CompTIA Server+ (SK0-005) exam objective:
3.3 Given a scenario, configure and maintain server components and storage.

Understanding the minimum drive requirements and the trade-offs between performance, capacity, and redundancy for different RAID levels (0, 1, 5, 6, 10) is essential for the exam and for real-world server administration. For a two-drive OS deployment requiring redundancy, RAID 1 is the standard and correct choice.

A technician is connecting a server’s secondary NIC to a separate network. The technician connects the cable to the switch but then does not see any link lights on the NIC. The technician confirms there is nothing wrong on the network or with the physical connection. Which of the following should the technician perform NEXT?

A. Restart the server

B. Configure the network on the server

C. Enable the port on the server

D. Check the DHCP configuration

C.   Enable the port on the server

Explanation:

When a network interface card (NIC) is connected to a switch and no link lights appear, but the technician has already verified that the network and physical cabling are functioning properly, the issue most likely resides on the server side.
In many servers, especially those with multiple NICs, secondary NICs may be disabled by default in the operating system or in the server’s BIOS/UEFI. If the NIC is disabled, it will not initialize the link, and no link lights will appear even if the cable and switch port are fine.
Therefore, the next step is to check the NIC status in the server’s operating system or BIOS and enable the port if it is disabled. Once enabled, the link lights should come on, and the interface can then be configured with the proper network settings.
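
On a Linux server, for instance, the check and fix might look like this (the interface name eth1 is illustrative; on Windows the equivalent is enabling the adapter in Device Manager or with PowerShell's Enable-NetAdapter):

ip link show eth1      # flags without UP (and no NO-CARRIER) indicate the port is administratively disabled
ip link set eth1 up    # enable the interface
ethtool eth1           # "Link detected: yes" confirms the physical link is now established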

Option Analysis:

A. Restart the server – Incorrect
Restarting may reinitialize hardware, but it does not solve the underlying issue if the NIC is disabled. The correct troubleshooting step is to confirm the NIC’s status first.

B. Configure the network on the server – Incorrect
Network configuration (IP address, DNS, gateway, etc.) comes after the NIC is enabled and the link is active. Since there are no link lights, configuration changes will not help yet.

C. Enable the port on the server – Correct
This is the logical next step. The NIC might be disabled either in the OS (e.g., Windows Device Manager, Linux ip link set eth1 up) or at the BIOS/firmware level. Enabling it will allow link detection and communication.

D. Check the DHCP configuration – Incorrect
DHCP issues occur after the NIC is active and the link is established. Since there are no link lights, the NIC isn’t even communicating with the network yet, so DHCP is irrelevant at this stage.

Reference:
CompTIA Server+ (SK0-005) Exam Objectives, Domain 1.1: Install physical hardware and configure components.

CompTIA Server+ Official Study Guide (Exam SK0-005), Chapter 5: Network Configuration and Troubleshooting.

CompTIA Troubleshooting Methodology: Verify physical connections → Check device status → Enable/activate hardware → Configure settings → Test connectivity.

Summary:
Since the network and cabling have already been verified as functional, the absence of link lights indicates that the NIC itself is disabled. The technician should enable the port on the server before proceeding with any further configuration steps.

A server administrator needs to harden a server by only allowing secure traffic and DNS inquiries. A port scan reports the following ports are open. Which of the following ports should remain open? (Choose three.)

A. 21

B. 22

C. 23

D. 53

E. 443

F. 636

B.   22
D.   53
E.   443

Explanation

The requirement is to only allow secure traffic and DNS inquiries. We must identify which ports are associated with secure protocols and which are used for DNS.

Why B is Correct (Port 22):
Port 22 is used by the SSH (Secure Shell) protocol. SSH provides a secure, encrypted channel for remote command-line login and command execution. This qualifies as "secure traffic."

Why D is Correct (Port 53):
Port 53 is used by the DNS (Domain Name System) protocol. This port is essential for handling DNS inquiries, which translate domain names (like google.com) into IP addresses. While DNS can be secured (e.g., with DNSSEC), the protocol itself operates on port 53, and the requirement explicitly allows for "DNS inquiries."

Why E is Correct (Port 443):
Port 443 is used by HTTPS (HTTP over TLS/SSL). This is the primary protocol for secure web traffic and is the very definition of "secure traffic" for web services.

Why the Other Options are Incorrect and Should Be Closed:

A. Port 21 (FTP):
This is the control port for File Transfer Protocol (FTP). FTP sends data, including usernames and passwords, in clear text. It is not secure and should be disabled in favor of a secure alternative like SFTP (which runs over SSH on port 22) or FTPS.

C. Port 23 (Telnet):
This is used by the Telnet protocol. Like FTP, Telnet transmits all data, including login credentials, in clear text. It is highly insecure and must be disabled. SSH (port 22) is the secure replacement for Telnet.

F. Port 636 (LDAPS):
This is used for LDAP over SSL (LDAPS), which is the secure version of the Lightweight Directory Access Protocol. While this is a secure protocol, it is not mentioned in the requirements. The administrator only needs to allow "secure traffic" (a general term) and "DNS inquiries." Since LDAP is not required for the server's stated function, its port should be closed as part of the hardening process to reduce the attack surface. If the server were an LDAP directory server, then this port would need to be open, but that is not stated here.
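
Putting that together, one way to enforce this on a Linux server running firewalld is sketched below (the service names are firewalld's predefined ones; a ufw or raw iptables setup would differ):

ss -tuln                                        # confirm which ports are actually listening
firewall-cmd --permanent --add-service=ssh      # allow 22/tcp (secure remote administration)
firewall-cmd --permanent --add-service=dns      # allow 53/tcp and 53/udp (DNS inquiries)
firewall-cmd --permanent --add-service=https    # allow 443/tcp (secure web traffic)
firewall-cmd --permanent --remove-service=ftp   # drop FTP if it was previously allowed
firewall-cmd --reload
systemctl disable --now telnet.socket vsftpd    # also stop the insecure services themselves, if installed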

Reference
This question falls under the CompTIA Server+ (SK0-005) exam objective:

2.2 Given a scenario, apply server hardening methods.

A key part of server hardening is knowing the function and security posture of common network ports and disabling any non-essential services. This practice, known as reducing the attack surface, is a fundamental security principle.

An administrator is troubleshooting a RAID issue in a failed server. The server reported a drive failure, and then it crashed and would no longer boot. There are two arrays on the failed server: a two-drive RAID 0 set for the OS, and an eight-drive RAID 10 set for data. Which of the following failure scenarios MOST likely occurred?

A. A drive failed in the OS array.

B. A drive failed and then recovered in the data array.

C. A drive failed in both of the arrays.

D. A drive failed in the data array.

A.   A drive failed in the OS array.

Explanation:

In this scenario, the key detail is that the server reported a drive failure and then crashed, becoming unbootable.

There are two separate RAID arrays:

A two-drive RAID 0 set for the operating system (OS)
An eight-drive RAID 10 set for data

Let’s analyze what happens in each case:

RAID 0 (OS array)
RAID 0 (striping) offers no redundancy — it improves performance by splitting data evenly across multiple drives but does not tolerate any disk failures.
If any single drive in a RAID 0 array fails, all data in that array is lost, since parts of every file are distributed between both drives.
Since the OS is installed on this array, the system will fail to boot after a drive failure.

RAID 10 (data array)
RAID 10 (striped mirrors) combines RAID 1 and RAID 0, offering both redundancy and performance.
It can tolerate one drive failure per mirrored pair (so at least one failure overall) without data loss or a system crash.
A failure here would affect data availability but not necessarily cause the system to fail to boot, since the OS resides on a separate RAID 0 array.
Therefore, the most likely cause of the server becoming unbootable after a drive failure is a drive failure in the OS RAID 0 array.
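
For comparison, the two arrays described above could be expressed in Linux software RAID terms as follows (device names are illustrative), which makes the difference in fault tolerance explicit:

mdadm --create /dev/md0 --level=0  --raid-devices=2 /dev/sdc /dev/sdd    # OS array: striping only, zero redundancy
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[e-l]         # data array: striped mirrors (sde-sdl), survives one failure per pair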

Option Analysis:

A. A drive failed in the OS array – Correct
The RAID 0 array hosting the OS has no fault tolerance, so a single drive failure makes the OS unbootable.

B. A drive failed and then recovered in the data array – Incorrect
A recovered drive would not cause a server crash, especially since the OS is hosted elsewhere.

C. A drive failed in both of the arrays – Incorrect
Possible, but less likely. The problem description points to a single drive failure causing a boot issue, consistent with the RAID 0 array failure.

D. A drive failed in the data array – Incorrect
The data array (RAID 10) can tolerate a single drive failure, and the OS should still boot normally.

Reference:
CompTIA Server+ (SK0-005) Exam Objectives, Domain 1.3: “Summarize storage solutions, concepts, and technologies.”
CompTIA Server+ Official Study Guide (Exam SK0-005), Chapter 4: RAID Types and Troubleshooting RAID Failures.
RAID Level Characteristics (CompTIA):

RAID 0: No redundancy, high performance, fails completely with one drive loss.
RAID 10: Redundant, can survive one or more drive failures depending on which disks fail.

Summary:
Because RAID 0 offers no fault tolerance, a single drive failure in the OS RAID 0 array would immediately render the array — and therefore the operating system — completely inaccessible, causing the server to crash and fail to boot.

A user cannot save large files to a directory on a Linux server that was accepting smaller files a few minutes ago. Which of the following commands should a technician use to identify the issue?

A. pvdisplay

B. mount

C. df -h

D. fdisk -l

C.   df -h

Explanation

The symptom is that a user cannot save large files to a directory on a Linux server, even though smaller files were just accepted. The most common cause for this behavior is that the filesystem has run out of available space or inodes.

Why C is Correct:

The df -h command is used to report filesystem disk space usage.
df stands for "disk free."
The -h flag presents the output in a "human-readable" format (using MB, GB, etc.), making it easy to quickly see how much space is used and available on each mounted filesystem.
This command will immediately show if the partition where the user's directory is located is full or nearly full, which would prevent writing large files while potentially still allowing very small files.
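
For instance (the mount point is illustrative):

df -h /srv/data        # check the "Use%" and "Avail" columns for the filesystem holding the directory
df -i /srv/data        # if space looks free but writes still fail, check for inode exhaustion
du -sh /srv/data/*     # identify which subdirectories are consuming the space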

Why the Other Options are Incorrect:

A. pvdisplay:
This command is used to display information about Physical Volumes in an LVM (Logical Volume Manager) setup. While LVM is a common way to manage disk space, this command is too low-level and specific for the initial diagnosis. It shows LVM metadata, not the simple, direct "free space" information needed to solve this problem. df -h is the universal first step.

B. mount:
This command, without any options, shows a list of currently mounted filesystems. While it tells you what is mounted and where, it provides no information about how much disk space is free or used. It is not the right tool for diagnosing a space issue.

D. fdisk -l:
This command lists the partition table for all disks. It shows how the disk is partitioned (e.g., sizes of partitions like /dev/sda1, /dev/sda2), but it does not show how much of each partition's allocated space is actually being used by the filesystem. It tells you the total size of the partition, not the available free space within it.

Reference
This question aligns with the CompTIA Server+ (SK0-005) exam objective:

4.2 Given a scenario, use the appropriate hardware tools and software tools to maintain the server.

A critical part of server administration is using the correct command-line tools for troubleshooting. The df -h command is the standard and first tool any administrator should use when encountering "no space left on device" errors or the inability to write files. For a more in-depth look, if df -h shows free space, the next step would be to check for inode exhaustion using df -i.
