CompTIA SK0-005 Practice Test

Prepare smarter and boost your chances of success with our CompTIA SK0-005 practice test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms show that individuals who use an SK0-005 practice exam are 40–50% more likely to pass on their first attempt.

Start practicing today and take the fast track to becoming CompTIA SK0-005 certified.

Updated on: 3-Nov-2025
493 Questions



Topic 1: Exam Set A

Which of the following actions should a server administrator take once a new backup scheme has been configured?

A. Overwrite the backups

B. Clone the configuration

C. Run a restore test

D. Check the media integrity

C.   Run a restore test

Explanation:

After configuring a new backup scheme, the most important next step is to run a restore test to verify that the backup process works correctly and that the data can actually be recovered. A backup is only as good as its ability to restore data when needed.

Performing a restore test allows the administrator to confirm that:

The backup files are complete and uncorrupted.
The restoration process functions as intended.
The correct data sets, configurations, and permissions are recoverable.
The organization meets its Recovery Time Objective (RTO) and Recovery Point Objective (RPO) targets.
This validation step helps identify configuration issues, missing files, or media problems before an actual disaster occurs. It also ensures compliance with data protection policies and business continuity requirements.
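
For example, on a Linux server a basic restore test might look like the following (a minimal sketch; the backup file name, paths, and tar-based format are illustrative assumptions rather than a prescribed procedure):

# Restore the backup into a temporary location, never over the live data
mkdir /tmp/restore-test
tar -xzf /backups/backup-2025-11-03.tar.gz -C /tmp/restore-test

# Spot-check that the expected files and their contents match the source
diff -r /srv/data /tmp/restore-test/srv/data
sha256sum /srv/data/critical.db /tmp/restore-test/srv/data/critical.db

# Remove the test copy once the restore has been verified
rm -rf /tmp/restore-test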

Why the other options are incorrect:

A. Overwrite the backups:
Overwriting backups immediately after configuring a new scheme could destroy previous valid backup data before confirming the new one works properly.

B. Clone the configuration:
Cloning the configuration might be useful for redundancy but doesn’t verify whether the new backup setup is functioning correctly.

D. Check the media integrity:
Checking media integrity ensures that storage media (like tapes or disks) are usable, but it doesn’t confirm that backup data can be restored successfully.

References:
CompTIA Server+ (SK0-005) Exam Objectives, Domain 3.2 – “Given a scenario, implement appropriate data backup and restoration methods.”
CompTIA Server+ Study Guide (Exam SK0-005) – Section on Backup Validation and Restore Testing.
NIST SP 800-34 Rev. 1 – Contingency Planning Guide for Federal Information Systems, which emphasizes periodic restore testing to validate backup effectiveness.

A server technician has been asked to upload a few files from the internal web server to the internal FTP server. The technician logs in to the web server using PuTTY, but the connection to the FTP server fails. However, the FTP connection from the technician’s workstation is successful. To troubleshoot the issue, the technician executes the following command on both the web server and the workstation:

ping ftp.acme.local

The IP address in the command output is different on each machine. Which of the following is the MOST likely reason for the connection failure?

A. A misconfigured firewall

B. A misconfigured hosts.deny file

C. A misconfigured hosts file

D. A misconfigured hosts.allow file

C.   A misconfigured hosts file

Explanation

The key piece of information in the question is the result of the ping command:

"The IP address in the command output is different on each machine."

Name Resolution Priority:
When a machine attempts to connect to a service using a hostname (like ftp.acme.local), it must first resolve that name to an IP address. The standard order of resolution on most operating systems (configured via /etc/nsswitch.conf on Linux/Unix) is to check the hosts file first, and then proceed to a DNS server.

Hosts File Override:
The hosts file (/etc/hosts on Linux/Unix or C:\Windows\System32\drivers\etc\hosts on Windows) provides a static, local mapping of IP addresses to hostnames. Any entry in this file overrides what a DNS server might provide.

The Evidence:

The hostname (ftp.acme.local) is the same on both machines.
The resulting IP address from the ping (which uses the hostname) is different on the web server and the technician's workstation.
This indicates that one or both machines are not relying solely on the central DNS server for that hostname.
The web server's hosts file is likely configured with an incorrect, old, or internal-only IP address for ftp.acme.local, causing the connection to fail. The workstation's hosts file is either correct or empty, allowing it to successfully use the correct IP from DNS.
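
A quick way to confirm this on the web server is to compare what the local resolver and DNS each return (a hedged example using the hostname from the question; the commands assume a Linux host):

# Show which sources are consulted for name resolution, and in what order
grep '^hosts:' /etc/nsswitch.conf

# Look for a static (possibly stale) entry that overrides DNS
grep ftp.acme.local /etc/hosts

# Compare the local resolver result (hosts file first) with a direct DNS query
getent hosts ftp.acme.local
nslookup ftp.acme.local

If getent and nslookup return different addresses, the hosts file entry is overriding DNS and should be corrected or removed.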

Why the other options are less likely:

A. A misconfigured firewall:
A firewall would block the connection (e.g., block port 21/FTP), but it would not cause the ping command to resolve the hostname to a different IP address on different client machines. The IP resolution happens before the firewall enforces the connection.

B. A misconfigured hosts.deny file / D. A misconfigured hosts.allow file:
These files are part of TCP Wrappers, a host-based access control system often used on Linux/Unix systems to control access to specific services (like FTP or SSH). They govern whether a connection is allowed or denied based on the client's IP address. They have absolutely no role in the process of hostname-to-IP-address resolution (which is done by the hosts file or DNS). The technician is troubleshooting a name resolution issue, not an access control issue.

Reference
This question aligns with Domain 4.0: Server Maintenance and Troubleshooting, specifically objective 4.4: Troubleshoot network connectivity issues. Understanding the name resolution hierarchy (hosts file → DNS) is a fundamental part of diagnosing network connectivity problems like this.

Network connectivity to a server was lost when it was pulled from the rack during maintenance. Which of the following should the server administrator use to prevent this situation in the future?

A. Cable management

B. Rail kits

C. A wireless connection

D. A power distribution unit

A.   Cable management

Explanation:

The root cause of the problem described is that network cables were accidentally pulled out or became disconnected when the server was physically moved ("pulled from the rack"). This indicates a failure in managing the physical network cables.

Direct Solution:
Proper cable management involves using techniques and tools (such as cable trays, velcro straps, and service loops) to secure and organize cables. This ensures that there is enough slack and that cables are routed in a way that prevents them from being snagged, strained, or disconnected when a server is slid in or out of the rack on its rails.

Why the other options are incorrect:

B. Rail kits:
Rail kits are essential for easily sliding a server in and out of a rack for maintenance. However, the scenario states the problem occurred during maintenance, implying the server was already properly mounted on rails. The rails allowed it to be pulled out, but the cable management failure caused the disconnection. Rails are a prerequisite for safe maintenance but do not solve the cable problem.

C. A wireless connection:
Replacing physical network cables with a wireless connection is not a standard, reliable, or secure solution for a rack-mounted server. Server connections typically require the high bandwidth, low latency, and stability of a wired connection. Wireless introduces unnecessary latency, potential interference, and security vulnerabilities.

D. A power distribution unit (PDU):
A PDU is used to distribute power to multiple devices in a rack. It has nothing to do with managing network connectivity or preventing network cables from being disconnected.

Reference
This question aligns with Domain 1.0: Server Hardware Installation and Management.
1.1: Given a scenario, install physical hardware.
1.5: Explain proper server maintenance techniques.

A key part of both installing and maintaining servers is implementing proper cable management. This is a fundamental best practice in a data center to ensure service availability, proper airflow for cooling, and ease of maintenance. The scenario described is a classic example of why cable management is so critical.

A technician is attempting to reboot a remote physical Linux server. However, attempts to issue a shutdown -r now command result in the loss of the SSH connection. The server still responds to pings. Which of the following should the technician use to command a remote shutdown?

A. virtual serial console

B. A KVM

C. An iDRAC

D. A crash cart

C.   An iDRAC

Explanation:

When a technician needs to remotely reboot or shut down a physical Linux server but loses SSH access after issuing the shutdown command, it indicates that the operating system is either unresponsive or has terminated network services before the system powers off. In this case, because the server still responds to pings, the hardware is functioning but remote OS-level access is unavailable.

The best solution is to use the Integrated Dell Remote Access Controller (iDRAC) or a similar out-of-band management interface (such as HP iLO or Lenovo XClarity). These tools provide hardware-level remote management, allowing administrators to control the power state of the server, access the system console, and perform reboots or shutdowns even when the operating system is not running or responsive.

iDRAC operates independently of the operating system and network interfaces used for SSH, enabling administrators to:

Power off, power on, or reboot the server remotely.
View the system console and monitor boot processes.
Access BIOS/UEFI settings and perform diagnostics.
This makes iDRAC the ideal tool for remote management and bare-metal control of servers.
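
Because iDRAC also exposes the standard IPMI interface, a remote shutdown can be commanded out of band along these lines (a hedged sketch assuming IPMI over LAN is enabled and that 192.0.2.50 is the iDRAC address):

# Ask the management controller to shut the host down gracefully
ipmitool -I lanplus -H 192.0.2.50 -U root -P 'password' chassis power soft

# Force a power cycle if the OS no longer honors the soft request
ipmitool -I lanplus -H 192.0.2.50 -U root -P 'password' chassis power cycle

# Confirm the resulting power state
ipmitool -I lanplus -H 192.0.2.50 -U root -P 'password' chassis power status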

Why the other options are incorrect:

A. Virtual serial console:
This provides console access via software but typically depends on OS-level functionality or hypervisor access, which may not be available during shutdown.

B. A KVM:
A Keyboard-Video-Mouse (KVM) switch allows console access but usually requires physical presence or an IP-KVM setup. Since the question specifies remote access, iDRAC is more appropriate.

D. A crash cart:
A crash cart is a physical setup (monitor, keyboard, mouse) used onsite for direct console access; it cannot be used remotely.

References:
CompTIA Server+ (SK0-005) Exam Objectives, Domain 1.4 – “Summarize server hardware installation and maintenance procedures.”
CompTIA Server+ Study Guide (Exam SK0-005) – Chapter on Out-of-Band Management Tools (iDRAC, iLO, IPMI).
Dell EMC iDRAC9 User Guide – Describes remote power control, console access, and system management independent of the host OS.

Which of the following encryption methodologies would MOST likely be used to ensure encrypted data cannot be retrieved if a device is stolen?

A. End-to-end encryption

B. Encryption in transit

C. Encryption at rest

D. Public key encryption

C.   Encryption at rest

Explanation

The objective is to prevent the retrieval of data if a device is stolen. This means protecting the data when it is stored on the device's hard drive or storage media, regardless of whether the device is powered on or off.

C. Encryption at rest:
This is the specific security methodology designed to protect data stored on physical storage media (hard drives, SSDs, tapes, etc.). The entire volume or specific files are encrypted using an encryption key. If the physical device is stolen, the unauthorized party cannot access or read the raw data on the drive without the key, effectively rendering the data useless. This directly addresses the scenario described in the question.
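
On a Linux server, for instance, encryption at rest is commonly implemented with LUKS full-volume encryption; a minimal sketch (assuming /dev/sdb1 is a data volume that can safely be reformatted) looks like this:

# Initialize LUKS encryption on the volume (this destroys any existing data)
cryptsetup luksFormat /dev/sdb1

# Unlock the volume, create a filesystem on the mapped device, and mount it
cryptsetup open /dev/sdb1 securedata
mkfs.ext4 /dev/mapper/securedata
mkdir -p /mnt/secure
mount /dev/mapper/securedata /mnt/secure

If the drive is later stolen, the raw contents of /dev/sdb1 are unreadable without the passphrase or key.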

Why the other options are incorrect:

A. End-to-end encryption (E2EE):
E2EE is a communication concept that secures data from the moment it leaves the sender until it reaches the final recipient. While it is highly secure, its primary goal is to prevent interception during transmission (in transit), not to protect the data after it has been stored on a device and that device is physically stolen.

B. Encryption in transit (or in motion):
This is the process of protecting data while it is being sent over a network (e.g., using TLS/SSL, VPNs, or IPsec). If a device is stolen, the data is no longer in transit; it is at rest. This measure does not protect the stored data on the local drive.

D. Public key encryption (or Asymmetric encryption):
This is an algorithm or mechanism (using a pair of mathematically linked keys: one public, one private) used to achieve encryption, but it is not the methodology or state of the data being protected. Both Encryption in transit (like TLS) and Encryption at rest (like BitLocker or LUKS) use various forms of public key or symmetric key encryption algorithms. The methodology being asked for relates to the state of the data.

Reference
This question relates to Domain 2.0: Server Security, specifically objective 2.1: Summarize server hardening techniques. This includes differentiating between the various states of data protection, such as data at rest (storage encryption) and data in transit (network encryption).

A server administrator is installing an OS on a new server. Company policy states that no one is to log in directly to the server. Which of the following installation methods is BEST suited to meet the company policy?

A. GUI

B. Core

C. Virtualized

D. Clone

B.   Core

Explanation

The objective is to install an Operating System (OS) in a manner that enforces the policy: no one is to log in directly to the server.

Core Installation:
This option (such as Windows Server Core or a minimal/headless installation in Linux) installs only the essential OS components, services, and the command-line interface. Crucially, it excludes the Graphical User Interface (GUI) shell and all related desktop applications.

Security and Enforcement:
Without a local desktop environment, the administrator is forced to perform configuration, management, and maintenance remotely using secure, auditable, and automated tools like PowerShell Remoting, Secure Shell (SSH), or Remote Server Administration Tools (RSAT). This setup effectively prevents unauthorized "direct" or local console logins and minimizes the system's attack surface.
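
For example, routine administration of a headless or minimal installation can be carried out entirely over SSH (a hedged illustration; the hostname core01.acme.local and the file paths are hypothetical):

# Run an ad hoc command without any local or desktop session
ssh admin@core01.acme.local 'systemctl status sshd'

# Push a configuration file to the server over the same remote channel
scp ./httpd.conf admin@core01.acme.local:/etc/httpd/conf/

On Windows Server Core, the equivalent tasks are typically handled with PowerShell Remoting or RSAT, as noted above.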

Why the other options are incorrect:

A. GUI (Graphical User Interface):
This provides a full desktop experience, making it easy and convenient for anyone with physical access or a console connection to log in directly, violating the policy.

C. Virtualized:
This refers to the server's environment (it's a Virtual Machine), not the specific installation type of the OS running inside it. A virtual server can still have a GUI installed.

D. Clone:
This is a deployment method (creating a server from a standardized image), not a type of OS installation. A cloned image could still contain a full GUI.

Reference
This question relates to Domain 1.0: Server Administration, specifically objective 1.2: Given a scenario, install server operating systems. This objective requires the administrator to understand different installation types (e.g., full vs. minimal) and select the one that best aligns with security and management policies, such as the use of Server Core for minimized interaction and greater security.

A large number of connections to port 80 is discovered while reviewing the log files on a server. The server is not functioning as a web server. Which of the following represent the BEST immediate actions to prevent unauthorized server access?
(Choose two.)

A. Audit all group privileges and permissions

B. Run a checksum tool against all the files on the server

C. Stop all unneeded services and block the ports on the firewall

D. Initialize a port scan on the server to identify open ports

E. Enable port forwarding on port 80

F. Install a NIDS on the server to prevent network intrusions

C.   Stop all unneeded services and block the ports on the firewall
D.   Initialize a port scan on the server to identify open ports

Explanation:

The scenario describes a potential security incident. A non-web server should not have a large number of connections to port 80 (HTTP). This indicates either an unauthorized service is running or a malware infection is beaconing out. The key words are "BEST immediate actions."

C. Stop all unneeded services and block the ports on the firewall:
This is a direct and immediate containment action.
Stopping the service on the server itself stops the unauthorized activity at the source.
Blocking the port on the firewall provides a network-level control to prevent any external connections from reaching port 80 on this server, effectively quarantining it. This is a defense-in-depth measure.

D. Initialize a port scan on the server to identify open ports:
This is an immediate reconnaissance action to understand the full scope of the problem. If an attacker or malware has opened port 80, they may have opened other ports for backdoor access. A port scan will reveal all other unauthorized or unexpected listening services that need to be addressed.
Together, these two actions form a rapid "contain and assess" strategy to prevent further unauthorized access.
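
On a Linux server, the contain-and-assess steps might look like the following (a hedged sketch; the service name unauthorized-httpd is hypothetical, and iptables is assumed as the host firewall):

# Identify and stop whatever is listening on port 80
ss -tlnp 'sport = :80'
systemctl stop unauthorized-httpd
systemctl disable unauthorized-httpd

# Block inbound connections to port 80 at the host firewall
iptables -A INPUT -p tcp --dport 80 -j DROP

# Scan the server for any other unexpected open ports
nmap -sT -p- localhost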

Why the other options are incorrect:

A. Audit all group privileges and permissions:
This is an important security task, but it is a long-term, forensic, or remediation step—not an immediate action to stop an active incident. The immediate threat is the open network port, not necessarily privilege escalation (yet).

B. Run a checksum tool against all the files on the server:
This is used for integrity checking and detecting file tampering. While valuable for a forensic investigation after the incident is contained, it is a time-consuming process that does not immediately stop the unauthorized network access.

E. Enable port forwarding on port 80:
This would make the situation drastically worse. Port forwarding would redirect more traffic to the compromised service, increasing the attack surface and potentially granting the attacker greater access.

F. Install a NIDS on the server to prevent network intrusions:
A Network Intrusion Detection System (NIDS) is typically a network-level appliance, not something installed on a single server. A Host-based IDS (HIDS) would be the correct term for a server-based agent. More importantly, installing and configuring an IDS is a proactive, long-term security measure, not an immediate response to an active incident. The unauthorized access is already happening; you need to stop it now, not just detect it.

Reference
This question falls under Domain 5.0: Security.
5.3: Given a scenario, apply server hardening methods.
5.6: Given a scenario, apply the appropriate server security controls.

The core principles tested here are part of incident response and hardening. The immediate steps are to contain the breach (stop service/block port) and assess the damage (scan for other openings), which aligns with standard security best practices.

An administrator is configuring a server to communicate with a new storage array. To do so, the administrator enters the WWPN of the new array in the server’s storage configuration. Which of the following technologies is the new connection using?

A. iSCSI

B. eSATA

C. NFS

D. FCoE

D.   FCoE

Explanation

The key identifier in this question is the WWPN (World Wide Port Name).
WWPN is a unique 64-bit identifier assigned to each port in a Fibre Channel network. It is the fundamental addressing mechanism used for communication in Fibre Channel Storage Area Networks (SANs).
FCoE (Fibre Channel over Ethernet) is a technology that encapsulates Fibre Channel frames inside Ethernet frames. This allows Fibre Channel communications to run over high-speed Ethernet networks (specifically, Converged Enhanced Ethernet - CEE).
Crucially, FCoE retains the Fibre Channel addressing scheme. This means that even though the physical cable might be a standard Ethernet cable, the configuration still requires the use of WWPNs to identify the initiator (server) and target (storage array).
Because the administrator is configuring the server using a WWPN, the technology must be based on the Fibre Channel protocol, and FCoE is the only Fibre Channel-based option listed.
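
On a Linux host with a Fibre Channel or FCoE adapter, the WWPN that would be entered on the storage side can be read from sysfs (a hedged example; the host numbers and the value shown are illustrative):

# List the World Wide Port Names of the local FC/FCoE ports
cat /sys/class/fc_host/host*/port_name
# 0x10000090fa8b1234   (example output: one 64-bit WWPN per port)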

Why the other options are incorrect:

A. iSCSI:
iSCSI uses standard Ethernet networks and TCP/IP for storage traffic. Instead of WWPNs, it uses IQNs (iSCSI Qualified Names) or EUIs (Extended Unique Identifiers) for addressing. You would not enter a WWPN to configure an iSCSI connection.

B. eSATA:
eSATA is a simple external connector for SATA drives, essentially an extension of the internal SATA bus. It does not use any form of network addressing like a WWPN. It is a direct-attach technology.

C. NFS:
NFS (Network File System) is a file-sharing protocol that runs over an IP network (like iSCSI). It uses standard IP addresses and mount points for configuration, not WWPNs. It operates at the file level, not the block level where WWPN addressing is used.

Reference
This question aligns with Domain 2.0: Networking.
2.1: Given a scenario, configure and deploy common server topologies.
This includes understanding different storage network topologies and their configuration requirements, specifically differentiating between Fibre Channel/FCoE (which uses WWPN) and iSCSI (which uses IQN).

In summary, the use of a WWPN is the definitive indicator of a Fibre Channel-based technology, making FCoE the only possible correct answer.

Users cannot access a new server by name, but the server does respond to a ping request using its IP address. All the user workstations receive their IP information from a DHCP server. Which of the following would be the best step to perform NEXT?

A. Run the tracert command from a workstation.

B. Examine the DNS to see if the new server record exists.

C. Correct the missing DHCP scope.

D. Update the workstation hosts file.

B.   Examine the DNS to see if the new server record exists.

Explanation:

If users can successfully ping the server by its IP address but cannot access it by name, this indicates that network connectivity is working correctly, but name resolution is failing. Since the workstations receive their IP configuration from DHCP, they most likely rely on DNS for hostname resolution.
The most logical next step is to check the DNS server to verify whether a record (A or AAAA record) exists for the new server. If the DNS record is missing or incorrect, clients will not be able to resolve the hostname to an IP address. Once the correct record is added or fixed, users should be able to access the server by name without issues.
This step directly addresses the root cause of the problem — a missing or misconfigured DNS entry — which is the most common reason for this specific symptom.
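
To verify the record, the administrator could query DNS directly (a hedged example; newserver.acme.local and the DNS server address 10.0.0.53 are placeholders):

# Ask the resolver and then the internal DNS server for the host's A record
nslookup newserver.acme.local
dig @10.0.0.53 newserver.acme.local A +short

# No answer means the A record is missing and needs to be created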

Why the other options are incorrect:

A. Run the tracert command from a workstation:
Traceroute checks the routing path to the IP address, which is unnecessary since the server already responds to ping, confirming network connectivity.

C. Correct the missing DHCP scope:
DHCP scope problems affect IP assignment. Since users can already ping the server, their IP configuration is functional, and DHCP is not the issue.

D. Update the workstation hosts file:
While adding the entry to the hosts file might temporarily fix the problem, it is not a scalable or proper network solution. DNS should handle name resolution centrally.

References:
CompTIA Server+ (SK0-005) Exam Objectives, Domain 2.2 – “Given a scenario, configure network services and networking features.”
CompTIA Server+ Study Guide (Exam SK0-005) – Chapter on DNS, DHCP, and Name Resolution Troubleshooting.
Microsoft DNS Documentation – Verifies that missing or outdated DNS records are a primary cause of hostname resolution failures.

A server in a remote datacenter is no longer responsive. Which of the following is the BEST solution to investigate this failure?

A. Remote desktop

B. Access via a crash cart

C. Out-of-band management

D. A Secure Shell connection

C.   Out-of-band management

Explanation:

The scenario describes a server that is "no longer responsive." This typically means the primary operating system has crashed, locked up, or is otherwise inaccessible through standard in-band management tools like RDP or SSH. The server is in a remote datacenter, making physical access difficult or slow.
Out-of-band (OOB) management refers to using a dedicated, separate channel to manage the server's hardware directly, independent of the main operating system's state. This is typically achieved through a dedicated network interface on the server's motherboard, such as an iDRAC (Dell), iLO (HPE), or IPMI (vendor-agnostic).

Purpose:
OOB management allows an administrator to perform actions as if they were physically at the server, including:
Viewing the server's console output.
Accessing the BIOS/UEFI settings.
Power cycling the server.
Mounting a remote virtual media (ISO) to reinstall the OS.
This is the BEST solution because it provides immediate, remote access to investigate the failure without requiring a technician to travel to the datacenter.
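
For example, with an IPMI-capable controller the administrator could check the hardware state, attach to the console, and review the hardware log remotely (a hedged sketch assuming Serial over LAN is configured and 192.0.2.60 is the management controller's address):

# Verify the chassis power state through the management controller
ipmitool -I lanplus -H 192.0.2.60 -U admin -P 'password' chassis power status

# Open the Serial over LAN console to see what is currently on the screen
ipmitool -I lanplus -H 192.0.2.60 -U admin -P 'password' sol activate

# Review the system event log for hardware faults
ipmitool -I lanplus -H 192.0.2.60 -U admin -P 'password' sel list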

Why the other options are incorrect:

A. Remote desktop & D. A Secure Shell connection:
These are both in-band management tools. They require the server's primary operating system to be running and the network stack to be functional. Since the server is unresponsive, these connections will fail and are useless for initial investigation.

B. Access via a crash cart:
A crash cart (a mobile station with a keyboard, monitor, and mouse) is the traditional method for direct physical access. While this would work to investigate the failure, it requires a technician to be physically present at the remote datacenter. This is often time-consuming, expensive, and defeats the purpose of having remote management capabilities. It is a last resort, not the best immediate solution.

Reference
This question falls under Domain 3.0: Server Maintenance.
3.1: Given a scenario, perform server hardware maintenance and management.
3.2: Explain the purpose of out-of-band management in a server environment.

The key distinction tested here is between in-band (through the OS) and out-of-band (independent of the OS) management. For an unresponsive server, OOB management is the clear and definitive best practice for remote investigation.
