CompTIA SK0-005 Practice Test
Prepare smarter and boost your chances of success with our CompTIA SK0-005 Practice Test. This test helps you assess your knowledge, pinpoint strengths, and target areas for improvement. Surveys and user data from multiple platforms show that individuals who use the SK0-005 practice exam are 40–50% more likely to pass on their first attempt.
Start practicing today and take the fast track to becoming CompTIA SK0-005 certified.
Topic 1: Exam Set A
A technician needs to deploy an operating system that would optimize server resources.
Which of the following server installation methods would BEST meet this requirement?
A. Full
B. Bare metal
C. Core
D. GUI
Which of the following can be used to map a network drive to a user profile?
A. System service
B. Network service
C. Login script
D. Kickstart script
Explanation:
In enterprise Windows environments, login scripts are the established and supported method for automatically mapping network drives to a user’s profile during logon. These scripts execute in the user’s security context immediately after authentication, allowing administrators to dynamically assign drive letters based on user identity, group membership, departmental OU, or other Active Directory attributes.
Login scripts are typically written in batch (.bat), PowerShell (.ps1), or legacy VBScript (.vbs) and can be deployed via:
Group Policy Objects (GPO):
User Configuration → Policies → Windows Settings → Scripts (Logon/Logoff) → Logon
User Account Properties in Active Directory Users and Computers:
Profile tab → Logon script field
Practical Example (Batch Login Script):
@echo off
:: Map departmental shared drive to S: for all users
net use S: \\CorpFS01\Sales /persistent:yes
:: Map personal home drive using the username variable
net use H: \\CorpFS02\Users\%username% /persistent:yes
:: Conditional mapping using IFMEMBER (legacy Resource Kit tool)
ifmember "Finance Users"
if errorlevel 1 net use F: \\FinanceFS\SecureData /persistent:yes
This ensures consistent, repeatable drive mappings across all domain-joined workstations without manual intervention.
Why the other options are incorrect:
A. System service
System services run under privileged accounts like Local System, Local Service, or Network Service and start at boot time—before any user logs in. They operate outside the interactive user session and cannot access or modify the user’s desktop environment, including drive mappings. Attempting to map a drive via a service would make it available only to the service account, not the logged-on user.
B. Network service
This is a predefined low-privilege service account in Windows, not a tool or mechanism for configuration. It is used by services (e.g., IIS, SQL Server) to access network resources securely. It has no capability to execute user logon actions or map drives to user profiles.
D. Kickstart script
A Kickstart script (.ks file) is part of the Anaconda installer used in Red Hat Enterprise Linux (RHEL), CentOS, and Fedora for automated OS deployment. It defines partitioning, package selection, and post-install tasks during system provisioning—not user logon processes. It is completely unrelated to Windows networking or drive mapping.
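For illustration, a minimal Kickstart fragment looks like the following sketch (directive names reflect recent RHEL/Fedora releases; exact options vary by version):
# ks.cfg - automated install sketch (illustrative only)
lang en_US.UTF-8
keyboard us
timezone America/New_York
rootpw --lock
autopart --type=lvm
%packages
@core
%end
%post
echo "Provisioned via Kickstart" > /etc/motd
%end
Nothing in such a file has anything to do with mapping drives for an interactive user session.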
Reference:
CompTIA Server+ SK0-005 Objectives – 2.3 “Given a scenario, configure user access to server resources” (login scripts).
Microsoft Docs – “Assign logon scripts to users or groups” (Group Policy > User Configuration > Policies > Windows Settings > Scripts > Logon).
A data center employee shows a driver's license to enter the facility. Once the employee enters, the door immediately closes and locks, triggering a scale that then weighs the employee before granting access to another locked door. This is an example of:
A. a mantrap.
B. a bollard.
C. geofencing.
D. RFID.
Explanation
A mantrap, also known as an access control vestibule, is a physical security mechanism designed to control the flow of people into a secure area one at a time. It typically consists of two interlocking doors. The second door will not open until the first door has closed and locked, and the individual's credentials have been verified.
Let's break down the scenario using the description of a mantrap:
First Authentication & Entry:
The employee shows a driver's license (a form of identification) to enter the first door.
Containment: The door immediately closes and locks behind them. The employee is now in a small, secure holding area.
Secondary Verification:
A scale is triggered to weigh the employee. This is a form of biometric authentication (something you are), ensuring that the person who entered is the same person who is supposed to be there (preventing "tailgating," where an unauthorized person follows an authorized one).
Final Access:
Only after both the initial credential check and the secondary biometric verification are successful is the second locked door opened, granting access to the data center.
This multi-step process within a confined space is the definitive characteristic of a mantrap.
Why the Other Options Are Incorrect
B. A Bollard:
A bollard is a short, sturdy vertical post designed to create a physical barrier and prevent vehicle access to a protected area (e.g., in front of a building entrance). It does not control individual pedestrian access in the manner described.
C. Geofencing:
Geofencing is a virtual perimeter for a real-world geographic area. It uses technologies like GPS or RFID to trigger an action (like a mobile alert) when a device enters or leaves the area. It is a logical control, not the physical, door-based control system described in the question.
D. RFID:
RFID (Radio-Frequency Identification) is a technology used for identification, such as in access cards or badges. While an RFID reader could have been used instead of the driver's license check at the first door, RFID itself is just the technology, not the entire security system. The scenario describes the physical process and structure (the two-door vestibule with intermediate verification), which is the definition of a mantrap.
Reference
This question falls under the Security domain of the SK0-005 exam, specifically the objective on applying physical security methods and concepts. Understanding different physical security controls like mantraps, bollards, and badge readers is a critical component of the Server+ certification.
A systems administrator has noticed performance degradation on a company file server, and one of the disks on it has a solid amber light. The administrator logs on to the disk utility and sees the array is rebuilding. Which of the following should the administrator do NEXT once the rebuild is finished?
A. Restore the server from a snapshot.
B. Restore the server from backup.
C. Swap the drive and initialize the disk.
D. Swap the drive and initialize the array.
Explanation:
When a solid amber light appears on a hard drive in a server, it usually signals that the drive has failed or is in a predictive failure state. In this case, the system administrator noticed that the RAID array is rebuilding, which indicates that the RAID controller has detected the failed drive and is actively rebuilding data using redundant information from parity or mirrored disks. This process helps restore data integrity by redistributing data to a hot spare or the remaining healthy drives in the array.
Once the rebuild process completes successfully, it means the array is back to a healthy or optimal state, but the originally failed drive is still bad and must be physically replaced.
The next logical step is to:
Physically remove (swap) the failed drive that is showing a solid amber light.
Install a new replacement drive of the same capacity and specifications.
Initialize the new disk within the RAID management utility so it can be used either as a hot spare or as a replacement member of the RAID array to restore full redundancy for future protection.
You should not restore from a backup (Option B) or a snapshot (Option A), since the RAID rebuild already recovered the data and no data loss has occurred. Restoring would waste time and could potentially overwrite valid data. Similarly, initializing the entire array (Option D) would erase all the data, which is unnecessary and counterproductive.
Reference:
CompTIA Server+ SK0-005 Exam Objectives:
2.2 – Given a scenario, install, configure, and maintain server components.
2.3 – Given a scenario, perform server storage configuration and maintenance tasks.
Vendor Documentation Examples:
Dell PowerEdge RAID Controller (PERC) User Guide – After a rebuild completes, replace the failed drive and reconfigure it as a global or dedicated hot spare.
HPE Smart Array Controller Manual – Once an array rebuild finishes, replace any failed physical drive to maintain redundancy.
Summary:
After the RAID rebuild is complete, the administrator must replace the failed drive and initialize the new disk to ensure ongoing redundancy and fault tolerance.
Hence, the correct answer is C. Swap the drive and initialize the disk.
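Hardware RAID utilities (such as Dell PERC or HPE Smart Array) perform these steps through their own management interfaces, but the same workflow can be illustrated with Linux software RAID. A minimal mdadm sketch, assuming an array at /dev/md0 with a failed member /dev/sdb1 (hypothetical device names):
# Confirm the rebuild has completed and check array health
mdadm --detail /dev/md0
cat /proc/mdstat
# Remove the failed member from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# After physically swapping the drive, add the new disk to the array
mdadm --manage /dev/md0 --add /dev/sdb1
# Watch the array return to a clean state
watch cat /proc/mdstat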
An administrator is configuring a server that will host a high-performance financial application. Which of the following disk types will serve this purpose?
A. SAS SSD
B. SATA SSD
C. SAS drive with 10000rpm
D. SATA drive with 15000rpm
Explanation
The requirement is for a server that will host a high-performance financial application. This type of application demands the absolute best in terms of I/O throughput (data transfer rate) and low latency (quick response time) for transactional operations.
SSD (Solid State Drive) vs. HDD (Hard Disk Drive):
SSDs use flash memory chips and have no moving parts. This gives them vastly superior I/O Operations Per Second (IOPS) and lower latency compared to traditional HDDs (options C and D), which use spinning platters. For high-performance applications, SSDs are the clear choice. This immediately rules out options C and D.
SAS (Serial Attached SCSI) vs. SATA (Serial ATA):
SATA is a common interface designed primarily for consumer desktop and lower-end server applications.
SAS is a more robust, high-performance interface specifically designed for enterprise-level server environments that require:
Higher speeds and greater bandwidth.
Full-duplex signaling (data can be sent and received simultaneously).
Better scalability (supports more devices).
Higher reliability and enterprise features (like dual-porting for redundancy and failover).
Conclusion:
SAS SSD (Option A) combines the high speed and low latency of SSD technology with the reliability, performance, and enterprise-grade features of the SAS interface. This makes it the ideal choice for a high-performance financial application where every millisecond and every transaction counts.
SATA SSD (Option B) is a good option, but it lacks the enterprise-level performance and features of SAS, making it a secondary choice for a truly high-performance, mission-critical server.
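On a Linux server, the installed media type and interface can be confirmed from the command line; in the output below, ROTA 0 indicates a non-rotational (solid state) device and TRAN shows the transport, such as sas or sata:
# List disks with rotational flag and transport/interface
lsblk -d -o NAME,SIZE,ROTA,TRAN,MODEL
# Optional: detailed drive identity (assumes smartmontools is installed)
smartctl -i /dev/sda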
Reference
This question relates to the Storage domain of the SK0-005 exam, specifically focusing on the different media types (SSD vs. HDD) and interfaces (SAS vs. SATA) used in enterprise server environments and their impact on performance.
A company’s IDS has identified outbound traffic from one of the web servers coming over port 389 to an outside address. This server only hosts websites. The company’s SOC administrator has asked a technician to harden this server. Which of the following would be the BEST way to complete this request?
A. Disable port 389 on the server
B. Move traffic from port 389 to port 443
C. Move traffic from port 389 to port 637
D. Enable port 389 for web traffic
Explanation:
Port 389 is the default port for LDAP (Lightweight Directory Access Protocol) in cleartext. A web server that only hosts websites has no legitimate business need to initiate outbound LDAP connections to external addresses. The IDS alert indicates potential compromise, such as malware, a backdoor, or an attacker attempting directory enumeration, data exfiltration, or lateral movement over LDAP.
Hardening the server means eliminating unnecessary services and open ports. Since the server’s sole function is web hosting (typically HTTP/HTTPS on ports 80/443), disabling port 389 entirely is the most secure and appropriate action.
Steps to implement:
Check for running services or active connections using netstat -ano | find ":389" (Windows) or ss -tuln | grep ':389' (Linux).
Identify and stop any process binding to port 389 (likely malicious or misconfigured).
Disable at the firewall (Windows Firewall, iptables, or network ACL):
# Windows Firewall example (blocks outbound LDAP to any destination)
netsh advfirewall firewall add rule name="Block Outbound LDAP" dir=out action=block protocol=TCP remoteport=389
Disable LDAP client/service if present (not needed on a web-only server).
Scan for malware using endpoint protection or tools like Microsoft Defender or CrowdStrike.
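If the web server runs Linux instead, a comparable check-and-block sequence might look like this sketch (iptables shown; adjust for firewalld or nftables as appropriate):
# Identify any process with connections on port 389
ss -tnp | grep ':389'
# Block outbound LDAP at the host firewall
iptables -A OUTPUT -p tcp --dport 389 -j DROP
iptables -A OUTPUT -p udp --dport 389 -j DROP
# Then stop and disable whatever unnecessary service or process was found above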
Why the other options are incorrect:
B. Move traffic from port 389 to port 443
Port 443 is for HTTPS. Redirecting LDAP traffic to 443 does not secure it—it just changes the port (a technique called port masquerading often used by attackers). LDAP over 443 is non-standard, breaks functionality, and does nothing to eliminate the risk. This is security through obscurity, not hardening.
C. Move traffic from port 389 to port 637
Port 637 is not a standard LDAP-related port; secure LDAP (LDAPS) uses port 636. Redirecting LDAP traffic to an arbitrary, non-standard port makes no technical sense, breaks the protocol, and does nothing to remove the underlying risk. This is not a valid hardening method.
D. Enable port 389 for web traffic
Port 389 is not used for web traffic (which uses 80/443). Enabling it “for web” is meaningless and increases attack surface. LDAP and HTTP are entirely different protocols.
Reference:
CompTIA Server+ SK0-005 Exam Objectives – Domain 3.0 Security, Objective 3.3:
“Given a scenario, apply server hardening techniques including… disabling unnecessary services and ports…”
NIST SP 800-123 – Guide to General Server Security:
“Close all ports not required for the server’s function.”
IANA Service Name and Transport Protocol Port Number Registry:
Port 389/TCP = ldap
Port 443/TCP = https
Port 636/TCP = ldaps (port 637 is not an LDAP-related assignment)
Microsoft Learn – Best Practices for Securing Active Directory:
Recommends restricting LDAP to authorized internal systems only.
A technician is working on a Linux server. The customer has reported that files in the home directory are missing. The /etc/fstab file has the following entry:
nfsserver:/home /home nfs defaults 0 0
However, a df -h /home command returns the following information:
/dev/sda2 10G 1G 9G 10% /home
Which of the following should the technician attempt FIRST to resolve the issue?
A. mkdir /home
B. umount nfsserver:/home
C. rmdir nfsserver:/home/dev/sda2
D. mount /home
Explanation:
Let’s break down what’s happening step-by-step in this Linux storage scenario:
Scenario Overview:
The /etc/fstab entry:
nfsserver:/home /home nfs defaults 0 0
This line tells the system to mount the /home directory from the remote nfsserver using NFS.
However, when the technician runs:
df -h /home
It returns:
/dev/sda2 10G 1G 9G 10% /home
This output means the local partition (/dev/sda2) is mounted at /home, not the NFS share.
Therefore, users are seeing the local (empty) directory structure instead of their files on the NFS server.
Diagnosis:
Since /dev/sda2 is currently mounted at /home, it’s hiding the mount point for the NFS share.
The NFS share is not currently mounted, even though it’s listed in /etc/fstab.
Next Step – What to Do First:
The first step to restore the proper access is to mount the NFS share manually using:
mount /home
This command tells the system to look up the corresponding entry in /etc/fstab and mount the remote share (nfsserver:/home) onto /home.
Once this is done, /home will point to the NFS server again, and users’ files will reappear.
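A minimal command sequence for this check and fix, assuming the /etc/fstab entry shown above:
# Confirm what is currently mounted at /home (expect /dev/sda2 before the fix)
findmnt /home
# Mount the NFS share using the existing /etc/fstab entry
mount /home
# Verify the NFS share now provides /home
df -h /home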
Why the Other Options Are Wrong:
A. mkdir /home
The /home directory already exists; recreating it is unnecessary and would not fix the mount issue.
B. umount nfsserver:/home
The NFS share is not currently mounted, so unmounting it won’t accomplish anything.
C. rmdir nfsserver:/home/dev/sda2
/dev/sda2 is a block device, not a directory. Trying to remove it makes no sense.
Reference:
CompTIA Server+ SK0-005 Exam Objective:
2.4 – Given a scenario, perform server operating system installation, configuration, and maintenance tasks.
Linux Documentation:
man fstab — Configuration file for static filesystem mounts.
man mount — Command for mounting filesystems manually.
Red Hat Enterprise Linux Admin Guide:
NFS shares may need to be remounted if they fail or are overshadowed by local mounts.
Summary:
The issue is that the NFS share /home from nfsserver is not currently mounted. The technician should mount the /home directory to reestablish access to user files.
A server administrator has connected a new server to the network. During testing, the administrator discovers the server is not reachable via its server name but can be accessed by IP address. Which of the following steps should the server administrator take NEXT? (Select TWO).
A. Check the default gateway.
B. Check the route tables.
C. Check the hosts file.
D. Check the DNS server.
E. Run the ping command.
F. Run the tracert command
Explanation
The core problem is:
The server is reachable by IP address (e.g., 192.168.1.10).
The server is NOT reachable by server name (e.g., APP-SERVER01).
This scenario immediately indicates a name resolution issue. The network connectivity is working (proven by the successful IP access), but the mechanism that translates the human-friendly name into the machine-readable IP address is failing.
The two primary mechanisms for name resolution on a network are the local hosts file and the DNS (Domain Name System) server.
D. Check the DNS server:
The DNS server is the centralized, authoritative system that translates domain names and hostnames into IP addresses across the network. If the server's name-to-IP mapping (its "A" record) is missing, incorrect, or if the server's network configuration is pointing to a non-functional DNS server, name resolution will fail. Checking the DNS server's records and its reachability is the most critical enterprise-level step.
C. Check the hosts file:
The hosts file is a static, local file on the server (or client machine) that can manually map hostnames to IP addresses. While less common in large enterprise environments, it can override DNS and is often checked first on a new server to ensure no static, temporary, or incorrect entries were added during setup that are interfering with network resolution.
Why the other options are incorrect:
A. Check the default gateway:
This relates to communication outside of the local subnet. Since the server is reachable by IP address (which means its local IP stack and configuration are functional), the gateway is unlikely to be the cause of a name resolution failure.
B. Check the route tables:
This is related to IP routing paths, not name-to-IP translation. The successful IP access proves the basic routing is fine.
E. Run the ping command / F. Run the tracert command:
These are troubleshooting tools used after identifying the potential issue. You would use ping to test the reachability of the DNS server itself, or you might use the nslookup or dig commands specifically to test name resolution, but the steps to take to fix the issue involve checking and correcting the DNS server and hosts file data.
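On a Linux system, typical commands for confirming where name resolution breaks, using the hypothetical server name APP-SERVER01 from the example above:
# Query DNS directly for the server name
nslookup APP-SERVER01
dig APP-SERVER01 +short
# Check the local hosts file and resolver configuration
getent hosts APP-SERVER01
cat /etc/hosts
cat /etc/resolv.conf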
Which of the following policies would be BEST to deter a brute-force login attack?
A. Password complexity
B. Password reuse
C. Account age threshold
D. Account lockout threshold
Explanation:
A brute-force login attack involves an attacker repeatedly attempting username/password combinations until access is gained. The most direct and effective countermeasure is an account lockout policy that temporarily disables the account after a defined number of failed login attempts (e.g., lock after 5 failed attempts for 15 minutes).
This policy limits the attacker's ability to continue guessing, rendering brute-force attacks impractical within a reasonable time frame. Even with weak passwords, the lockout forces delays or requires targeting multiple accounts.
Example Configuration (via Local Security Policy or Group Policy):
Account lockout threshold: 5 invalid logon attempts
Account lockout duration: 15 minutes
Reset account lockout counter after: 15 minutes
This is a standard industry best practice and directly aligns with the goal of deterring brute-force attacks.
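On a Linux server, a comparable lockout policy can be enforced with pam_faillock; a minimal sketch of /etc/security/faillock.conf, assuming pam_faillock is enabled in the PAM stack (for example via authselect on RHEL-family systems):
# /etc/security/faillock.conf - lock after 5 failures within 15 minutes
deny = 5
fail_interval = 900
unlock_time = 900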
Why the other options are incorrect or less effective:
A. Password complexity
Password complexity (e.g., requiring uppercase, numbers, symbols) increases the search space and makes passwords harder to guess. However, it does not stop an attacker from continuing to try thousands of complex combinations. It slows brute-force but does not deter it as effectively as lockout.
B. Password reuse
This refers to policies that prevent reusing old passwords. It has no impact on brute-force attacks, which target current credentials. In fact, a password reuse policy is a preventive control against credential stuffing, not brute-force.
C. Account age threshold
This is not a standard security term. It might imply restricting logins based on account creation date (e.g., new accounts can't log in immediately), but this is not a recognized control in Windows, Linux, or CompTIA objectives. It does nothing to stop repeated login attempts.
Reference:
CompTIA Server+ SK0-005 Exam Objectives – Domain 3.0 Security, Objective 3.1:
“Compare and contrast security policies and procedures including… account lockout…”
NIST SP 800-63B (Digital Identity Guidelines) – Section 5.2.2:
“Verifiers SHOULD implement account lockout after a number of failed attempts to mitigate brute-force attacks.”
Microsoft Best Practices – Account Lockout Policy (recommended account lockout threshold and duration settings).
OWASP Authentication Cheat Sheet:
“Implement account lockout after 5–10 failed attempts.”
An administrator needs to disable root login over SSH. Which of the following files should be edited to complete this task?
A. /root/.ssh/sshd/config
B. /etc/ssh/sshd_config
C. /root/.ssh/ssh_config
D. /etc/sshs_shd_config
Explanation
The Secure Shell (SSH) service has two main configuration files:
one for the server and one for the client.
Server Configuration File (sshd_config):
This file controls the behavior of the SSH daemon (sshd), which is the service that accepts incoming SSH connections. Settings here dictate how the server will respond to connection requests, including security rules like which users can log in and which authentication methods are allowed.
Client Configuration File (ssh_config):
This file controls the behavior of the SSH client (ssh), which is the command used to initiate outgoing connections to other servers.
The task is to "disable root login over SSH." This is a rule that must be enforced by the SSH server itself. Therefore, the correct file to edit is the server configuration file.
The standard location for the SSH server configuration file on most Linux systems is /etc/ssh/sshd_config.
To disable root login, the administrator would open this file and look for the PermitRootLogin directive, ensuring it is set to no:
PermitRootLogin no
After making this change, the SSH service must be restarted for the new configuration to take effect (e.g., with systemctl restart sshd).
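A minimal sequence for making and validating the change (a sketch; paths assume a standard OpenSSH installation):
# Check the current setting
grep -n 'PermitRootLogin' /etc/ssh/sshd_config
# Set the directive to no (or edit the file manually)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
# Validate the configuration before restarting
sshd -t
# Apply the change
systemctl restart sshd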
Why the Other Options Are Incorrect
A. /root/.ssh/sshd/config:
This path is incorrect and non-standard. The /root/.ssh/ directory is for the root user's client-side configuration (like keys and a personal config file). The server configuration is never stored in a user's home directory.
C. /root/.ssh/ssh_config:
This is the root user's specific client configuration file. Changes here would only affect outgoing SSH connections made by the root user, not incoming connections to the server.
D. /etc/sshs_shd_config:
This filename is a misspelling and does not exist on a standard system. The correct filename is sshd_config.
Reference
This question falls under the Security domain of the SK0-005 exam, specifically server hardening and the administration of logical access controls.
Disabling root login over SSH is a fundamental server hardening practice. It forces administrators to log in with a standard user account and then escalate privileges (e.g., using sudo), which creates an audit trail and adds a layer of security.
Key Takeaway:
Always modify the /etc/ssh/sshd_config file to change the behavior of the SSH server.