Topic 1: Exam Set A
A systems administrator is setting up a new server that will be used as a DHCP server. The administrator installs the OS but is then unable to log on using Active Directory credentials. The administrator logs on using the local administrator account and verifies the server has the correct IP address, subnet mask, and default gateway. The administrator then logs on to another server and is able to ping the new server. Which of the following is causing the issue?
A. Port 443 is not open on the firewall
B. The server is experiencing a downstream failure
C. The local hosts file is blank
D. The server is not joined to the domain
Explanation
The server is intended to function as a DHCP server, but the immediate issue is:
Cannot log on using Active Directory credentials
Even though:
IP, subnet mask, and default gateway are correct.
The server is pingable from another system.
Local administrator login works.
This means network connectivity is fine, but Active Directory authentication fails.
Root Cause:
Active Directory logon requires:
The server to be joined to the domain.
Valid domain credentials.
Access to a domain controller (via Kerberos, LDAP, and SMB over ports 88, 389, 445, etc.).
Since local login works but domain login fails, and networking is verified, the most likely cause is that the server has not been joined to the Active Directory domain.
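As a quick sanity check, reachability of those authentication ports on a domain controller can be probed with netcat from any Linux or macOS admin host; the DC hostname below is a placeholder (on Windows, Test-NetConnection serves the same purpose):

    for port in 88 389 445; do
        nc -zv -w 3 dc01.example.com "$port"   # dc01.example.com is a placeholder DC name
    done

Note that even with all ports reachable, domain logons will still fail until the server is actually joined to the domain.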
Why the other options are incorrect:
A. Port 443 is not open on the firewall
Port 443 is for HTTPS. AD authentication uses LDAP (389), Kerberos (88), SMB (445), etc.
Also, ping works → basic connectivity is confirmed. Port 443 is irrelevant here.
B. The server is experiencing a downstream failure
Vague and unsupported. Ping succeeds, and local login works → no evidence of hardware or network failure.
C. The local hosts file is blank
The hosts file is for static name resolution.
AD logon uses DNS, not hosts.
Even if DNS were failing, ping by IP would still work (as it does here).
A blank hosts file is normal and expected.
D. The server is not joined to the domain (Correct)
Until a server is domain-joined, it cannot authenticate AD users.
Local accounts work because they are stored locally.
This is a prerequisite step before using domain credentials.
Reference
This question falls under CompTIA Server+ Domain 2.0: Server Administration, specifically covering the initial setup and configuration of servers, including integrating them into a directory services environment. The fundamental concept tested is that a computer must be a member of an Active Directory domain before it can authenticate users from that domain. This is a standard procedure after installing the OS on a server that is intended to be part of a Windows domain network.
A company deploys antivirus, anti-malware, and firewalls that can be assumed to be functioning properly. Which of the following is the MOST likely system vulnerability?
A. Insider threat
B. Worms
C. Ransomware
D. Open ports
E. Two-person integrity
Explanation
This question is about identifying a vulnerability that exists despite having strong technical security controls in place.
Antivirus/Anti-malware protect against malicious software.
Firewalls control and filter network traffic based on rules.
These are all technical defenses designed to stop external attacks and known malware. However, they are largely ineffective against a trusted insider who already has legitimate access to the system.
An insider threat is a person within the organization (e.g., an employee, contractor) who may misuse their authorized access to steal data, sabotage systems, or install malware that bypasses the initial defenses. Because they are acting from inside the network perimeter, many of the technical controls are either not designed to stop them or can be circumvented using their legitimate credentials.
Why the other options are incorrect:
B. Worms & C. Ransomware:
These are specific types of malware. The deployed antivirus and anti-malware software are the primary technical controls designed specifically to prevent, detect, and remove these threats. Since we are told to assume these controls are functioning properly, worms and ransomware are much less likely to be the vulnerability.
D. Open ports:
A firewall's primary job is to manage open, closed, and filtered ports. A properly functioning firewall would only have necessary ports open and would be monitoring them, mitigating this as a primary vulnerability.
E. Two-person integrity:
This is not a vulnerability; it is a security control. It is a procedural defense (a type of administrative control) where two people are required to complete a critical task, which is specifically used to mitigate the risk of an insider threat.
Reference
This question aligns with CompTIA Server+ Domain 3.0: Security and Disaster Recovery and tests the understanding of the different types of threats and the limitations of security controls. It highlights a key principle in security: technological controls alone are insufficient.
A holistic security posture requires a combination of:
Technical Controls (Firewalls, AV)
Administrative Controls (Policies, training, two-person integrity)
Physical Controls
The "insider threat" is a classic example of a risk that must be addressed through administrative controls (like strict access policies, user monitoring, and two-person integrity) and user training, as technical controls are a weak defense against it.
A server administrator is using remote access to update a server. The administrator notices numerous error messages when using YUM to update the applications on a server. Which of the following should the administrator check FIRST?
A. Network connectivity on the server
B. LVM status on the server
C. Disk space in the /var directory
D. YUM dependencies
Explanation
When using YUM (Yellowdog Updater, Modified) on a Linux system, it performs several key operations that are heavily dependent on disk space:
Downloads Packages:
YUM downloads RPM package files from repositories.
Stores Cache:
By default, YUM stores these downloaded packages and metadata in the /var/cache/yum directory.
Requires Temporary Space:
The installation process itself requires temporary space to extract and manage files during the update.
The /var directory is the standard location for variable data like logs, caches, and in this case, YUM's working files. If there is insufficient disk space in /var, YUM will be unable to download new packages or complete the installation process, resulting in numerous error messages.
Checking disk space is a quick, simple, and fundamental first step in the troubleshooting process for this kind of error.
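A minimal first-pass check, assuming a RHEL-family system with YUM's default cache location:

    df -h /var                 # free space on the filesystem holding /var
    du -sh /var/cache/yum      # how much space the YUM cache is consuming
    yum clean all              # safely reclaim cache space if /var is nearly full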
Why the other options are incorrect:
A. Network connectivity on the server:
While network connectivity is essential for reaching the repositories, the problem description states the administrator is using remote access to the server. If network connectivity were completely down, the remote session itself would likely be disconnected or unusable. Furthermore, YUM errors due to network issues are typically connection timeouts or "cannot reach repository" messages, not "numerous" varied errors during the update process itself.
B. LVM status on the server:
Logical Volume Manager (LVM) status is important for overall storage health, but it is a lower-level component. If there were an LVM failure, it would likely cause much broader system instability and filesystem errors, not just isolated YUM update failures. Checking this is not the first step when a single application (YUM) is failing.
D. YUM dependencies:
Dependency resolution is a core function of YUM. If there were dependency issues, YUM would typically provide clear messages about missing or conflicting packages. While this is a potential cause, it is less common than simply running out of disk space, which is a very frequent cause of update failures. You would check the specific error messages for dependencies after ruling out basic system resources like disk space.
Reference
This question falls under CompTIA Server+ Domain 2.0: Server Administration, which includes performing patch management and troubleshooting standard update procedures. A server administrator must know the common failure points for package managers like YUM. A standard best practice is to ensure adequate free space in /var and to regularly clean the YUM cache (yum clean all) before performing major updates.
An administrator is deploying a new secure web server. The only administration method that is permitted is to connect via RDP. Which of the following ports should be allowed? (Select TWO).
A. 53
B. 80
C. 389
D. 443
E. 45
F. 3389
G. 8080
Explanation
The question specifies two requirements for the server:
It must be a secure web server.
The only permitted administration method is RDP.
Therefore, the firewall must allow traffic for both the secure web service and the RDP administration access.
D. 443 (Secure Web Server):
Port 443 is the standard, well-known port for HTTPS (Hypertext Transfer Protocol Secure). This is the protocol used for a secure web server, typically using TLS/SSL encryption.
F. 3389 (RDP Administration):
Port 3389 is the standard, well-known port for the Remote Desktop Protocol (RDP), which is required to fulfill the administrative access requirement.
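Once the firewall rules are in place, both ports can be quickly verified with netcat from an administrative workstation; the hostname below is a placeholder:

    nc -zv -w 3 web01.example.com 443    # HTTPS reachable?
    nc -zv -w 3 web01.example.com 3389   # RDP reachable?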
Why the others are incorrect:
A. 53:
Used for DNS (Domain Name System). Essential for network function, but not required specifically for RDP or the secure web service itself.
B. 80:
Used for HTTP (non-secure web traffic). This is explicitly not needed for a secure web server, which uses 443.
C. 389:
Used for LDAP (Lightweight Directory Access Protocol). Used for directory services, but not required for RDP or the web server function itself.
E. 45:
Not a standard, well-known port for common server services.
G. 8080:
A common alternative/proxy port for web traffic, but 443 is the standard for a secure server.
Reference
This question relates to the networking and security topics of the CompTIA Server+ (SK0-005) exam, specifically knowledge of well-known ports and security best practices:
Domain 3.0: Security and Disaster Recovery (configuring firewalls/security groups).
Domain 4.0: Troubleshooting (diagnosing network connectivity based on ports).
Key Ports:
TCP 443 (HTTPS): Secure communication.
TCP 3389 (RDP): Remote Windows server administration.
Which of the following cloud models is BEST described as running workloads on resources that are owned by the company and hosted in a company-owned data center, as well as on rented servers in another company's data center?
A. Private
B. Hybrid
C. Community
D. Public
Explanation
The scenario explicitly describes two environments:
Company-owned resources in a company-owned data center → private cloud.
Rented servers in another company’s data center → public cloud (e.g., AWS, Azure, GCP).
Running workloads across both = hybrid cloud.
Why the other options are incorrect:
A. Private
Only uses company-owned and company-hosted resources.
Does not include rented external servers.
C. Community
A shared cloud infrastructure for a specific group (e.g., government agencies, universities).
Not defined by ownership or rental — irrelevant here.
D. Public
Only uses third-party rented resources (e.g., AWS EC2).
Does not include on-premises company-owned systems.
B. Hybrid (Correct)
Definition:
Integration of on-premises (private) and public cloud services, with orchestration between them.
Reference:
CompTIA Server+ SK0-005 Official Study Guide
Chapter 7: Cloud Computing
“Hybrid cloud: A combination of private cloud (on-premises) and public cloud services.”
CompTIA Server+ Exam Objectives (SK0-005)
1.4 Compare and contrast cloud computing concepts.
Hybrid: “Uses both on-premises and off-premises (public cloud) resources.”
NIST Special Publication 800-145
“The hybrid cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together.”
A Linux server was recently updated. Now, the server stops during the boot process with a blank screen and a grub> prompt. Which of the following is the MOST likely cause of this issue?
A. The system is booting to a USB flash drive
B. The UEFI boot was interrupted by a missing Linux boot file
C. The BIOS could not find a bootable hard disk
D. The BIOS firmware needs to be upgraded
Explanation
The grub> prompt is the minimal command-line interface for the GRUB (Grand Unified Bootloader) 2 bootloader. When you see this prompt, it means GRUB has started but has failed to load its normal menu and proceed with the boot process. This is almost always caused by a corruption or misconfiguration of GRUB's core files.
The key clue in the question is that the server "was recently updated." A system update, especially a kernel update, can sometimes:
Fail to update the GRUB configuration file (grub.cfg).
Corrupt the GRUB core image.
Point to a kernel or initramfs file that no longer exists or is in the wrong location.
Because GRUB cannot find the necessary files to continue the boot process, it drops the user to the rescue prompt (grub>).
Why the other options are incorrect:
A. The system is booting to a USB flash drive:
If the system were booting from a non-bootable USB drive, the firmware (BIOS/UEFI) would typically display an error like "No bootable device found" or "Remove disks or other media," not the GRUB prompt. The grub> prompt is specific to the GRUB bootloader, which resides on the server's hard drive.
C. The BIOS could not find a bootable hard disk:This error occurs at an earlier stage, before any bootloader is run. The firmware itself would display an error message, and you would never see a grub> prompt.
D. The BIOS firmware needs to be upgraded:
A firmware upgrade is rarely needed and is not typically triggered by a system update. Firmware issues usually manifest as hardware detection failures or an inability to POST, not by loading a specific bootloader's rescue shell.
Reference
This scenario is a common troubleshooting issue covered under CompTIA Server+ Domain 4.0: Troubleshooting, specifically dealing with OS and software problems introduced by patches. It tests the understanding of the Linux boot process and the critical role of the bootloader. The resolution often involves using the grub> prompt to locate the correct disk, partition, and kernel files to boot the system manually, and then reinstalling or reconfiguring GRUB to fix the issue permanently.
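For illustration, a manual boot from the grub> prompt typically looks like the sketch below; the partition, kernel, and initrd names are placeholders that vary by distribution and disk layout:

    grub> ls                                 # list detected disks and partitions
    grub> set root=(hd0,msdos1)              # partition containing /boot (example)
    grub> linux /vmlinuz root=/dev/sda1 ro   # kernel path and root device are illustrative
    grub> initrd /initrd.img
    grub> boot

Once the system is up, regenerating the configuration (e.g., grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL-family systems) makes the repair permanent.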
A technician has moved a data drive from a new Windows server to an older Windows server. The hardware recognizes the drive, but the data is not visible to the OS. Which of the following is the MOST likely cause of the issue?
A. The disk uses GPT.
B. The partition is formatted with ext4.
C. The partition is formatted with FAT32.
D. The disk uses MBR.
Explanation:
If a data drive is physically recognized by Windows but its data is not visible or accessible within the operating system, the most likely cause is that the drive’s file system is not compatible with Windows. The ext4 file system is a Linux-based format that Windows cannot read natively without third-party software or drivers.
When a technician moves a drive from another server, Windows will detect the hardware at the storage level, but if the partition uses ext4, it will not mount or display the data in File Explorer or Disk Management beyond showing the drive as unallocated or unknown.
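As a practical check, the partition's filesystem type can be confirmed from any Linux system before the drive is moved; the device name below is a placeholder:

    blkid /dev/sdb1    # prints TYPE="ext4", TYPE="ntfs", etc.
    lsblk -f           # filesystem types for every attached device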
Why Not the Other Options:
A. The disk uses GPT:
Windows supports GPT (GUID Partition Table) on all 64-bit versions and with UEFI firmware. This would not prevent the OS from seeing the data.
C. The partition is formatted with FAT32:
FAT32 is fully supported by Windows; the data would be visible, though with some file size and volume limitations.
D. The disk uses MBR:
Windows also supports MBR (Master Boot Record); this would not cause invisibility of data.
Reference:
CompTIA Server+ SK0-005 Exam Objectives – Domain 2.0: Server Administration (installing and configuring server operating systems).
This topic covers identifying and resolving file system compatibility issues when migrating or integrating storage devices across different operating systems.
Which of the following should be configured in pairs on a server to provide network redundancy?
A. MRU
B. SCP
C. DLP
D. CPU
E. NIC
Explanation
To provide network redundancy on a server, components should be configured in pairs so that if one fails, the other can immediately take over. The component responsible for physical network connectivity is the Network Interface Card (NIC).
NIC (E): Network Interface Cards are configured in pairs (or more) and often managed via a technique called NIC Teaming (or bonding/aggregation).
NIC Teaming allows multiple physical NICs to be logically grouped. This provides redundancy (if one NIC fails, traffic shifts to the other) and can also offer load balancing (spreading network traffic across the cards).
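A minimal sketch of the Linux equivalent (bonding) using nmcli, assuming NetworkManager and placeholder interface names eno1 and eno2; on Windows, the built-in NIC Teaming feature accomplishes the same goal:

    nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
    nmcli con add type ethernet con-name bond0-port1 ifname eno1 master bond0
    nmcli con add type ethernet con-name bond0-port2 ifname eno2 master bond0
    nmcli con up bond0

In active-backup mode, one NIC carries traffic while the other stands by, which is the redundancy behavior described above.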
Why the others are incorrect:
MRU (A):
Maximum Receive Unit is a size parameter for network packets, not a component used for hardware redundancy.
SCP (B):
Secure Copy Protocol is a network protocol for securely transferring files, not a hardware component.
DLP (C):
Data Loss Prevention is a set of policies and software tools, not a hardware component configured for network redundancy.
CPU (D):
The Central Processing Unit is the brain of the server. While high-end servers can have multiple CPUs, this configuration provides processing power redundancy/scalability, not specifically network redundancy.
Reference
This question relates to the server hardware and administration topics of the CompTIA Server+ (SK0-005) exam, specifically the methods used to ensure high availability and fault tolerance:
Domain 2.0: Server Administration (implementing high availability and fault tolerance).
Key Concept:
Redundancy is achieved by duplicating critical single points of failure. For networking, the NIC is the critical component that is paired for high availability using NIC Teaming/Bonding.
An administrator receives an alert stating a S.M.A.R.T. error has been detected. Which of the following should the administrator run FIRST to determine the issue?
A. A hard drive test
B. A RAM test
C. A power supply swap
D. A firmware update
Explanation
A S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) error indicates that the storage device itself has detected abnormal operating conditions based on internal monitoring.
These conditions can include:
Increasing bad sectors
Excessive reallocated sectors
Slow read/write response times
High temperature thresholds exceeded
Mechanical issues such as spindle or actuator problems
Imminent drive failure predictions
S.M.A.R.T. is specifically designed to warn ahead of failure to allow the administrator to take action before data loss occurs.
Therefore, the first appropriate step is to run a vendor-recommended comprehensive hard drive diagnostic test such as:
smartctl for Linux
CHKDSK plus the manufacturer's diagnostic utilities for Windows
RAID controller diagnostics for enterprise storage
This confirms whether the S.M.A.R.T. alert is a false positive or evidence of serious degradation. Based on the test results, the administrator will typically back up the data immediately and replace the failing disk.
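On Linux, for example, smartctl (from the smartmontools package) covers this diagnostic step; the device name /dev/sda is a placeholder:

    smartctl -H /dev/sda        # quick overall health verdict
    smartctl -a /dev/sda        # full SMART attribute table and error logs
    smartctl -t long /dev/sda   # start an extended self-test in the background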
Incorrect Options
B. A RAM test
Memory tests like MemTest86 check system RAM for:
Parity/ECC errors
Timing faults
Faulty DIMM modules
RAM errors produce symptoms such as application crashes, BSODs, or data corruption in memory transactions — not S.M.A.R.T. alerts.
S.M.A.R.T. warnings originate only from storage media, so RAM diagnostics would not address the actual alert source.
C. A power supply swap
While unstable power can indirectly cause system failures or corrupted writes, it:
Does not generate S.M.A.R.T. errors
Is not a recommended first troubleshooting action
Replacing a PSU without confirming the reported drive issue could waste time and allow a failing drive to worsen.
D. A firmware update
Updating drive firmware can:
Improve compatibility
Fix performance issues
Address known bugs
However, firmware changes should never be performed on a drive suspected of failing. If the disk is unstable, a firmware flash could:
Completely brick the drive
Trigger immediate data loss
Prevent future recovery attempts
That is why diagnostics must be performed before ANY changes.
Summary
A S.M.A.R.T. alert is an internal failure prediction from the hard drive itself.
The correct action is to:
Run a complete hard drive health diagnostic
Evaluate test results
Back up and replace the drive if failure is confirmed
Choosing another diagnostic first delays necessary recovery action
Which of the following BEST describes a guarantee of the amount of time it will take to restore a downed service?
A. RTO
B. SLA
C. MTBF
D. MTTR
Explanation:
A Service Level Agreement (SLA) is a formal contract between a service provider and a customer. It defines the specific level of service expected, including performance metrics and remedies for failure. A guarantee of the maximum time allowed to restore a service after an outage, often referred to as the "Recovery Time Objective (RTO)," is a common and critical component of an SLA. The SLA is the document that officially guarantees this time.
Incorrect Options:
A. RTO (Recovery Time Objective):
This is the target time you aim for to restore a service, but it is an internal goal, not the formal guarantee to the customer.
The RTO is often defined within the SLA.
C. MTBF (Mean Time Between Failures):
This is a reliability metric that predicts the average time between one system failure and the next.
It describes how often a component fails, not how long it takes to fix it.
D. MTTR (Mean Time To Repair):
This is a metric for the average time it takes to repair a failed component and restore it to normal operation.
Like MTBF, it is a historical average used for planning, not a guaranteed maximum time for service restoration.
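For context, the two averages are commonly combined into an availability estimate, which illustrates why they are planning metrics rather than guarantees:

    Availability = MTBF / (MTBF + MTTR)
    Example: MTBF = 1,000 hours and MTTR = 2 hours gives 1,000 / 1,002 ≈ 99.8%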
Reference:
While specific guarantees are detailed in individual contracts, the definition of an SLA as the document containing these guarantees is standard. For authoritative information, refer to the official CompTIA Server+ (SK0-005) Exam Objectives under Domain 3.0: Security and Disaster Recovery, which covers business continuity concepts such as SLAs.