Think You're Ready?

Your Final Exam Before the Final Exam.
Dare to Take It?

Topic 1: Exam Set A

A server administrator is configuring a new server that will hold large amounts of information. The server will need to be accessed by multiple users at the same time. Which of the following server roles will the administrator MOST likely need to install?

A. Messaging

B. Application

C. Print

D. Database

D.   Database

Explanation:

The scenario describes a server that:

Holds large amounts of information (high data volume).
Must be accessed by multiple users at the same time (concurrent access).

These requirements are characteristic of a database server, whose primary role is to store, manage, and retrieve structured data efficiently for multiple simultaneous client connections.
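As a minimal stand-in for the database-server role described above, the sketch below uses SQLite's shared-cache in-memory mode (the database name "demo" and the table are invented for the demo) to show two independent client connections reading one shared, structured dataset; a real deployment would use a server product such as SQL Server, MySQL, PostgreSQL, or Oracle.

```python
import sqlite3

# Two independent "client" connections attach to the same shared-cache
# in-memory database, playing the part of a multi-user database server.
con1 = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)
con2 = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)

# Client 1 writes structured data and commits...
con1.execute("CREATE TABLE users (name TEXT)")
con1.execute("INSERT INTO users VALUES ('alice')")
con1.commit()

# ...and client 2 immediately reads the same shared data.
print(con2.execute("SELECT name FROM users").fetchall())
```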

Why the other options are incorrect:

A. Messaging
Messaging servers (e.g., Microsoft Exchange, Postfix) handle email, instant messaging, or message queuing. They do not primarily store or manage large datasets for concurrent user access.

B. Application
Application servers host and execute business logic or web applications (e.g., IIS, Apache Tomcat). While they may connect to databases, they do not store large amounts of structured data themselves.

C. Print
Print servers manage print queues and shared printers. They have no role in storing or providing access to large datasets.

D. Database (Correct)
Database servers (e.g., Microsoft SQL Server, MySQL, PostgreSQL, Oracle) are specifically designed to:

Store large volumes of structured data.
Support concurrent access via client connections.
Provide data integrity, indexing, and query optimization.

Reference:
CompTIA Server+ SK0-005 Official Study Guide
Chapter 1: Server Hardware and Chapter 5: Server Roles. Database servers are explicitly identified as a core server role for managing large, shared datasets with multi-user access.

CompTIA Server+ Exam Objectives (SK0-005)
1.3 Compare and contrast server roles and requirements for each.

Database server:
"Supports multiple simultaneous connections, high I/O, large storage capacity."

A datacenter in a remote location lost power. The power has since been restored, but one of the servers has not come back online. After some investigation, the server is found to still be powered off. Which of the following is the BEST method to power on the server remotely?

A. Crash cart

B. Out-of-band console

C. IP KVM

D. RDP

B.   Out-of-band console

Explanation

An out-of-band (OOB) management console provides a dedicated, separate channel for managing a server's hardware, independent of the server's main operating system. This is typically achieved through a dedicated network interface (like an iDRAC for Dell, iLO for HPE, or IPMI for generic servers) that allows an administrator to perform actions such as power cycling, accessing the BIOS, and mounting virtual media, even when the server is completely powered off.

In this specific scenario:

The server is in a remote datacenter.
It is powered off and did not restart after a power outage.
The operating system is not running, so OS-dependent tools like RDP are useless.
An out-of-band management controller, if configured, would still have power (often from the standby power of the PSU) and network connectivity. This allows an administrator to log in remotely and literally press a "Power On" button from the web interface.
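Most current BMCs (the iDRAC, iLO, and IPMI-class controllers mentioned above) also expose the standardized Redfish REST API for exactly this power-on operation. The sketch below builds, but deliberately does not send, a Redfish power-on request; the BMC address, system ID, and the assumption that the controller speaks Redfish are illustrative, not part of the question.

```python
import json
import urllib.request

# Hypothetical BMC address and system ID for illustration only.
bmc = "https://10.0.0.50"
reset_action = "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"

# Redfish defines ResetType "On" for powering a system up from standby.
req = urllib.request.Request(
    bmc + reset_action,
    data=json.dumps({"ResetType": "On"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would submit it; that needs a reachable BMC
# and credentials, so this sketch stops at building the request.
print(req.full_url)
```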

Why the other options are incorrect:

A. Crash cart:
A crash cart is a physical mobile station (monitor, keyboard, mouse) that is rolled up to a server for direct, local console access. This is not a remote solution and is impossible to use for a server in a remote location.

C. IP KVM:
An IP-based Keyboard, Video, and Mouse (KVM) switch provides remote console access, but it only relays keyboard, video, and mouse signals; it has no power control of its own. It cannot power on a completely powered-off server unless paired with a separate remote power device. Out-of-band management, with its integrated power control, is the better fit for this specific task.

D. RDP (Remote Desktop Protocol):
RDP is a protocol for connecting to a fully booted and running Windows operating system. Since the server is powered off, there is no OS to connect to, making RDP completely ineffective.

Reference
This question falls under CompTIA Server+ Domain 5.0: Disaster Recovery, specifically addressing recovery methodologies and the use of remote access tools to manage server hardware when the OS is unavailable. Out-of-band management is a critical concept for any server administrator, especially for managing infrastructure in remote or lights-out datacenters.

A server that recently received hardware upgrades has begun to experience random BSOD conditions. Which of the following are likely causes of the issue? (Choose two.)

A. Faulty memory

B. Data partition error

C. Incorrectly seated memory

D. Incompatible disk speed

E. Uninitialized disk

F. Overallocated memory

A.   Faulty memory
C.   Incorrectly seated memory

Explanation:

When a server begins to experience random Blue Screen of Death (BSOD) errors after hardware upgrades, the most probable cause lies in recently modified or newly installed components. Memory (RAM) issues are among the most common triggers of BSOD conditions, as system stability relies heavily on consistent and error-free access to memory resources.

A. Faulty memory:
Defective or failing RAM modules can cause data corruption in active processes, resulting in critical system crashes. Even a single bad memory cell can produce unpredictable behavior and BSOD errors under load. Running diagnostics such as Windows Memory Diagnostic or Memtest86 can help confirm faulty memory as the cause.

C. Incorrectly seated memory:
When new memory is installed but not firmly seated in the motherboard slots, intermittent contact can lead to unstable system behavior. This often results in crashes that appear random and are difficult to trace. Reseating the memory and ensuring proper installation is a key first step in hardware troubleshooting.

Incorrect Options:

B. Data partition error:
Would typically cause data access or boot issues but not BSODs related to hardware faults.

D. Incompatible disk speed:
Can lead to performance degradation, but not critical system failures.

E. Uninitialized disk:
This only affects disk usability and would not trigger system crashes.

F. Overallocated memory:
More applicable to virtual environments, not to physical hardware operations that cause BSODs.

Reference:
CompTIA Server+ SK0-005 Exam Objectives – Domain 2.1:
Install and Configure Server Hardware and Storage Technologies. This domain covers troubleshooting of hardware-related issues such as faulty RAM, improper installation, and memory compatibility problems that often result in BSODs after hardware upgrades.

A security analyst suspects a remote server is running vulnerable network applications. The analyst does not have administrative credentials for the server. Which of the following would MOST likely help the analyst determine if the applications are running?

A. User account control

B. Anti-malware

C. A sniffer

D. A port scanner

D.   A port scanner

Explanation

The analyst needs to determine if network applications are running on a remote server without administrative credentials.

Port Scanner (D):
A port scanner is a tool designed to probe a server or host for open ports. When a port is open, it indicates that a service or application (like a web server on port 80/443, SSH on port 22, or FTP on port 21) is actively running and listening for connections. This can be done remotely and without administrative credentials, making it the most suitable tool for this task. Tools like Nmap are common port scanners.
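The direct TCP probe a port scanner performs can be sketched in a few lines of Python (a real engagement would use a dedicated tool such as Nmap). The demo opens its own local listener so the probe has something to find; the function name and addresses are ours, for illustration.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect succeeds, i.e. a service is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo: open a local listener so the probe has something to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
probe_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", probe_port))  # True: the port is listening
listener.close()
```

Note that, as in a real scan, nothing here requires credentials on the target: the probe only observes whether the connection is accepted.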

A Sniffer (C):
A network sniffer (or protocol analyzer) captures and inspects network traffic. While it can see traffic to the server, it would only confirm applications are running if they are actively communicating and the sniffer is on the same segment or an intermediary point. A port scan is a direct, active method to check for listening services, which is a much more effective first step.

Anti-malware (B):
Anti-malware software is designed to detect and remove malicious programs. It operates on the server and requires installation and typically administrative privileges, so it would not help a remote analyst who lacks access.

User Account Control (A):
User Account Control (UAC) is a security feature in Windows operating systems that prevents unauthorized changes to the system. It is a local access control mechanism and is irrelevant to remotely identifying running services.

Reference
This question relates directly to the Server Security and Troubleshooting domain of the SK0-005 exam, specifically the use of network reconnaissance tools:
Domain 2.0: Server Security (Identifying and using common security tools).
Domain 4.0: Troubleshooting (Diagnosing network service issues).

Key Concept:
Port scanning is a fundamental technique for network discovery and security auditing, providing a direct map of a server's exposed services.

A technician has received multiple reports of issues with a server. The server occasionally has a BSOD, powers off unexpectedly, and has fans that run continuously. Which of the following BEST represents what the technician should investigate during troubleshooting?

A. Firmware incompatibility

B. CPU overheating

C. LED indicators

D. ESD issues

B.   CPU overheating

Explanation:

The reported symptoms are:

Occasional BSOD (Blue Screen of Death) → often caused by hardware instability or thermal throttling.
Unexpected power-off → common with thermal shutdown protection in modern CPUs and motherboards.
Fans running continuously at high speed → indicates the system is trying to cool critical components (especially the CPU).

These are classic signs of CPU (or other component) overheating.

Why the other options are incorrect:

A. Firmware incompatibility
Firmware issues (e.g., BIOS/UEFI bugs) can cause crashes or boot failures, but not continuous high fan speed or thermal shutdowns. Fan behavior is directly tied to temperature sensors.

C. LED indicators
LED diagnostics help identify boot or hardware faults, but they are a diagnostic tool, not the root cause. The question asks what to investigate, not what to observe.

D. ESD issues
Electrostatic discharge can damage components during installation, but it would cause immediate or intermittent failures, not temperature-related symptoms like high fan noise and thermal shutdowns.

B. CPU overheating (Correct)
Overheating triggers:

Thermal throttling → instability → BSOD.
Critical temperature shutdown → unexpected power-off.
Fan curve escalation → fans run at 100% to cool the CPU.
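As a small illustration of the thermal-protection logic above, the sketch below interprets readings the way a monitoring script might on Linux, where temperatures appear under /sys/class/thermal as millidegrees Celsius; the 90 °C limit and the sample values are illustrative assumptions.

```python
# Linux reports temperatures in /sys/class/thermal/thermal_zone*/temp as
# millidegrees Celsius; the 90 C critical limit below is illustrative.
def is_overheating(millidegrees: int, limit_c: float = 90.0) -> bool:
    """True when a reading is at or above the thermal-shutdown limit."""
    return millidegrees / 1000.0 >= limit_c

print(is_overheating(72000))  # False: 72.0 C, normal under load
print(is_overheating(95500))  # True: 95.5 C would trip protection
```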

Reference:
CompTIA Server+ SK0-005 Official Study Guide, Chapter 3: Server Maintenance → Thermal Issues: "Overheating is indicated by high fan speeds, system shutdowns, and BSODs. Check CPU heatsink, thermal paste, and airflow."

CompTIA Server+ Exam Objectives (SK0-005)
4.2 Given a scenario, troubleshoot common hardware failures.

Symptoms of overheating include:

High fan noise
Random reboots/shutdowns
BSOD with thermal-related stop codes

Which of the following must a server administrator do to ensure data on the SAN is not compromised if it is leaked?

A. Encrypt the data that is leaving the SAN

B. Encrypt the data at rest

C. Encrypt the host servers

D. Encrypt all the network traffic

B.   Encrypt the data at rest

Explanation

The question focuses on protecting the data on the SAN itself in the event it is "leaked." In a SAN context, "leaked" typically refers to a physical compromise, such as someone stealing a disk drive or an entire storage array from the data center. It could also refer to unauthorized access to the storage system's backend.

Data at rest refers to data that is not actively moving between devices or networks and is stored on a physical medium—in this case, the disks within the SAN.

Encrypting data at rest ensures that the data is unreadable without the proper decryption keys, even if the physical storage media is stolen or otherwise physically accessed. This directly addresses the threat of the data being "compromised if it is leaked."

Why the other options are incorrect:

A. Encrypt the data that is leaving the SAN:
This describes data in motion (e.g., over the Fibre Channel or iSCSI network). While important for preventing eavesdropping on the network, it does not protect the data stored on the physical disks within the SAN if they are physically stolen.

C. Encrypt the host servers:
This is vague. If it means encrypting the operating system drives of the servers connected to the SAN, that protects the local server data, not the data residing on the separate SAN storage.

D. Encrypt all the network traffic:
This is similar to option A. It protects data in transit between the server and the SAN, but it does nothing to protect the data stored on the SAN's physical disks.

Reference
This question aligns with CompTIA Server+ Domain 2.0: Security, specifically concerning data security and protection methods. The core concept tested is the understanding of different encryption states:

Data at Rest:
Protected by storage-level encryption (e.g., self-encrypting drives, SAN controller-based encryption).
Data in Transit:
Protected by network-level encryption (e.g., IPsec, FC-SP).

For the specific scenario of data being "leaked" from the SAN, protecting data at rest is the primary and most effective control.

Due to a recent application migration, a company’s current storage solution does not meet the necessary requirements for hosting data without impacting performance when the data is accessed in real time by multiple users. Which of the following is the BEST solution for this issue?

A. Install local external hard drives for affected users.

B. Add extra memory to the server where data is stored.

C. Compress the data to increase available space.

D. Deploy a new Fibre Channel SAN solution.

D.   Deploy a new Fibre Channel SAN solution.

Explanation:

The scenario indicates that after an application migration, the company’s existing storage solution cannot support real-time, multi-user access without performance degradation. This type of issue is typically caused by limited I/O throughput, high latency, or inadequate bandwidth on the current storage system. To address this, implementing a Fibre Channel Storage Area Network (SAN) is the most effective and enterprise-grade solution.

A Fibre Channel SAN provides high-speed, low-latency, and dedicated connectivity between servers and storage devices. Unlike traditional NAS or direct-attached storage, SANs use a dedicated network fabric designed solely for storage traffic, ensuring data access remains fast and consistent even when multiple users access the same data concurrently. SANs are also scalable, allowing additional storage and servers to be integrated seamlessly as the organization grows. Moreover, Fibre Channel SANs offer advanced redundancy, failover, and data integrity features critical for business continuity and performance-sensitive applications.

Incorrect Options:

A. Install local external hard drives for affected users:
This option decentralizes data, complicates management, backups, and security, and does not solve the performance bottleneck for shared data access.

B. Add extra memory to the server where data is stored:
While increasing memory may enhance caching, it cannot fix the fundamental storage throughput issue caused by a slow or overloaded storage backend.

C. Compress the data to increase available space:
Data compression helps conserve storage capacity but does not address the need for faster access or simultaneous multi-user performance improvements.

Reference:
CompTIA Server+ SK0-005 Exam Objectives – Domain 2.3:
Summarize Storage Solutions and Their Features. Fibre Channel SANs are noted for their high performance, scalability, and ability to handle enterprise workloads that demand fast, reliable, and concurrent data access across multiple users or applications.

Which of the following DR testing scenarios is described as verbally walking through each step of the DR plan in the context of a meeting?

A. Live failover

B. Simulated failover

C. Asynchronous

D. Tabletop

D.   Tabletop

Explanation

The scenario described—verbally walking through each step of the Disaster Recovery (DR) plan in the context of a meeting—is the definition of a Tabletop Exercise.

Tabletop Exercise (D):
This is a discussion-based drill where key personnel meet in a conference room to review the DR plan. They discuss their roles, responsibilities, and the specific actions they would take in response to a simulated disaster. No actual systems or resources are engaged; it's a walk-through to validate the plan's logic and personnel's understanding.

Live Failover (A):
This is a full, real-world test where production services are actually failed over to the disaster recovery site, often taking the primary site completely offline. This is the most comprehensive but riskiest form of testing.

Simulated Failover (B):
This involves using a copy of the production environment or dedicated test systems to perform the failover steps. It validates the technical procedures without affecting the live production environment.

Asynchronous (C):
This is a term related to data replication technology, not a DR testing scenario. Asynchronous replication means data is copied to the secondary site after it has been written to the primary storage, often with a slight delay.

Reference
This question pertains to the Disaster Recovery (DR) and Business Continuity (BC) aspects of the CompTIA Server+ (SK0-005) exam, specifically the methods used for testing and validating resilience plans:
Domain 3.0: Storage (Understanding backup and recovery concepts).

Key Concept:
DR testing ensures that the organization can successfully recover its critical services after an outage. Tabletop exercises are the least intrusive and often the first step in a comprehensive testing strategy.

A technician recently upgraded several pieces of firmware on a server. Ever since the technician rebooted the server, it no longer communicates with the network. Which of the following should the technician do FIRST to return the server to service as soon as possible?

A. Replace the NIC

B. Make sure the NIC is on the HCL

C. Reseat the NIC

D. Downgrade the NIC firmware

D.   Downgrade the NIC firmware

Explanation

The timeline is critical:

The server was working fine until firmware was upgraded.
Immediately after reboot, network communication failed.

This strongly indicates the NIC firmware update introduced a bug, incompatibility, or corruption — a common post-update failure mode.

Step-by-Step Reasoning:

Root cause isolation:
Only the firmware changed → suspect the new NIC firmware.

Fastest path to recovery:
Revert the last known good state → downgrade/rollback the NIC firmware.
This avoids unnecessary hardware replacement or reseating (which won’t fix a firmware bug).

Why the other options are incorrect or less appropriate first steps:

A. Replace the NIC
Overkill and time-consuming. The NIC was working before the firmware flash. Replacing hardware should be a last resort.

B. Make sure the NIC is on the HCL
Irrelevant — the NIC was already in use and functional before the update. HCL compliance doesn’t change after a firmware flash.

C. Reseat the NIC
A basic hardware troubleshooting step, but not the most likely fix when the failure began immediately after a firmware update. Reseating won’t undo corrupted firmware.

D. Downgrade the NIC firmware (Correct)
Best and fastest first step:

Uses the previous working firmware version (often stored in backup or vendor rollback package).
Directly addresses the change that caused the failure.
Standard practice in enterprise environments after failed firmware updates.

Reference:
CompTIA Server+ SK0-005 Official Study Guide, Chapter 3: Server Maintenance → Firmware Updates: "If a system fails to function after a firmware update, rollback to the previous version as the first recovery step."
CompTIA Server+ Exam Objectives (SK0-005):
3.3 Given a scenario, perform server maintenance: "Apply patches and updates… and rollback if necessary."
4.1 Given a scenario, troubleshoot common server issues: network failure after a firmware update → revert the firmware.

A server administrator is configuring the IP address on a newly provisioned server in the testing environment. The network VLANs are configured as follows:

The administrator configures the IP address for the new server as follows:

IP address: 192.168.1.1/24

Default gateway: 192.168.10.1

A ping sent to the default gateway is not successful. Which of the following IP address/default gateway combinations should the administrator have used for the new server?

A. IP address: 192.168.10.2/24; Default gateway: 192.168.10.1

B. IP address: 192.168.1.2/24; Default gateway: 192.168.10.1

C. IP address: 192.168.10.3/24; Default gateway: 192.168.20.1

D. IP address: 192.168.10.24/24; Default gateway: 192.168.30.1

A.   IP address: 192.168.10.2/24; Default gateway: 192.168.10.1

Explanation

The core issue here is a network misconfiguration. The server and its default gateway are on different logical networks, which prevents communication.

Let's analyze the original, faulty configuration:

Server IP: 192.168.1.1/24

The /24 subnet mask (255.255.255.0) means the first three octets define the network.
Therefore, the server is on the 192.168.1.0 network.

Default Gateway: 192.168.10.1

This IP address falls within the 192.168.10.0 network.
A device can only communicate directly (without a router) with other devices on the same local network. Since the server (192.168.1.1) and the gateway (192.168.10.1) are on different networks (192.168.1.0 vs. 192.168.10.0), the server's ping request cannot even reach the gateway.
For the server to successfully use the default gateway 192.168.10.1, the server's IP address must be on the same network as the gateway. In this case, that network is 192.168.10.0/24.

Analysis of the Correct Option (A):

IP address: 192.168.10.2/24:
This IP is part of the 192.168.10.0/24 network.

Default gateway: 192.168.10.1:
This gateway IP is also part of the 192.168.10.0/24 network.

Now both the server and its gateway are on the same local network (192.168.10.0), so the server can successfully send traffic to the gateway.

Why the other options are incorrect:

B. IP address: 192.168.1.2/24 Default gateway: 192.168.10.1:
This repeats the original mistake. The server (192.168.1.2) is on a different network than the gateway (192.168.10.1).

C. IP address: 192.168.10.3/24 Default gateway: 192.168.20.1:
Here, the server is on the 192.168.10.0 network, but the gateway is on the 192.168.20.0 network. They are mismatched.

D. IP address: 192.168.10.24/24 Default gateway: 192.168.30.1:
Similarly, the server (192.168.10.0 network) and gateway (192.168.30.0 network) are on different networks.
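The same-subnet check that the analysis above walks through for each option can be verified mechanically with Python's standard ipaddress module; the function name is ours, the addresses come straight from the question.

```python
import ipaddress

def same_subnet(host_cidr: str, gateway_ip: str) -> bool:
    """True when the gateway falls inside the host interface's subnet."""
    iface = ipaddress.ip_interface(host_cidr)   # e.g. "192.168.1.1/24"
    return ipaddress.ip_address(gateway_ip) in iface.network

# The faulty configuration: different /24 networks, so the ping fails.
print(same_subnet("192.168.1.1/24", "192.168.10.1"))   # False
# Option A: host and gateway share 192.168.10.0/24.
print(same_subnet("192.168.10.2/24", "192.168.10.1"))  # True
```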

Reference
This question tests your knowledge of IP addressing and subnetting, a fundamental concept covered in CompTIA Server+ Domain 1.0: Server Hardware Installation and Management and foundational for all network administration. The key principle is that for a host to use a default gateway, both must reside on the same local IP subnet. The default gateway is a host's "door" to other networks, but the host must be in the same "room" (subnet) as the door to use it.
