Security controls in a data center are being reviewed to ensure data is properly protected and that human life considerations are included. Which of the following best describes how the controls should be set up?
A. Remote access points should fail closed.
B. Logging controls should fail open.
C. Safety controls should fail open.
D. Logical security controls should fail closed.
Explanation:
The principle of "fail-safe" or "fail-secure" is applied differently depending on the type of control. For safety controls, which are designed to protect human life and physical well-being, the default behavior during a failure (e.g., power loss, system malfunction) must be to fail open. This means that in the event of a failure, the control defaults to a state that allows people to escape or remain safe. A classic example is a mantrap or emergency exit door; if the power fails, the door must unlock (fail open) to allow people to exit, rather than trapping them inside a potentially dangerous situation like a fire.
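As an illustration of the fail-open principle, here is a minimal sketch of the control logic for an emergency exit held shut by a magnetic lock; the class and method names are hypothetical, not a real controller API. Losing power de-energizes the magnet, so the door defaults to an unlocked, safe state.

```python
# Minimal sketch of fail-safe (fail-open) logic for a safety control.
# All names here are hypothetical illustrations, not a real controller API.

class EmergencyExitDoor:
    """Magnetic lock that releases (fails open) whenever power is lost."""

    def __init__(self):
        self.powered = True
        self.locked = True  # the energized magnet holds the door shut

    def on_power_loss(self):
        # Fail-safe: losing power de-energizes the magnet, unlocking the door.
        self.powered = False
        self.locked = False  # people can always evacuate

    def is_exit_possible(self) -> bool:
        return not self.locked


door = EmergencyExitDoor()
door.on_power_loss()
assert door.is_exit_possible()  # the evacuation path remains available
```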
Analysis of Incorrect Options:
A. Remote access points should fail closed:
This is generally correct for security but not directly related to human life. Remote access points (like VPN gateways) should fail closed (deny access) to prevent unauthorized entry if a system failure occurs. However, this prioritizes security over safety and is not the best answer given the explicit requirement for "human life considerations."
B. Logging controls should fail open:
Logging controls are detective, not preventive. There is no universal "fail open" or "fail closed" state for logging. If a logging system fails, it simply stops recording events, which is a problem in its own right but does not directly impact safety or physical access. The phrase "fail open" does not logically apply to logging.
D. Logical security controls should fail closed:
This is a correct security principle for logical (technical) controls like firewalls or authentication systems. They should fail closed (deny access) to maintain security in the event of a failure. However, again, this does not address the "human life considerations" highlighted in the question.
Reference:
This question integrates concepts from Domain 1.0: General Security Concepts (security controls) and physical safety principles. The key is understanding the critical distinction between:
Fail-Secure (Fail Closed):
Preferred for security controls (e.g., doors remain locked during power loss to prevent unauthorized access).
Fail-Safe (Fail Open):
Required for safety controls (e.g., doors unlock during power loss to allow evacuation).
This balance between security and life safety is a fundamental aspect of physical security design in data centers and other facilities.
Which of the following is the first step to take when creating an anomaly detection process?
A. Selecting events
B. Building a baseline
C. Selecting logging options
D. Creating an event log
Explanation: The first step in creating an anomaly detection process is building a baseline of normal behavior within the system. This baseline serves as a reference point to identify deviations or anomalies that could indicate a security incident. By understanding what normal activity looks like, security teams can more effectively detect and respond to suspicious behavior.
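To make this concrete, here is a minimal sketch of baseline-first detection, assuming a hypothetical stream of hourly login counts: the baseline is computed first from known-good activity, and only then can deviations be scored against it.

```python
# Minimal sketch of baseline-first anomaly detection (hypothetical data and
# thresholds). Step 1 is building a baseline of normal behavior; deviations
# are then scored against that reference point.
from statistics import mean, stdev

# Step 1: build the baseline from a window of normal activity
# (e.g., login counts per hour observed during a known-good period).
normal_activity = [42, 38, 45, 40, 44, 39, 41, 43]
baseline_mean = mean(normal_activity)
baseline_stdev = stdev(normal_activity)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from the baseline."""
    z_score = abs(observation - baseline_mean) / baseline_stdev
    return z_score > threshold

print(is_anomalous(41))   # False: consistent with the baseline
print(is_anomalous(120))  # True: far outside normal behavior
```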
Which of the following methods would most likely be used to identify legacy systems?
A. Bug bounty program
B. Vulnerability scan
C. Package monitoring
D. Dynamic analysis
Explanation:
The correct answer is B. Vulnerability scan.
A vulnerability scan is an automated, high-level test that proactively scans a network to identify known vulnerabilities, misconfigurations, and missing patches in systems, applications, and network devices.
Identifying legacy systems is a primary function of a vulnerability scanner. These tools work by probing IP addresses and comparing the responses (e.g., open ports, running services, banner information, system responses) against a database of known signatures.
Legacy systems are characterized by outdated operating systems (e.g., Windows XP, Windows 7, old Linux kernels), end-of-life software, and services running outdated protocols. A vulnerability scanner would quickly flag these systems for running unsupported OS versions, having known critical vulnerabilities for which no patch exists, or using insecure ciphers and protocols (e.g., SSLv2, TLS 1.0, SMBv1).
The scan report would provide a clear inventory of these non-compliant, legacy systems, allowing the security team to prioritize them for remediation, isolation, or replacement.
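The banner-matching technique described above can be sketched in a few lines of Python; the host, port, and signature strings are hypothetical examples, and real scanners such as Nessus, Qualys, and OpenVAS rely on far larger signature databases.

```python
# Minimal sketch of the banner-matching technique a vulnerability scanner uses
# to flag legacy systems. Signatures, host, and port are hypothetical examples.
import socket

LEGACY_SIGNATURES = ["Windows XP", "Windows 7", "SSLv2", "SMBv1", "OpenSSH_4."]

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a service and read whatever banner it volunteers."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(1024).decode(errors="replace")

def flag_if_legacy(host: str, port: int) -> None:
    banner = grab_banner(host, port)
    hits = [sig for sig in LEGACY_SIGNATURES if sig in banner]
    if hits:
        print(f"{host}:{port} looks legacy -> matched {hits}: {banner!r}")

# Example usage against a host you are authorized to scan:
# flag_if_legacy("192.0.2.10", 22)
```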
Why the other options are incorrect:
A. Bug bounty program:
A bug bounty program is a crowdsourced initiative where external security researchers are incentivized to find and report vulnerabilities in a company's public-facing applications (e.g., websites, web apps). It is not a method for discovering internal, networked legacy systems. These programs are targeted and scoped, not broad network discovery tools.
C. Package monitoring:
Package monitoring tools track software packages and dependencies on a system, often for the purpose of managing updates or detecting unauthorized software changes. While it could tell you that an individual system has old software installed, it is not an efficient method for discovering and inventorying all legacy systems across an entire network. You would first need to know which systems to point the monitor at.
D. Dynamic analysis:
Dynamic analysis is a security testing method that involves executing code or running software to analyze its behavior for vulnerabilities. It is used primarily on applications (e.g., web apps, binaries) in a sandboxed environment to find flaws like memory leaks or input validation errors. It is not a network discovery tool and is not used to identify legacy operating systems or network devices.
Reference:
This aligns with the purpose of vulnerability scanning in the CompTIA Security+ SY0-701 objectives, particularly Domain 4.3: Explain various activities associated with vulnerability management. Knowing your attack surface is impossible without a complete inventory of assets, including legacy systems, and vulnerability management programs, which start with scanning, are the primary method for achieving this.
Tools like Nessus, Qualys, and OpenVAS are classic examples of vulnerability scanners that excel at identifying legacy systems and reporting on their associated risks.
An organization maintains intellectual property that it wants to protect. Which of the following concepts would be most beneficial to add to the company's security awareness training program?
A. Insider threat detection
B. Simulated threats
C. Phishing awareness
D. Business continuity planning
Explanation:
Why A is Correct:
Intellectual property (IP) is most vulnerable to threats from within an organization. Insiders (employees, contractors) have legitimate access to sensitive data and are therefore in the best position to steal it, whether maliciously or accidentally. Adding insider threat detection to security awareness training educates employees on:
Recognizing behaviors that may indicate an insider threat (e.g., unauthorized data access, attempts to bypass controls).
Understanding the policies and procedures for protecting IP.
Knowing how to report suspicious activity anonymously.
This focus directly addresses the primary risk to intellectual property by turning the entire workforce into a proactive layer of defense.
Why B is Incorrect:
Simulated threats (such as phishing simulations) are a valuable training methodology for teaching employees to recognize attacks. However, they are a technique, not a core concept. The question asks for the most beneficial concept to add; while simulated phishing could be part of the training, the overarching need is to address the specific risk of IP theft by insiders.
Why C is Incorrect:
Phishing awareness is critical for defending against external threats that try to trick employees into revealing credentials or installing malware. While important, it is not the most beneficial concept for protecting intellectual property. IP is more often compromised through intentional insider theft, accidental leakage, or poor internal controls than through phishing alone.
Why D is Incorrect:
Business continuity planning (BCP) focuses on maintaining operations during and after a disaster (e.g., natural disaster, cyberattack). It is about availability and recovery, not primarily about protecting the confidentiality of intellectual property from theft or leakage.
Reference:
This question falls under Domain 5.0: Security Program Management and Oversight, specifically covering security awareness and training programs tailored to organizational risks. Protecting intellectual property requires a strong focus on insider risk, making insider threat detection a key training topic.
A company wants to verify that the software the company is deploying came from the vendor the company purchased the software from. Which of the following is the best way for the company to confirm this information?
A. Validate the code signature.
B. Execute the code in a sandbox.
C. Search the executable for ASCII strings.
D. Generate a hash of the files.
Explanation:
A) Validate the code signature is the correct answer.
Code signing is a process where software vendors digitally sign their software using a private key. The corresponding public key is used to verify the signature. By validating the code signature, the company can:
Authenticate the source:
Confirm the software indeed came from the claimed vendor.
Ensure integrity:
Verify that the software has not been tampered with since it was signed by the vendor.
This provides a direct and reliable method to verify both the origin and integrity of the software.
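As a rough sketch of the verification step, the following Python uses the third-party cryptography package to check a detached RSA signature against a vendor's public key. Real code-signing schemes (e.g., Authenticode) additionally validate the signer's certificate chain, which this sketch omits, and the file paths shown are hypothetical.

```python
# Minimal sketch of verifying a detached RSA signature with the vendor's
# public key. Requires the third-party "cryptography" package; all file
# paths are hypothetical. Certificate-chain validation is omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_vendor_signature(file_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the signature over the file verifies."""
    with open(file_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    try:
        # Authenticates the source (only the vendor's private key could have
        # produced this signature) and integrity (tampering breaks it).
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# if not verify_vendor_signature("installer.bin", "installer.sig", "vendor_pub.pem"):
#     raise SystemExit("Do not deploy: signature verification failed")
```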
Why the others are incorrect:
B) Execute the code in a sandbox:
Sandboxing is used to observe the behavior of software in an isolated environment (e.g., to detect malware). It does not verify the source of the software—only how it behaves.
C) Search the executable for ASCII strings:
This might reveal metadata or human-readable text (e.g., vendor names) but is easily spoofed and not a secure method for verification. Attackers can embed false information in malicious software.
D) Generate a hash of the files:
Hashing (e.g., SHA-256) can verify integrity (that the file hasn’t changed) if the company has a trusted hash provided by the vendor. However, it does not authenticate the source. If the company obtains the hash from an untrusted location (e.g., a compromised website), it could be misled. Code signing combines authentication and integrity.
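For comparison, here is a minimal hash-verification sketch using Python's standard-library hashlib; note that it only helps if vendor_published_hash (a hypothetical placeholder here) was obtained over a trusted channel.

```python
# Minimal sketch of integrity checking with SHA-256 (hashlib is in the
# standard library). The expected hash must come from a trusted channel,
# which is exactly the limitation discussed above.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# vendor_published_hash = "..."  # only trustworthy if obtained out of band
# assert sha256_of("installer.exe") == vendor_published_hash
```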
Reference:
This question tests knowledge of cryptographic concepts and secure software deployment in the CompTIA Security+ SY0-701 objectives. Code signing is an industry-standard practice for verifying software provenance, and it leverages public key infrastructure (PKI) to provide trust.
A small business uses kiosks on the sales floor to display product information for customers. A security team discovers the kiosks use end-of-life operating systems. Which of the following is the security team most likely to document as a security implication of the current architecture?
A. Patch availability
B. Product software compatibility
C. Ease of recovery
D. Cost of replacement
Explanation:
An end-of-life (EOL) or end-of-service-life (EOSL) operating system no longer receives security patches, updates, or vulnerability fixes from the vendor. This is the most critical security implication because it means any newly discovered vulnerabilities in the OS will remain unpatched, leaving the kiosks permanently exposed to exploits. Attackers often target EOL systems precisely because they know these vulnerabilities will never be fixed.
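A simple inventory check illustrates why this matters. The kiosk inventory below is hypothetical, while the EOL dates for Windows XP (April 8, 2014) and Windows 7 (January 14, 2020) are the actual vendor end-of-support dates.

```python
# Minimal sketch of flagging end-of-life operating systems in an asset
# inventory. The kiosk inventory is a hypothetical illustration.
from datetime import date

EOL_DATES = {
    "Windows XP": date(2014, 4, 8),   # actual Microsoft end-of-support date
    "Windows 7": date(2020, 1, 14),   # actual Microsoft end-of-support date
}

kiosks = [
    {"name": "kiosk-01", "os": "Windows 7"},
    {"name": "kiosk-02", "os": "Windows XP"},
]

for kiosk in kiosks:
    eol = EOL_DATES.get(kiosk["os"])
    if eol and eol < date.today():
        print(f"{kiosk['name']}: {kiosk['os']} reached EOL on {eol}; "
              "no further security patches are available")
```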
Why the others are incorrect:
B) Product software compatibility:
While compatibility might be a concern for functionality, it is not the primary security implication. The question specifically asks for a security implication, and the lack of patches is a direct and severe security risk.
C) Ease of recovery:
This refers to how quickly a system can be restored after a failure. While EOL systems might be harder to recover due to outdated drivers or lack of support, this is an operational concern, not the most direct security implication.
D) Cost of replacement:
This is a financial or business consideration. While upgrading from EOL systems incurs costs, the security team's focus in documentation would be on the risk (e.g., unpatched vulnerabilities), not the financial impact.
Reference:
This aligns with the SY0-701 Domain 2.0 vulnerability objectives, which explicitly list legacy and end-of-life systems among vulnerability types. Kiosks are a type of specialized system, and running EOL software on them is a major vulnerability. The security implication of missing patches and the inability to remediate vulnerabilities is a core concept in risk management and is emphasized in frameworks like NIST SP 800-40 (Guide to Enterprise Patch Management Planning).
Which of the following exercises should an organization use to improve its incident response process?
A. Tabletop
B. Replication
C. Failover
D. Recovery
Explanation:
A tabletop exercise is a discussion-based session where members of the incident response (IR) team and other key stakeholders (e.g., management, legal, PR) walk through a simulated incident scenario. The goal is to review and validate the incident response plan, identify gaps or ambiguities in procedures, improve communication and coordination among teams, and ensure everyone understands their roles and responsibilities. This type of exercise is specifically designed to improve the process of incident response without the pressure of a real event.
Analysis of Incorrect Options:
B. Replication:
Replication refers to the process of copying data to a secondary location (e.g., for backups or disaster recovery). It is a technical capability for ensuring data availability but is not an exercise designed to improve human-driven processes like incident response.
C. Failover:
Failover is an automated process where operations are switched from a primary system to a redundant or standby system in the event of a failure. Like replication, this is a technical mechanism for maintaining availability and is part of disaster recovery planning, not an IR process improvement exercise.
D. Recovery:
Recovery is a phase within the incident response lifecycle (NIST SP 800-61) where systems are restored and returned to normal operation. It is an action taken during or after an incident, not an exercise used to practice and improve the overall response process.
Reference:
This question falls under Domain 4.0: Security Operations, specifically objective 4.8: Explain appropriate incident response activities. Tabletop exercises are a core component of the Preparation phase of the incident response lifecycle. They are widely recommended by frameworks like NIST to ensure an organization is ready to handle a real incident effectively. Other exercise types include drills (focused on a specific task) and full-scale simulations, but tabletops are the most common for testing and improving the IR process.
Which of the following would be the best ways to ensure only authorized personnel can access a secure facility? (Select two).
A. Fencing
B. Video surveillance
C. Badge access
D. Access control vestibule
E. Sign-in sheet
F. Sensor
Explanation:
The question asks for the best ways to ensure only authorized personnel can access a facility. This requires controls that actively verify identity and authorization before granting access, preventing unauthorized "tailgating."
C. Badge access is correct.
This is a form of electronic access control. An ID badge (often with a smart chip or magnetic stripe) is a credential that positively identifies the holder. When scanned at a door, the system checks the credential against an authorization database to determine if the person is allowed entry at that time and location. This is a direct and effective method for ensuring only authorized personnel gain access.
D. Access control vestibule (Mantrap) is correct.
An access control vestibule is a physical security system with two interlocking doors. An individual must authenticate (e.g., with a badge) to enter the first door. Once inside the small vestibule, the first door must close and lock before the individual can be authenticated again to open the second door. This highly effective design ensures only one person can enter at a time and prevents tailgating (unauthorized individuals following an authorized person inside).
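The interlock behavior can be sketched as a small state machine; the class, badge values, and method names below are hypothetical illustrations of the two-door rule, not a real access control API.

```python
# Minimal sketch of access control vestibule (mantrap) interlock logic:
# the inner door stays shut until the outer door has closed, and each
# door requires its own successful badge authentication. Names hypothetical.

class Vestibule:
    def __init__(self, authorized_badges: set):
        self.authorized = authorized_badges
        self.outer_open = False
        self.inner_open = False

    def badge_outer(self, badge: str) -> bool:
        if badge in self.authorized and not self.inner_open:
            self.outer_open = True
        return self.outer_open

    def close_outer(self) -> None:
        self.outer_open = False  # interlock precondition for the inner door

    def badge_inner(self, badge: str) -> bool:
        # Interlock: the inner door stays shut while the outer door is open.
        if badge in self.authorized and not self.outer_open:
            self.inner_open = True
        return self.inner_open


v = Vestibule({"badge-1234"})
v.badge_outer("badge-1234")         # enter the vestibule
print(v.badge_inner("badge-1234"))  # False: outer door still open
v.close_outer()
print(v.badge_inner("badge-1234"))  # True: interlock satisfied
```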
Why the other options are less effective for ensuring only authorized access:
A. Fencing:
Fencing is a good deterrent and delay mechanism, but it does not actively identify or authorize individuals. It is a perimeter control, not an access control.
B. Video surveillance:
Surveillance is a detective control. It records who accessed an area but does nothing to prevent unauthorized access in real-time. It is used for after-the-fact investigation.
E. Sign-in sheet:
This is an administrative control that relies on honesty and provides no verification. An unauthorized person can easily write a fake name. It offers no physical barrier to entry.
F. Sensor:
Sensors (e.g., motion, light, temperature) are typically detective or monitoring controls. They might alert to presence or an environmental change but cannot identify or authorize personnel to prevent entry.
Reference:
CompTIA Security+ SY0-701 Objective 2.5: "Explain the purpose of mitigation techniques used to secure the enterprise." This objective includes physical security controls like mantraps (access control vestibules) and electronic access systems (badge access) as key methods for protecting secure areas.
A new vulnerability enables a type of malware that allows the unauthorized movement of data from a system. Which of the following would detect this behavior?
A. Implementing encryption
B. Monitoring outbound traffic
C. Using default settings
D. Closing all open ports
Explanation:
The scenario describes malware that exfiltrates data, meaning it moves data out of the system without authorization. The most direct way to detect this behavior is by monitoring outbound traffic. Security tools like a firewall, intrusion detection system (IDS), or data loss prevention (DLP) system can analyze network traffic leaving the organization. They can detect anomalies such as the following (a minimal monitoring sketch appears after this list):
Unusually large data transfers.
Data being sent to suspicious or unauthorized external IP addresses.
Traffic using non-standard ports or protocols for exfiltration.
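Here is the minimal monitoring sketch referenced above; the flow records, allowlist, and volume threshold are hypothetical examples, and production detection would rely on DLP, IDS, or firewall analytics rather than hand-rolled scripts.

```python
# Minimal sketch of outbound-traffic monitoring for exfiltration indicators.
# Flow records, the allowlist, and the threshold are hypothetical examples.
from collections import defaultdict

ALLOWED_DESTINATIONS = {"203.0.113.5"}  # known-good external services
VOLUME_THRESHOLD = 500_000_000          # bytes per destination per day

# (dst_ip, bytes_sent) tuples, e.g., parsed from firewall or NetFlow logs
flows = [("203.0.113.5", 1_200_000), ("198.51.100.77", 750_000_000)]

bytes_per_dst = defaultdict(int)
for dst, sent in flows:
    bytes_per_dst[dst] += sent

for dst, total in bytes_per_dst.items():
    if dst not in ALLOWED_DESTINATIONS and total > VOLUME_THRESHOLD:
        print(f"ALERT: {total} bytes sent to unapproved destination {dst}")
```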
Why not A?
Implementing encryption: Encryption protects the confidentiality of data by making it unreadable if intercepted. However, it does not detect the movement of data; encrypted data can still be exfiltrated without triggering an alert.
Why not C?
Using default settings: Default settings on systems and applications are often insecure and well-known to attackers. Using them might make a system more vulnerable to infection but does not help detect data exfiltration after the malware is already present.
Why not D?
Closing all open ports: While this is a good hardening practice to reduce the attack surface, it is often impractical (e.g., web servers need port 80/443 open). More importantly, sophisticated malware can use allowed ports (like HTTPS on port 443) to blend in with normal traffic. Closing ports is a preventive measure, not a detective one.
Reference:
Domain 4.9: "Given an incident, utilize appropriate data sources to support an investigation." Monitoring network traffic (especially outbound) is a primary data source for detecting indicators of compromise (IOCs), such as data exfiltration. This aligns with the Security+ objective of using continuous monitoring to identify malicious activity.
A network administrator deployed a DNS logging tool that logs suspicious websites that are visited and then sends a daily report based on various weighted metrics. Which of the following best describes the type of control the administrator put in place?
A. Preventive
B. Deterrent
C. Corrective
D. Detective
Explanation: The tool that the network administrator deployed is described as one that logs suspicious websites and sends a daily report based on various weighted metrics. This fits the description of a detective control. Detective controls are designed to identify and log security events or incidents after they have occurred. By analyzing these logs and generating reports, the tool helps in detecting potential security breaches, thus allowing for further investigation and response.
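As a rough illustration of such a detective control, the following sketch scores logged DNS lookups with weighted metrics and prints a daily report; the weights, indicator names, and domains are hypothetical.

```python
# Minimal sketch of the detective control described above: score logged DNS
# lookups with weighted metrics and emit a daily report. Weights, indicator
# names, and domains are hypothetical illustrations.
from collections import Counter

WEIGHTS = {"newly_registered": 3.0, "known_bad_list": 5.0, "high_entropy_name": 2.0}

# Each log entry: (domain, set of matched indicators)
dns_log = [
    ("cdn.example.com", set()),
    ("xk3tq9z.example.net", {"high_entropy_name", "newly_registered"}),
    ("bad.example.org", {"known_bad_list"}),
]

scores = Counter()
for domain, indicators in dns_log:
    scores[domain] += sum(WEIGHTS[i] for i in indicators)

print("Daily suspicious-DNS report")
for domain, score in scores.most_common():
    if score > 0:
        print(f"  {domain}: weighted score {score}")
```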