
Hardening Systems through Security Benchmarks



System hardening is the process of configuring a system to a more secure state. Many technology solutions are not securely configured by default, so system administrators must harden systems while retaining their desired functionality. Thankfully, system administrators do not have to figure out system hardening on their own. Instead, they can reference security benchmarks which describe recommended secure configurations for a system. This article describes how to harden systems through security benchmarks.


System Hardening Workflow

When performing system hardening, consider following these steps:

  1. Select a benchmark
  2. Perform a manual or automated review of the benchmark
  3. Implement desired configuration changes
  4. Create a hardened image from the newly configured system


1. Select a Benchmark

Security benchmarks are lists of recommended configuration settings for a system. They are written in many formats, such as .xlsx, .pdf, .xml, .admx, .md, and more. Reputable benchmarks are often authored by product vendors, government agencies, and industry organizations. Selecting which benchmark to use for a system can be challenging because many different organizations publish security benchmarks, more than one benchmark can exist for a system, and the desired baseline configuration of a system varies based on an organization’s unique needs and risk profile. Despite these challenges, there are several well-regarded organizations whose benchmarks are widely used across the industry.


NIST National Checklist Program (NCP)

NIST’s National Checklist Program (NCP) is a database of publicly available security benchmarks. While the NCP doesn’t publish its own benchmarks, it aggregates some of the best benchmarks made available by other organizations. Around 800 benchmarks are in the NCP covering almost 200 different systems. With easy-to-use search and filter capabilities, the NCP is a great place for system administrators to start their benchmark search.


CIS Benchmarks

The Center for Internet Security (CIS) produces some of the most widely used benchmarks in the industry. Several hundred benchmarks exist for almost 100 different systems. The benchmarks are freely available in .pdf format, although CIS membership is required to download them in .xlsx and .xml formats.


DISA STIGs

Security Technical Implementation Guides (STIGs) are developed by the U.S. Defense Information Systems Agency (DISA), part of the U.S. Department of Defense (DoD). Around 450 STIGs are available for many different systems. While DoD agencies are required to follow STIGs, other organizations can voluntarily choose to follow them, too.

Microsoft Security Baselines

Microsoft Security Baselines define recommended registry settings for the latest versions of Windows and Windows Server.

Cloud Security Baselines

Cloud solutions also have their own security benchmarks authored by a variety of organizations:

  • Microsoft Cloud Security Baselines – define recommended configuration settings in Azure
  • AWS Foundations Benchmark – maintained by CIS and defines recommended configuration settings in AWS
  • GCP Foundations Benchmark – maintained by CIS and defines recommended configuration settings in GCP
  • M365 and GWS Benchmarks – maintained by CISA’s Secure Cloud Business Applications (SCuBA) project and define recommended configuration settings in M365 and GWS

Overall, system administrators using security benchmarks for the first time should probably start with a benchmark from one of the aforementioned organizations. Unless some legal or regulatory obligation demands the use of a particular benchmark, system administrators have leeway when deciding which benchmark to use. For those unsure what benchmarks are available for a system, NIST’s NCP is a great place to start the search.


2. Perform a Manual or Automated Review of the Benchmark

Once a benchmark is selected, it’s time to compare a system’s current configurations against those recommended by the benchmark. This comparison can be performed manually or with the help of an automated tool. Regardless of the method chosen, it’s important to track the results of the comparison, noting where current and recommended configurations match and where they differ. This record keeping establishes a baseline from which decisions to adopt or reject the recommended configurations can be made.
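As a rough sketch of this record keeping, the comparison can be modeled as a walk over the benchmark’s recommended settings, noting matches and differences. The setting names and values below are hypothetical, and real configuration queries vary by platform:

```python
# Hypothetical sketch: compare current settings against benchmark recommendations
# and record where they match or differ. Setting names/values are illustrative.

def compare_to_benchmark(current, recommended):
    """Return a per-setting record of matches and mismatches."""
    results = []
    for setting, expected in recommended.items():
        actual = current.get(setting)  # None if the setting is not configured
        status = "match" if actual == expected else "differ"
        results.append({"setting": setting, "current": actual,
                        "recommended": expected, "status": status})
    return results

current = {"password_min_length": 8, "smbv1_enabled": True, "audit_logon": True}
recommended = {"password_min_length": 14, "smbv1_enabled": False, "audit_logon": True}

results = compare_to_benchmark(current, recommended)
matches = sum(1 for r in results if r["status"] == "match")
print(f"{matches}/{len(results)} settings aligned "
      f"({100 * matches / len(results):.0f}%)")  # → 1/3 settings aligned (33%)
```

The per-setting records double as the documented baseline, and the percentage mirrors the summary figure most automated tools report.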

Manual Review

Performing a manual review of a security benchmark means going line by line through the benchmark’s recommended configurations and comparing them against a system. Manual reviews can be time-consuming, as some benchmarks have hundreds of recommended configurations. When performing a manual review, try to gain access to the benchmark in .xlsx form to enable easy manual tracking of results. Some benchmarks, like the Microsoft Security Baselines, are freely available in .xlsx form, while others, like CIS’s, require membership for that format.
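For tracking, a benchmark exported to CSV can be turned into a simple review sheet by appending columns for the observed value and review status. The column names below are assumptions for illustration; real exports vary by publisher:

```python
# Hypothetical sketch: turn a benchmark exported to CSV into a manual tracking
# sheet by adding "Current Value" and "Status" columns for the reviewer.
import csv
import io

benchmark_csv = """ID,Title,Recommended Value
1.1.1,Enforce password history,24 passwords remembered
1.1.2,Minimum password length,14 characters
"""

reader = csv.DictReader(io.StringIO(benchmark_csv))
tracking_rows = []
for row in reader:
    row["Current Value"] = ""   # filled in during the manual review
    row["Status"] = "pending"   # later: match / differ / not applicable
    tracking_rows.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(tracking_rows[0].keys()))
writer.writeheader()
writer.writerows(tracking_rows)
print(out.getvalue())
```

The same sheet can then be filtered on the Status column to drive the decisions in the next step.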

Automated Review

Thankfully, many tools enable an automated comparison of a system against a benchmark, which greatly speeds up the process. While these tools report results in a variety of formats, they all provide some way of seeing what percentage of a system’s configurations align with a benchmark. Several automated benchmark tools are commonly used throughout the industry.

Vulnerability Management Compliance Scans

Most vulnerability scanning solutions (such as those provided by Nessus, Rapid7, Qualys, etc.) can also perform “compliance scans,” which are automated comparisons of a system against a benchmark. Cloud providers also have their own compliance scanning solutions, such as Amazon Inspector and GCP Kubernetes security posture scans. Compliance scans operate and report results similarly to vulnerability scans, although they check configurations against a benchmark rather than using plugins designed to detect known vulnerabilities.

CIS-CAT Lite and CIS-CAT Pro

CIS’s Configuration Assessment Tool (CIS-CAT) enables automated reviews of CIS Benchmarks. CIS-CAT Lite is freely available, but it can only scan against a few CIS Benchmarks and has limited reporting capabilities. CIS-CAT Pro has more features but requires CIS membership.


DISA SCAP Compliance Checker (SCC)

DISA provides the SCAP Compliance Checker (SCC) to enable the automated comparison of a system against a STIG. The tool is free to use and has extensive reporting capabilities.

Microsoft Security Compliance Toolkit

Microsoft’s Security Compliance Toolkit contains both the Microsoft Security Baselines and a variety of free tools for working with them. For example, the Toolkit includes Policy Analyzer, which compares a system’s settings against the baselines, and LGPO, which enables the import of the recommended registry settings.


CISA ScubaGear and ScubaGoggles

CISA’s SCuBA project contains the free-to-use ScubaGear tool for automated comparisons of M365 applications against the CISA SCuBA M365 benchmark and the ScubaGoggles tool for automated comparisons of GWS applications against the CISA SCuBA GWS benchmark.


3. Implement Desired Configuration Changes

Once a manual or automated review of a benchmark is complete, the results will likely show many current system configurations out of alignment with the benchmark’s recommendations. System administrators should take care when deciding which configuration changes to make. Unless a legal or regulatory obligation demands it, it’s not as simple as blindly following the benchmark and making every recommended change. Given the complexity of modern IT systems and an organization’s unique business needs and risk profile, it may make sense not to implement some recommended configurations because of the impact they would have on desired system functionality. Each recommended configuration should therefore be carefully considered and, if possible, tested to determine its impact. Most benchmarks include rationale statements, remediation instructions, and links to further reading for each recommended configuration to help system administrators understand its impact.

When a final list of desired configuration changes is made, be sure to seek approval via a change management process before implementing the new configurations.
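One lightweight way to support that change-management step is to keep a small decision log per recommendation, recording whether it was adopted and why. A sketch, with hypothetical recommendation IDs and rationales:

```python
# Hypothetical sketch: record an accept/reject decision and rationale for each
# recommended configuration before requesting change-management approval.

decisions = []

def record_decision(rec_id, adopt, rationale):
    """Log whether a benchmark recommendation will be implemented, and why."""
    decisions.append({"id": rec_id, "adopt": adopt, "rationale": rationale})

record_decision("2.3.1", True,  "No impact observed in test environment")
record_decision("2.3.7", False, "Breaks legacy app; compensating control in place")

# Only the accepted recommendations go forward for change approval.
to_implement = [d["id"] for d in decisions if d["adopt"]]
print("Submitting for change approval:", to_implement)  # → ['2.3.1']
```

Keeping the rationale alongside each rejection also gives auditors and future administrators the context for why a setting diverges from the benchmark.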


4. Create a Hardened Image from the Newly Configured System

Now that the system is hardened against a benchmark, its new state should be considered the new baseline. Where possible, system administrators should create an image of the system to use when onboarding similar assets. The hardened image should be periodically compared against a benchmark, such as annually or whenever major benchmark or system changes occur, to verify the baseline remains as secure as possible.
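That periodic comparison can be as simple as diffing current settings against a saved baseline snapshot. The settings below are hypothetical; in practice the snapshot would come from the hardened image and the current values from a scan:

```python
# Hypothetical sketch: detect configuration drift by comparing a system's
# current settings against the saved hardened-baseline snapshot.
import json

# Baseline snapshot, e.g. saved alongside the hardened image.
baseline = json.loads('{"smbv1_enabled": false, "password_min_length": 14}')
current = {"smbv1_enabled": False, "password_min_length": 10}

# Any setting whose current value differs from the baseline counts as drift.
drift = {k: {"baseline": v, "current": current.get(k)}
         for k, v in baseline.items() if current.get(k) != v}

if drift:
    print("Drift detected:", sorted(drift))  # → ['password_min_length']
else:
    print("System matches hardened baseline")
```

Running such a check on a schedule flags assets that have silently fallen out of compliance between full benchmark reviews.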



Overall, system hardening is an important exercise that system administrators should perform. Security benchmarks define commonly approved secure system configurations and can serve as system hardening guides. Benchmark reviews can be manual or automated, and regardless of the method, all findings should be documented. Once the review is complete, system administrators must decide which recommended configurations to adopt based on their impact on desired system functionality. After the secure configurations are implemented, the system should be considered newly baselined, and any similar assets onboarded afterward should conform to the new configurations.


We Can Help

Sedara’s Security Operations Center (SOC) can help you harden your systems. We can perform compliance scans to detect whether your systems are configured securely and advise on which recommended security settings to apply.

