Checks We Perform

Below is a list of all the checks we perform.
Risk Assessment
1.0 Is the scope of the risk assessment defined? Does it focus on any particular department, product, or geography?

Description:

Risk Assessments are mandated by several frameworks, including SOC 1, SOC 2, PCI DSS, ISO 27001, HIPAA, FISMA, and more. Many of these frameworks require that your assessment be based on an industry-standard methodology, such as NIST 800-30, ISO 27005, or OCTAVE.
Risk Assessment
1.6 Does the assessment address operational, financial, regulatory, and technical risk?

Description:

The risk assessment must address the potential for risks in all areas of the organization in order to be truly effective. A comprehensive risk assessment should address operational, financial, regulatory, and technical risk.
Logical Access
1.1 Ensure that corporate login credentials are used

Description:

It is recommended that fully managed corporate Google accounts be used for increased visibility, auditing, and control of access to Cloud Platform resources. Email accounts based outside of the user's organization, such as personal accounts, should not be used for business purposes.
Logical Access
1.1 Avoid Use of the Root Account

Description:

Recommendation 1.1 of the CIS AWS Foundations Benchmark states that because the root account has unrestricted access to all resources in the AWS account, use of this account should be avoided. The root account is the most privileged AWS account. Minimizing the use of this account and adopting the principle of least privilege for access management will reduce the risk of accidental changes and unintended disclosure of highly privileged credentials.

For more information, visit the AWS documentation on AWS root accounts.
Logical Access
1.2 Ensure MFA is Enabled for All IAM Users

Description:

Recommendation 1.2 of the CIS AWS Foundations Benchmark states that because MFA adds an extra layer of protection on top of a user name and password, MFA needs to be enabled for all IAM users with a console password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential.

For more information, visit the AWS documentation on using MFA.
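As a rough sketch, this check can be automated with boto3 (the AWS SDK for Python). The snippet below is illustrative only and assumes credentials with IAM read permissions are configured; a production check would also handle API errors more carefully:

```python
import boto3
from botocore.exceptions import ClientError

# Flag console users (those with a login profile) that have no MFA device.
iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)  # raises if no console password
        except ClientError:
            continue  # no console password, so the MFA recommendation does not apply
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"MFA not enabled for IAM user: {name}")
```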

Logical Access
1.2 Ensure that multi-factor authentication is enabled for all non-service accounts

Description:

Set up multi-factor authentication for Google Cloud Platform accounts.

Multi-factor authentication requires more than one mechanism to authenticate a user. This secures user logins from attackers exploiting stolen or weak credentials. 
Logical Access
1.3 Ensure Credentials Unused for 90+ Days are Disabled

Description:

Recommendation 1.3 of the CIS AWS Foundations Benchmark states that all credentials that have been unused for 90 or more days should be removed or deactivated. Disabling or removing unnecessary credentials reduces the window of opportunity for credentials associated with a compromised or abandoned account to be used.

For more information, visit the AWS documentation for finding unused credentials.
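A minimal sketch of this check using the IAM credential report via boto3; the field names follow the credential report CSV format, and in practice the report may need to be polled until generation completes:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
iam.generate_credential_report()  # in practice, poll until the report is ready
report = iam.get_credential_report()["Content"].decode("utf-8")

cutoff = datetime.now(timezone.utc) - timedelta(days=90)
date_fields = ("password_last_used",
               "access_key_1_last_used_date",
               "access_key_2_last_used_date")

for row in csv.DictReader(io.StringIO(report)):
    for field in date_fields:
        value = row.get(field, "N/A")
        if value not in ("N/A", "no_information") and datetime.fromisoformat(value) < cutoff:
            print(f"{row['user']}: {field} is older than 90 days ({value})")
```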

Logical Access
1.3 Ensure that Security Key Enforcement is enabled for all admin accounts

Description:

Set up Security Key Enforcement for Google Cloud Platform admin accounts.

Google Cloud Platform users with Organization Administrator roles have the highest level of privilege in the organization. These accounts should be protected with the strongest form of two-factor authentication: Security Key Enforcement. Ensure that admins use Security Keys to log in instead of weaker second factors like SMS or one-time passwords (OTP). Security Keys are actual physical keys used to access Google Organization Administrator Accounts. They send an encrypted signature rather than a code, ensuring that logins cannot be phished. 
Logical Access
1.3 Ensure guest users are reviewed on a monthly basis

Description:

Azure AD is extended to include Azure AD B2B collaboration, allowing you to invite people from outside your organization to be guest users in your cloud account and sign in with their own work, school, or social identities. Guest users allow you to share your company's applications and services with users from any other organization, while maintaining control over your own corporate data.

Work with external partners, large or small, even if they don't have Azure AD or an IT department. A simple invitation and redemption process lets partners use their own credentials to access your company's resources as a guest user.
Logical Access
1.4 Ensure IAM Password Policies Require At Least 1 Lowercase Letter, 1 Uppercase Letter, 1 Number, 1 Symbol, and a Minimum Length of 10 Characters

Description:

The CIS AWS Foundations Benchmark explains that password policies need to enforce password complexity requirements in order to increase account resiliency against brute force login attempts. Industry best practice is to ensure that IAM password policies require at least one lowercase letter, one uppercase letter, one number, one symbol, and a minimum length of 10 characters.
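A sketch of how the password policy checks (1.4, 1.5, and 1.13) could be verified together with boto3; note that get_account_password_policy raises an error when no policy exists, which is itself a failing condition:

```python
import boto3

iam = boto3.client("iam")
policy = iam.get_account_password_policy()["PasswordPolicy"]

requirements = {
    "RequireLowercaseCharacters": policy.get("RequireLowercaseCharacters", False),
    "RequireUppercaseCharacters": policy.get("RequireUppercaseCharacters", False),
    "RequireNumbers": policy.get("RequireNumbers", False),
    "RequireSymbols": policy.get("RequireSymbols", False),
    "MinimumPasswordLength >= 10": policy.get("MinimumPasswordLength", 0) >= 10,
    "PasswordReusePrevention >= 24": policy.get("PasswordReusePrevention", 0) >= 24,
    "MaxPasswordAge <= 90": 0 < policy.get("MaxPasswordAge", 0) <= 90,
}
for name, ok in requirements.items():
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
```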

Logical Access
1.4 Ensure that there are only GCP-managed service account keys for each service account

Description:

User-managed service accounts should not have user-managed keys.

Anyone who has access to the keys will be able to access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google keeps the keys and automatically rotates them on an approximately weekly basis. User-managed keys are created, downloadable, and managed by users. They expire 10 years from creation.

For user-managed keys, the user has to take ownership of key management activities, which include:
- Key storage
- Key distribution
- Key revocation
- Key rotation
- Protecting the keys from unauthorized users
- Key recovery

Even with key owner precautions, keys can be easily leaked through common development malpractices, such as checking keys into source code, leaving them in the Downloads directory, or accidentally posting them on support blogs or channels.

It is recommended to prevent the use of user-managed service account keys.
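As an illustrative sketch, user-managed keys can be enumerated with the gcloud CLI; this assumes gcloud is installed and authenticated, and the project ID is a placeholder:

```python
import json
import subprocess

project = "my-project"  # placeholder

def gcloud(args):
    # Run a gcloud command and parse its JSON output.
    result = subprocess.run(["gcloud", *args, "--format=json"],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

for account in gcloud(["iam", "service-accounts", "list", "--project", project]):
    keys = gcloud(["iam", "service-accounts", "keys", "list",
                   "--iam-account", account["email"], "--managed-by=user"])
    if keys:
        print(f"{account['email']} has {len(keys)} user-managed key(s)")
```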
Logical Access
1.5 Ensure that ServiceAccount has no Admin privileges.

Description:

Service accounts represent service-level security of the Resources (application or a VM), which can be determined by the roles assigned to them. Enrolling a ServiceAccount with Admin rights gives full access to an assigned application or a VM. A ServiceAccount access holder can perform critical actions like delete, update, and change settings without user intervention. For this reason, it's recommended that service accounts not have Admin rights.
Logical Access
1.5 Ensure IAM Password Policies Prevent Password Reuse for the Last 24 Passwords

Description:

Industry best practice is to ensure that IAM password policies prevent password reuse for the last 24 passwords. The CIS AWS Foundations Benchmark explains that password policies need to prevent password reuse in order to increase account resiliency against brute force login attempts.

Logical Access
1.6 Ensure IAM Policies are Attached Only to Groups or Roles

Description:

Recommendation 1.16 of the CIS AWS Foundations Benchmark states that by default, IAM users, groups, and roles have no access to AWS resources. IAM policies are how privileges are granted to users, groups, or roles. To that end, IAM policies should only be attached to groups or roles – not users. Assigning privileges at the group or role level reduces the complexity of access management as the number of users grows. Reducing access management complexity may in turn reduce the opportunity for a principal to inadvertently receive or retain excessive privileges.

For more information, visit the AWS documentation for managing IAM policies.
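A sketch of this check in boto3, flagging users with managed policies attached or inline policies embedded directly:

```python
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
        inline = iam.list_user_policies(UserName=name)["PolicyNames"]
        if attached or inline:
            print(f"{name}: {len(attached)} attached, {len(inline)} inline policies")
```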

Logical Access
1.6 Ensure that IAM users are not assigned Service Account User role at project level

Description:

It is recommended to assign the Service Account User (iam.serviceAccountUser) and Service Account Token Creator (iam.serviceAccountTokenCreator) roles to a user for a specific service account rather than assigning the role to a user at project level. 
Logical Access
1.7 Ensure IAM Instance Roles are Used for AWS Resource Access

Description:

Recommendation 1.19 of the CIS AWS Foundations Benchmark states that by ensuring IAM instance roles are used for AWS resource access, you reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it. Additionally, if credentials are encoded into compiled applications or other hard-to-change mechanisms, they are even less likely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.

For more information, visit the AWS documentation for using IAM roles.

Logical Access
1.7 Ensure user-managed/external keys for service accounts are rotated every 90 days or less

Description:

Service Account keys consist of a key ID (Private_key_Id) and Private key, which are used to sign programmatic requests users make to Google Cloud services accessible to that particular service account. It is recommended that all Service Account keys are regularly rotated.
Logical Access
1.8 Ensure that Separation of duties is enforced while assigning service account related roles to users

Description:

It is recommended that the principle of 'Separation of Duties' is enforced while assigning service-account related roles to users.
 
The built-in/predefined IAM role Service Account Admin allows the user/identity to create, delete, and manage service account(s). The built-in/predefined IAM role Service Account User allows the user/identity (with adequate privileges on Compute and App Engine) to assign service account(s) to Apps/Compute Instances.

Separation of duties is the concept of ensuring that one individual does not have all the necessary permissions to complete a malicious action. In Cloud IAM - service accounts, this could be an action such as using a service account to access resources that the user should not normally have access to.

Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.

No user should have Service Account Admin and Service Account User roles assigned at the same time. 
Logical Access
1.8 Ensure IAM Policies that Allow Full "*:*" Administrative Privileges are Not Created

Description:

Following the industry standard of least privilege, recommendation 1.22 of the CIS AWS Foundations Benchmark states that IAM policies that allow full administrative privileges should not be created. The CIS recommends first determining what users need to do and then crafting policies that let the users perform only those tasks, instead of allowing full administrative privileges. It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later. Specifically, IAM policies that have a statement with "Effect": "Allow" with "Action": "*" over "Resource": "*" should be removed.

For more information, visit the AWS documentation for managing IAM policies.
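A sketch of how customer-managed policies could be scanned for full-admin statements with boto3; it checks only each policy's default version:

```python
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for policy in page["Policies"]:
        doc = iam.get_policy_version(
            PolicyArn=policy["Arn"],
            VersionId=policy["DefaultVersionId"])["PolicyVersion"]["Document"]
        statements = doc["Statement"]
        if isinstance(statements, dict):
            statements = [statements]  # a lone statement may not be wrapped in a list
        for stmt in statements:
            if (stmt.get("Effect") == "Allow"
                    and stmt.get("Action") in ("*", ["*"])
                    and stmt.get("Resource") in ("*", ["*"])):
                print(f"Full-admin policy found: {policy['PolicyName']}")
```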

Logical Access
1.9 Ensure S3 IAM Policies Do Not Grant Access to All Buckets

Description:

S3 bucket policies and user policies are what determine the permissions and access to S3. S3 bucket policies are attached at the bucket level and apply to all objects within that bucket. Our recommendation is to ensure that S3 bucket policies do not grant access to all buckets.

For more information, visit the AWS documentation on S3 bucket policy examples.

Logical Access
1.9 Ensure that Cloud KMS cryptokeys are not anonymously or publicly accessible

Description:

It is recommended that the IAM policy on Cloud KMS cryptokeys should restrict anonymous and/or public access.

Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the cryptokey. Such access might not be desirable if sensitive data is protected by the key. In this case, ensure that anonymous and/or public access to a Cloud KMS cryptokey is not allowed.
Logical Access
1.10 Ensure Encryption keys are rotated within a period of 90 days

Description:

Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management. 

The format for the rotation schedule depends on the client library that is used. For the gcloud command-line tool, the next rotation time must be in ISO or RFC3339 format, and the rotation period must be in the form INTEGER[UNIT], where units can be one of seconds (s), minutes (m), hours (h) or days (d). 
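For example, a 90-day rotation schedule can be set with the gcloud CLI; the key, key ring, location, and next-rotation date below are placeholders:

```python
import subprocess

subprocess.run([
    "gcloud", "kms", "keys", "update", "my-key",
    "--keyring", "my-keyring",
    "--location", "us-east1",
    "--rotation-period", "90d",
    "--next-rotation-time", "2025-01-01T00:00:00Z",  # placeholder date
], check=True)
```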
Logical Access
1.10 Ensure an Organization Has Been Created Within AWS to House Accounts

Description:

To securely create a multi-account AWS environment, we recommend housing AWS accounts within an organization. The AWS Organizations feature provides centralized account management and governance. It allows you to group multiple accounts into organizational units (OUs) and, in turn, apply policies to OUs instead of directly to accounts. This type of consolidation becomes extremely valuable as you scale your AWS resources and architecture.
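A sketch of how membership in an organization could be verified with boto3; the specific exception indicates that AWS Organizations is not in use:

```python
import boto3
from botocore.exceptions import ClientError

org = boto3.client("organizations")
try:
    info = org.describe_organization()["Organization"]
    print(f"Account belongs to organization {info['Id']}")
except ClientError as err:
    # AWSOrganizationsNotInUseException means no organization exists yet.
    print(f"No organization configured: {err}")
```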
Logical Access
1.11 Ensure that Separation of duties is enforced while assigning KMS related roles to users

Description:

It is recommended that the principle of 'Separation of Duties' is enforced while assigning KMS related roles to users. 

The built-in/predefined IAM role Cloud KMS Admin allows the user/identity to create, delete, and manage KMS key(s). The built-in/predefined IAM role Cloud KMS CryptoKey Encrypter/Decrypter allows the user/identity (with adequate privileges on concerned resources) to encrypt and decrypt data at rest using an encryption key(s).

The built-in/predefined IAM role Cloud KMS CryptoKey Encrypter allows the user/identity (with adequate privileges on concerned resources) to encrypt data at rest using an encryption key(s). The built-in/predefined IAM role Cloud KMS CryptoKey Decrypter allows the user/identity (with adequate privileges on concerned resources) to decrypt data at rest using an encryption key(s). 

Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud KMS, this could be an action such as using a key to access and decrypt data a user should not normally have access to. Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice. 

No user(s) should have Cloud KMS Admin and any of the Cloud KMS CryptoKey Encrypter/Decrypter, Cloud KMS CryptoKey Encrypter, Cloud KMS CryptoKey Decrypter roles assigned at the same time. 
Logical Access
1.12 Ensure API keys are not created for a project

Description:

Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use the standard authentication flow instead.

Security risks involved in using API keys appear below:
- API keys are simple encrypted strings
- API keys do not identify the user or the application making the API request
- API keys are typically accessible to clients, making it easy to discover and steal an API key

To avoid these security risks, it is recommended to use the standard authentication flow instead.
Logical Access
1.12 Ensure Access to EBS Snapshots is Restricted

Description:

AWS does allow you to make your EBS snapshots public, but we recommend restricting access to EBS snapshots by making them private, unless you have a specific business need that requires this. These point-in-time snapshots are crucial to your data backup and recovery processes; you don’t want to risk public access or permissions misconfigurations here.

For more information, visit the AWS documentation on EBS snapshots.   
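A sketch of this check with boto3, flagging snapshots whose create-volume permission is granted to the "all" group (i.e., public):

```python
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        perms = ec2.describe_snapshot_attribute(
            SnapshotId=snapshot["SnapshotId"],
            Attribute="createVolumePermission")["CreateVolumePermissions"]
        if any(p.get("Group") == "all" for p in perms):
            print(f"Public snapshot: {snapshot['SnapshotId']}")
```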
Logical Access
1.13 Ensure IAM Password Policy Expires Password Within 90 Days or Less

Description:

Recommendation 1.11 of the CIS AWS Foundations Benchmark states that IAM password policies can require passwords to be rotated or expired after a specified number of days. It's recommended that the password policy expire passwords after 90 days or less. Decreasing the password lifetime increases account resiliency against brute force login attempts. Additionally, requiring regular password changes helps mitigate scenarios such as stolen passwords, reuse of passwords across accounts, and compromised end users.
Logical Access
1.13 Ensure API keys are restricted to use by only specified Hosts and Apps

Description:

Unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API key usage to trusted hosts, HTTP referrers, and apps.

Security risks involved in using API keys appear below:
- API keys are simple encrypted strings
- API keys do not identify the user or the application making the API request
- API keys are typically accessible to clients, making it easy to discover and steal an API key

In light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.

In order to reduce attack vectors, API keys can be restricted to trusted hosts, HTTP referrers, and applications.
Logical Access
1.14 Ensure No Root Account Access Key Exists

Description:

Recommendation 1.12 of the CIS AWS Foundations Benchmark recommends that all access keys associated with the root account be removed. The root account is the most privileged user in the AWS account, and access keys provide programmatic access to a given account. Removing the access keys associated with the root account limits the vectors by which attackers can act if the account is compromised. Additionally, removing the root access keys encourages the creation and use of role-based accounts that are least privileged.
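Both root-account conditions (this check and check 1.16) surface in the IAM account summary, as the boto3 sketch below illustrates:

```python
import boto3

iam = boto3.client("iam")
summary = iam.get_account_summary()["SummaryMap"]

print("Root access keys present:", bool(summary.get("AccountAccessKeysPresent")))
print("Root MFA enabled:", bool(summary.get("AccountMFAEnabled")))
```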
Logical Access
1.14 Ensure API keys are restricted to only the APIs that the application needs access to

Description:

API keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API keys to use (call) only the APIs required by an application.

Security risks involved in using API keys are below:
- API keys are simple encrypted strings
- API keys do not identify the user or the application making the API request
- API keys are typically accessible to clients, making it easy to discover and steal an API key

In light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.

In order to reduce attack surface by providing least privilege, API keys can be restricted to use (call) only the APIs required by an application.
Logical Access
1.15 Ensure API keys are rotated every 90 days

Description:

It is recommended to rotate API keys every 90 days.

Security risks involved in using API keys are listed below:
- API keys are simple encrypted strings
- API keys do not identify the user or the application making the API request
- API keys are typically accessible to clients, making it easy to discover and steal an API key

Because of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.

Once a key is stolen, it has no expiration, meaning it may be used indefinitely unless the project owner revokes or regenerates the key. Rotating API keys reduces the window of opportunity for an access key associated with a compromised or terminated account to be used. API keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.
Logical Access
1.16 Ensure MFA is Enabled for the "root" Account

Description:

Recommendation 1.13 of the CIS AWS Foundations Benchmark states that the root account is the most privileged user in an AWS account. MFA adds an additional layer of security on top of a username and password. When a user signs in to an AWS website or account with MFA enabled, they will be prompted for their username and password, followed by an authentication code from their AWS MFA device.

Note:
When virtual MFA is used for root accounts, make sure NOT to use a personal device, but rather a dedicated mobile device (tablet/phone) that is kept secure and independent of any other devices.

 
Logical Access
1.17 Ensure Hardware MFA is Enabled for the "root" Account

Description:

Recommendation 1.14 of the CIS AWS Foundations Benchmark states that the root account is the most privileged user in an AWS account. MFA adds an additional layer of security on top of a username and password. When a user signs in to an AWS website or account with MFA enabled, they will be prompted for their username and password, followed by an authentication code from their AWS MFA device. For Level 2, it is recommended that the root account be protected with a hardware MFA device.
Logical Access
1.21 Ensure that no custom subscription owner roles are created

Description:

Subscription ownership should not include permission to create custom owner roles. The principle of least privilege should be followed and only necessary privileges should be assigned instead of allowing full administrative access.
Logical Access
2.15 Ensure that 'All users with the following roles' is set to 'Owner'

Description:

Enable security alert emails to subscription owners.
Logical Access
4.1 Ensure that instances are not configured to use the default compute engine service account with full access to all Cloud APIs

Description:

It is recommended to configure your instance to not use the default Compute Engine service account because it has the Editor role on the project. 
Logical Access
4.2 Ensure that instances are not configured to use the default service account with full access to all Cloud APIs

Description:

To support the principle of least privilege and prevent potential privilege escalation, it is recommended that instances not be assigned the Compute Engine default service account with the scope 'Allow full access to all Cloud APIs'.
Logical Access
4.4 Ensure that Azure Active Directory Admin is configured

Description:

Use Azure Active Directory Authentication for authentication with SQL Database.
Logical Access
6.1.1 Ensure that MySQL database instance does not allow anyone to connect with administrative privileges.

Description:

It is recommended to set a password for the administrative user (root by default) to prevent unauthorized access to the SQL database instances. 

This recommendation is applicable only for MySQL Instances. PostgreSQL does not offer any setting for No Password from the cloud console. 
Logical Access
8.5 Enable role-based access control (RBAC) within Azure Kubernetes Services

Description:

Ensure that RBAC is enabled on all Azure Kubernetes Services instances.
Logical Access
9.4 Ensure the web app has 'Client Certificates (Incoming client certificates)' set to 'On'

Description:

Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app.
Logical Access
9.5 Ensure that Register with Azure Active Directory is enabled on App Service

Description:

Managed service identity in App Service makes the app more secure by eliminating secrets from the app, such as credentials in connection strings. When the app service is registered with Azure Active Directory, the app can connect to other Azure services securely without the need for usernames and passwords.
Risk Assessment Standards
1.1 Does the assessment follow an industry standard methodology? (e.g., NIST 800-30, ISO 27005, OCTAVE)

Description:

Risk Assessments are mandated by several frameworks, including SOC 1, SOC 2, PCI DSS, ISO 27001, HIPAA, FISMA, and more. Many of these frameworks require that your assessment be based on an industry-standard methodology, such as NIST 800-30, ISO 27005, or OCTAVE.
Risk Assessment Standards
1.5 Was the risk of fraud assessed in order to satisfy the requirements of the COSO Internal Control framework?

Description:

Internal Control best practices require the organization to assess the risk of fraud. This could include the risk of incentives, pressures, attitudes, and rationalizations that could influence someone to commit fraud. When a service organization undergoes a SOC 1 or SOC 2 audit, auditors will inspect the risk assessment to determine whether it incorporates the COSO Internal Control framework requirement, which states, “the entity considers the potential for fraud in assessing risks to the achievement of objectives.”
Policy Administration
1.1 Is the scope of the information security policy defined? Does it focus on particular departments, products, or geography?

Description:

An information security policy is key to an effective security compliance management program. The policy should describe the security standards with which the organization intends to comply. It is important to define the scope of your policy according to business needs, organizational structure, and compliance goals to ensure relevant aspects are not excluded.
Policy Administration
1.3 Has the policy been reviewed and approved by an appropriate board, executive, or committee? Is there executive endorsement?

Description:

The ultimate responsibility for policy falls on the board and executive management. The policy should document who has the responsibility to review and approve. Review and approval should occur at least annually to ensure your policies are relevant, accurate and current.
Policy Administration
1.4 Does the policy specify how it is distributed and communicated to relevant stakeholders? Is an acknowledgment required?

Description:

Once policies are clearly defined and approved, they must be distributed to stakeholders in a way that is accessible, understandable, and trackable. Capturing the reader's acknowledgement is a must for legal and compliance purposes.
Policy Administration
1.6 Does the policy assign responsibility for the information security program to an individual, such as a security officer?

Description:

Security compliance management requires leadership and clear communication with stakeholders throughout the organization. Your information security policy should assign responsibility for the security program to at least one of the following roles:
- A leader with authority to sponsor security compliance projects. This may be an executive or a security compliance steering team with executive support.
- A compliance manager or managers with information security expertise. The compliance manager is responsible for overseeing compliance projects that integrate security compliance throughout the business.
Policy Administration
1.7 Does the policy outline operationally-independent security reporting lines? Can the security officer communicate directly with the board?

Description:

Separation of duties for compliance and security personnel brings accountability to operational functions. When security or compliance reports to technology leadership, conflicts of interest may arise. At a minimum, a dotted line directly to the board is necessary to communicate findings that compromise the organization's objectives for compliance and results.
Asset Inventory
1.2 Has an asset inventory been performed to identify critical assets, such as people, locations, systems, data, and processes?

Description:

The first step in conducting a risk assessment is to identify and inventory your organization’s assets, which include your organization’s hardware, software, system interfaces, data, physical locations, personnel, cloud technologies, business processes, etc.
Laws and Regulations
1.2 Does the policy reference laws, regulations, or standards that impact the organization? (e.g., GLBA, HIPAA, GDPR, PCI DSS)

Description:

A security compliance program can be impacted by the regulatory or industry requirements facing the business. Specific control requirements, such as those found in frameworks like HIPAA or PCI DSS, should be included in your policies and procedures.
Laws and Regulations
1.12 Does the policy address security requirements for vendors, including due diligence and monitoring?

Description:

Effective compliance management includes the process by which organizations understand and control the risk associated with vendors, third parties, or business partners. While you can outsource processes, you can never outsource responsibility. Your policy should require vendors to understand and follow your requirements and support you in the achievement of compliance objectives.
Likelihood and Impact
1.3 Has an assessment been completed to establish the impact of threats realized? (e.g., Low, Moderate, High)

Description:

The impact of a threat can be described in terms of loss or degradation of one, or a combination, of the following three security goals: confidentiality, integrity, and availability. This review will evaluate how you quantify or qualify the impact.
Vulnerability and Threats
1.4 Is there a formal, documented risk register that ranks risk according to quantitative or qualitative measures?

Description:

Organizations should document and rank risk within an official risk register. This register should consider the vulnerability and impact to each asset for each threat realized. The resulting ranking provides management with a prioritized approach for budget and resources.
Vulnerability and Threats
1.9 Does the assessment include the risk posed by third parties and dependencies in systems, processes, or locations?

Description:

Organizations need to establish requirements for vendor and business partner engagements that ensure the risks associated with these partners are addressed and planned for. If your third-party partner experiences a threat or breach, it may impact you as well. Your organization should include the risk posed by third parties in the formal risk assessment.
Personnel Issues
1.5 Does the policy specify that discipline is possible for failure to comply?

Description:

Effective internal control involves accountability. Your policies should define standards and allow for corrective measures when employees deviate from expectations. Articulating the progressive methods of discipline brings teeth to the policy.
Personnel Issues
1.8 How does the policy address security responsibilities for specific positions? Does it address security in development, for example?

Description:

Security is everyone's responsibility, but day-to-day security procedures are determined by the respective role. Security responsibilities should be communicated specifically for how they apply to a given role. Many frameworks require specialized training for incident response teams, call center personnel, etc.
Personnel Issues
1.9 Does the policy address security requirements for WFH and remote access capabilities?

Description:

With the increase of remote workplaces comes a number of policies that need to be updated to encourage productivity, security, and efficiency. Whether it's desktops, laptops, tablets, or smartphones, employees must have a clear and thorough understanding of how they should use personal or company devices securely. The information security policy that you've developed for your company should be adjusted to fit the needs of your remote employees by providing a deeper focus on remote security. Defining access controls for remote employees is critical in environments the organization does not fully control.
Personnel Issues
1.11 How does the policy address personally owned devices accessing company systems and data?

Description:

You should define a Bring-Your-Own-Device (BYOD) policy for how, when, and why personally owned devices access company systems and data. These policies should clearly articulate to the end user the possibility of confiscation and/or investigation, and the legal rights to the digital records kept on the device.
Personnel Issues
1.13 Does the policy require security training for the workforce? What about job specific training, such as privacy, incident handling, or development?

Description:

Regular training is critical for maintaining awareness of expectations, trends, and threats facing your environment. A security awareness training program is one component that inspires and educates your employees to recognize and defend against threats. Additional types of training to include in policy are data handling procedures, incident handling, secure software development, and other role-specific topics.
Risk Treatment
1.7 Has a formal, documented risk treatment plan been developed to include current and planned controls to achieve acceptable risk levels?

Description:

After risks have been identified and assessed for importance and likelihood, a risk management action plan should be created. This formal, documented plan serves as the guide your organization will follow to mitigate any identified risks to achieve acceptable risk levels. You should develop security control recommendations to either mitigate, transfer, accept, or avoid the risk. These controls must be tested regularly to ensure they are suitably designed for your organization’s risks and that they are operating effectively. Your risk treatment plan can be utilized to track remediation efforts and adjust the controls when necessary.
Risk Treatment
1.8 Has a formal, documented Statement of Applicability (SoA) been written to identify framework controls that do or do not apply to the environment?

Description:

A formal, documented Statement of Applicability (SoA) should be written to identify the specific framework controls that do or do not apply to the environment. Justification should be documented when risk evaluation determines that a control does not apply.
Control Requirements
1.10 Does the policy define acceptable use of critical systems and company assets?

Description:

Your usage policies should detail acceptable uses of the technology at your organization. Acceptable use policies normally have users agree to not use the services for illegal purposes, not attempt to harm the security of the technology or system, and to report any suspicious activity. Many frameworks require acceptable use be communicated to protect the organization from misuse and unintentional outcomes.
Control Requirements
1.14 Does the policy require specific controls, both physical and logical, when accessing critical systems and environments?

Description:

Organizations must align their policies with the control frameworks applicable to their environment. Specific objectives should be communicated by policy so that procedures can be developed to support the expectation. For example, have you implemented a policy to require multi-factor authentication? Ultimately, physical and logical control requirements should be defined so that compliance goals are met.
Control Requirements
1.15 Does the policy provide guidance for password creation, change, and best practices?

Description:

Your information security program should address how your organization expects passwords to be managed. For example, do you have password policy enforcement? Do you have a password reset process? Do you allow longer change intervals because there is a password breach monitoring process? Employees should clearly understand how to manage their passwords and make changes when compromises occur.
Control Requirements
1.16 Does the policy require audit logs and monitoring for anomalies for all critical systems?

Description:

Policies should require audit and accountability requirements in the form of system logs. Personnel deploying systems and applications should understand the organization’s requirement for log generation, centralization, and alerting to ensure proper configuration. Personnel responsible for monitoring systems need policies and procedures to define what is expected to be monitored and how log reviews occur. This documentation is integral to your network monitoring and incident response programs.
Control Requirements
1.17 Does the policy specify standards for configuring critical systems?

Description:

Manufacturer hardening standards and industry best practices are great benchmarks for system configuration. A good policy requires alignment with these industry standards, and procedures guide the implementation steps for maintaining secure configurations.
Control Requirements
1.18 Does the policy address expectations for removable media usage, logging, and tracking?

Description:

It is important to communicate requirements in policies, so personnel are clear on how to protect against data loss. Empower your people to understand when and how removable media can be used. Instill the need for logging and tracking its use for compliance purposes to reduce the potential for data loss.
Control Requirements
1.19 Does the policy provide guidance for when and how to apply protections to sensitive data, such as encryption standards?

Description:

Your personnel hold the keys to protecting data during storage and transmission. Educating them on proper protection methodologies, such as encryption, is a must. Your policies and procedures should guide them on how and when to apply protections to protect against data loss.
Control Requirements
1.20 Does the policy specify security requirements when acquiring or developing new systems?

Description:

Any new system or application that is acquired or developed should be onboarded to the organization's security standards. Your policies should guide the personnel responsible for selecting these systems to ensure they can comply with the policies. The development team should follow policy to incorporate security in development. Testing strategies should be included in your policies so new systems are required to go through an evaluation to ensure proper security configuration.
Management Control
1.11 Ensure Users with Access Keys Enabled Require MFA for API Calls

Description:

Using IAM policies with MFA conditions, you can ensure that MFA is required for users who are able to make API calls. This will add an additional layer of protection and accountability before a user is allowed to perform sensitive API operations.
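As an illustration, the policy statement below denies a sensitive action unless the caller authenticated with MFA; the action list is a placeholder you would tailor to the operations you consider sensitive:

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:TerminateInstances"],  # illustrative action
        "Resource": "*",
        "Condition": {
            # Deny when the request was not authenticated with MFA.
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```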
Configuration Management
1.22 Ensure Security Defaults is enabled on Azure Active Directory

Description:

Security defaults in Azure Active Directory (Azure AD) make it easier to be secure and help protect your organization. Security defaults contain preconfigured security settings for common attacks.

Microsoft is making security defaults available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You turn on security defaults in the Azure portal.
Configuration Management
3.1 Ensure the default network does not exist in a project

Description:

To prevent the use of the default network, a project should not have a default network.
Configuration Management
3.2 Ensure legacy networks do not exist for a project

Description:

In order to prevent use of legacy networks, a project should not have a legacy network configured. 

Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. The network is global in scope and spans all cloud regions. Subnetworks cannot be created in a legacy network, and a legacy network cannot be switched to an auto or custom subnet network. Legacy networks can have an impact on high-network-traffic projects and are subject to a single point of contention or failure.
Configuration Management
3.3 Ensure that DNSSEC is enabled for Cloud DNS

Description:

Cloud Domain Name System (DNS) is a fast, reliable, and cost-effective domain name system that powers millions of domains on the internet. Domain Name System Security Extensions (DNSSEC) in Cloud DNS enables domain owners to take easy steps to protect their domains against DNS hijacking, man-in-the-middle, and other attacks.
Configuration Management
3.4 Ensure that RSASHA1 is not used for key-signing key in Cloud DNS DNSSEC

Description:

DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong. 
Configuration Management
3.5 Ensure that RSASHA1 is not used for zone-signing key in Cloud DNS DNSSEC

Description:

DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for zone signing should be a recommended one and it should be strong.
Configuration Management
3.5 Ensure that 'Public access level' is set to Private for blob containers

Description:

Disable anonymous access to blob containers and disallow blob public access on storage account.
Configuration Management
3.6 Ensure default network access rule for Storage Accounts is set to 'Deny'

Description:

Restricting default network access helps to provide a new layer of security, since storage accounts accept connections from clients on any network. To limit access to selected networks, the default action must be changed.
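A sketch of the remediation using the Azure CLI; the resource group and storage account names are placeholders, and az is assumed to be installed and logged in:

```python
import subprocess

subprocess.run([
    "az", "storage", "account", "update",
    "--resource-group", "my-rg",       # placeholder
    "--name", "mystorageaccount",      # placeholder
    "--default-action", "Deny",
], check=True)
```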
Configuration Management
3.6 Ensure that SSH access is restricted from the internet

Description:

GCP Firewall Rules are specific to a VPC Network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. 

Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to a VPC or VM instance over SSH on port 22 should be avoided.
Configuration Management
3.7 Ensure that RDP access is restricted from the internet

Description:

GCP Firewall Rules are specific to a VPC Network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. 

Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to a VPC or VM instance over RDP on port 3389 should be avoided.
Configuration Management
3.9 Ensure no HTTPS or SSL proxy load balancers permit SSL policies with weak cipher suites

Description:

Secure Sockets Layer (SSL) policies determine what Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or (c) a CUSTOM profile that does not support any of the following features:
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_3DES_EDE_CBC_SHA
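As a sketch, a compliant policy can be created with the gcloud CLI by pinning the MODERN profile to a minimum of TLS 1.2; the policy name is a placeholder:

```python
import subprocess

subprocess.run([
    "gcloud", "compute", "ssl-policies", "create", "modern-tls12-policy",
    "--profile", "MODERN",
    "--min-tls-version", "1.2",
], check=True)
```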
Configuration Management
4.3 Ensure 'Block Project-wide SSH keys' is enabled for VM instances

Description:

It is recommended to use instance-specific SSH key(s) instead of common/shared project-wide SSH key(s) to access instances.

Project-wide SSH keys are stored in Compute/Project-meta-data. Project-wide SSH keys can be used to log in to all instances within a project. Using project-wide SSH keys eases SSH key management, but if compromised, they pose a security risk that can impact all instances within the project. It is recommended to use instance-specific SSH keys, which can limit the attack surface if the SSH keys are compromised.
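For a single instance, project-wide keys can be blocked through instance metadata; the instance and zone names below are placeholders:

```python
import subprocess

subprocess.run([
    "gcloud", "compute", "instances", "add-metadata", "my-instance",
    "--zone", "us-east1-b",
    "--metadata", "block-project-ssh-keys=TRUE",
], check=True)
```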
Configuration Management
4.4 Ensure oslogin is enabled for a Project

Description:

Enabling OS login binds SSH certificates to IAM users and facilitates effective SSH certificate management. 
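A sketch of enabling OS Login project-wide through project metadata, assuming the gcloud CLI is authenticated against the target project:

```python
import subprocess

subprocess.run([
    "gcloud", "compute", "project-info", "add-metadata",
    "--metadata", "enable-oslogin=TRUE",
], check=True)
```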
Configuration Management
4.5 Ensure 'Enable connecting to serial ports' is not enabled for VM Instance

Description:

Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support. 

If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore interactive serial console support should be disabled. 
Configuration Management
4.6 Ensure that IP forwarding is not enabled on Instances

Description:

A Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.

Forwarding of data packets should be disabled to prevent data loss or information disclosure. 
Configuration Management
4.7 Ensure VM disks for critical VMs are encrypted with Customer-Supplied Encryption Keys (CSEK)

Description:

Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. If you supply your own encryption keys, Google uses your key to protect the Google-generated keys used to encrypt and decrypt your data. By default, Google Compute Engine encrypts all data at rest. Compute Engine handles and manages this encryption for you without any additional actions on your part. However, if you want to control and manage this encryption yourself, you can provide your own encryption keys.
Configuration Management
4.8 Ensure Compute instances are launched with Shielded VM enabled

Description:

To defend against advanced threats and ensure that the boot loader and firmware on your VMs are signed and untampered, it is recommended that Compute instances are launched with Shielded VM enabled. 
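As an illustration, the Shielded VM options can be enabled at instance creation; the instance name, zone, and image below are placeholders:

```python
import subprocess

subprocess.run([
    "gcloud", "compute", "instances", "create", "my-shielded-vm",
    "--zone", "us-east1-b",
    "--image-family", "debian-11",     # placeholder image
    "--image-project", "debian-cloud",
    "--shielded-secure-boot",
    "--shielded-vtpm",
    "--shielded-integrity-monitoring",
], check=True)
```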
Configuration Management
5.1 Ensure that Cloud Storage bucket is not anonymously or publicly accessible

Description:

It is recommended that the IAM policy on a Cloud Storage bucket does not allow anonymous or public access.

Allowing anonymous or public access grants permissions to anyone to access bucket content. Such access might not be desired if you are storing any sensitive data. Hence, ensure that anonymous or public access to a bucket is not allowed. 
Configuration Management
5.1.3 Ensure the storage container storing the activity logs is not publicly accessible

Description:

The storage account container containing the activity log export should not be publicly accessible.
Configuration Management
5.2 Ensure that Cloud Storage buckets have uniform bucket-level access enabled

Description:

It is recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources. 

Cloud Storage offers two systems for granting users permission to access your buckets and objects: Cloud Identity and Access Management (Cloud IAM) and Access Control Lists (ACLs). These systems act in parallel - in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. Cloud IAM is used throughout Google Cloud and allows you to grant a variety of permissions at the bucket and project levels. ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis. 

In order to support a uniform permissioning system, Cloud Storage has uniform bucket-level access. Using this feature disables ACLs for all Cloud Storage resources: access to Cloud Storage resources is then granted exclusively through Cloud IAM. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either.
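A sketch of the remediation with gsutil; the bucket name is a placeholder:

```python
import subprocess

subprocess.run(
    ["gsutil", "uniformbucketlevelaccess", "set", "on", "gs://my-bucket"],
    check=True)
```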
Configuration Management
6.1 Ensure that RDP access is restricted from the internet

Description:

Disable RDP access on network security groups from the Internet.
Configuration Management
6.1.2 Ensure 'skip_show_database' database flag for Cloud SQL MySQL instance is set to 'on'

Description:

It is recommended to set the skip_show_database database flag for a Cloud SQL MySQL instance to 'on'.

The skip_show_database database flag prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege. This can improve security if you have concerns about users being able to see databases belonging to other users. Its effect depends on the SHOW DATABASES privilege: if the variable value is ON, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. If the value is OFF, SHOW DATABASES is permitted to all users, but displays the names of only those databases for which the user has the SHOW DATABASES or other privilege. This recommendation is applicable to MySQL database instances.
Configuration Management
6.1.3 Ensure that the 'local_infile' database flag for a Cloud SQL Mysql instance is set to 'off'

Description:

The local_infile flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the local_infile setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side. 

To explicitly cause the server to refuse LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start mysqld with local_infile disabled. local_infile can also be set at runtime. 

Due to security issues associated with the local_infile flag, it is recommended to disable it. This recommendation is applicable to MySQL database instances. 
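A sketch of the remediation with the gcloud CLI; the instance name is a placeholder. Note that --database-flags replaces the instance's entire flag set, so any existing flags must be included, and patching flags may restart the instance:

```python
import subprocess

subprocess.run([
    "gcloud", "sql", "instances", "patch", "my-mysql-instance",
    "--database-flags", "local_infile=off",
], check=True)
```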
Configuration Management
6.2 Ensure that SSH access is restricted from the internet

Description:

Disable SSH access on network security groups from the Internet.
Configuration Management
6.2.1 Ensure that the 'log_checkpoints' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling log_checkpoints causes checkpoints and restart points to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them. This parameter can only be set in the postgresql.conf file or on the server command line. This recommendation is applicable to PostgreSQL database instances. 
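The same gcloud patch pattern sketched above for MySQL flags applies to the PostgreSQL logging flags in 6.2.1 through 6.2.15; the instance name is a placeholder, and remember that --database-flags replaces the full flag set:

```python
import subprocess

subprocess.run([
    "gcloud", "sql", "instances", "patch", "my-postgres-instance",
    "--database-flags", "log_checkpoints=on",
], check=True)
```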
Configuration Management
6.2.2 Ensure 'log_error_verbosity' database flag for Cloud SQL PostgreSQL instance is set to 'DEFAULT' or stricter

Description:

The log_error_verbosity flag controls the verbosity/details of messages logged. Valid values are:

  • TERSE
  • DEFAULT
  • VERBOSE

TERSE excludes the logging of DETAIL, HINT, QUERY, and CONTEXT error information.
VERBOSE output includes the SQLSTATE error code, source code file name, function name, and line number that generated the error.
Ensure the value is set to 'DEFAULT' or stricter.
Configuration Management
6.2.3 Ensure that the 'log_connections' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_connections setting causes each attempted connection to the server to be logged, along with successful completion of client authentication. This parameter cannot be changed after the session starts. 
Configuration Management
6.2.4 Ensure that the 'log_disconnections' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_disconnections setting logs the end of each session, including the session duration. 

PostgreSQL does not log session details such as duration and session end by default. Enabling the log_disconnections setting creates log entries at the end of each session, which can be useful in troubleshooting issues and determining unusual activity across a time period. The log_disconnections and log_connections flags work hand in hand, and generally the pair would be enabled or disabled together. This recommendation is applicable to PostgreSQL database instances.
Configuration Management
6.2.5 Ensure 'log_duration' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_duration setting causes the duration of each completed statement to be logged. This does not log the text of the query and thus behaves differently from the log_min_duration_statement flag. This parameter cannot be changed after the session starts.
Configuration Management
6.2.6 Ensure that the 'log_lock_waits' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_lock_waits flag for a PostgreSQL instance creates a log for any session waits that take longer than the allotted deadlock_timeout time to acquire a lock. 
Configuration Management
6.2.7 Ensure 'log_statement' database flag for Cloud SQL PostgreSQL instance is set appropriately

Description:

The value of the log_statement flag determines which SQL statements are logged. Valid values are:

  • none
  • ddl
  • mod
  • all

The value ddl logs all data definition statements. The value mod logs all ddl statements, plus data-modifying statements.
Statements are logged after basic parsing is done and the statement type is determined, so this does not log statements with errors. When using the extended query protocol, logging occurs after an Execute message is received, and values of the Bind parameters are included.
A value of 'ddl' is recommended unless otherwise directed by your organization's logging policy. 
Configuration Management
6.2.8 Ensure 'log_hostname' database flag for Cloud SQL PostgreSQL instance is set appropriately

Description:

PostgreSQL logs only the IP address of the connecting hosts. The log_hostname flag controls the logging of hostnames in addition to the IP addresses logged. The performance hit is dependent on the configuration of the environment and the host name resolution setup. This parameter can only be set in the postgresql.conf file or on the server command line.
Configuration Management
6.2.9 Ensure 'log_parser_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The PostgreSQL parser is responsible for parsing and verifying the syntax of each query received by the server. If the syntax is correct, a parse tree is built; otherwise an error is generated. The log_parser_stats flag controls the inclusion of parser performance statistics in the PostgreSQL logs for each query. 
Configuration Management
6.2.10 Ensure 'log_planner_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The same SQL query can be executed in multiple ways and still produce the same result. The PostgreSQL planner/optimizer is responsible for creating an optimal execution plan for each query. The log_planner_stats flag controls the inclusion of PostgreSQL planner performance statistics in the PostgreSQL logs for each query. 
Configuration Management
6.2.11 Ensure 'log_executor_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The PostgreSQL executor is responsible for executing the plan handed over by the PostgreSQL planner. The executor processes the plan recursively to extract the required set of rows. The log_executor_stats flag controls the inclusion of PostgreSQL executor performance statistics in the PostgreSQL logs for each query. 
Configuration Management
6.2.12 Ensure 'log_statement_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The log_statement_stats flag controls the inclusion of end-to-end performance statistics for each SQL query in the PostgreSQL logs. It cannot be enabled together with the per-module statistics flags (log_parser_stats, log_planner_stats, log_executor_stats). 
Configuration Management
6.2.13 Ensure that the 'log_min_messages' database flag for Cloud SQL PostgreSQL instance is set appropriately

Description:

The log_min_messages flag defines the minimum severity level at which messages are written to the server log. Valid values include DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each severity level includes all of the levels that follow it. 

Note: To effectively turn off logging failing statements, set this parameter to PANIC. 

ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy. 
Configuration Management
6.2.14 Ensure 'log_min_error_statement' database flag for Cloud SQL PostgreSQL instance is set to 'Error' or stricter

Description:

The log_min_error_statement flag defines the minimum message severity level that is considered an error statement. Messages for error statements are logged with the SQL statement. Valid values include DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each severity level includes all of the levels that follow it. Ensure a value of ERROR or stricter is set. 
Configuration Management
6.2.15 Ensure that the 'log_temp_files' database flag for Cloud SQL PostgreSQL instance is set to '0' (on)

Description:

PostgreSQL can create temporary files for actions such as sorting, hashing, and temporary query results when these operations exceed work_mem. The log_temp_files flag controls the logging of temporary file names and sizes when the files are deleted. Configuring log_temp_files to 0 causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. A value of -1 disables temporary file information logging. 
Configuration Management
6.2.16 Ensure that the 'log_min_duration_statement' database flag for Cloud SQL PostgreSQL instance is set to '-1' (disabled)

Description:

The log_min_duration_statement flag sets a threshold in milliseconds: any statement that runs for at least that long has its total duration logged. Ensure that log_min_duration_statement is disabled, i.e., set to a value of -1.
Configuration Management
6.3 Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP)

Description:

Ensure that no SQL Databases allow ingress from 0.0.0.0/0 (ANY IP).
Configuration Management
6.3.1 Ensure 'external scripts enabled' database flag for Cloud SQL SQL Server instance is set to 'off'

Description:

It is recommended to set the external scripts enabled database flag for Cloud SQL SQL Server instances to off.

The external scripts enabled flag enables the execution of scripts with certain remote language extensions. This property is OFF by default. When Advanced Analytics Services is installed, setup can optionally set this property to true. Because the External Scripts Enabled feature allows scripts external to SQL, such as files located in an R library, to be executed, it could adversely affect the security of the system and should therefore be disabled. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.2 Ensure that the 'cross db ownership chaining' database flag for Cloud SQL Server instance is set to 'off'

Description:

It is recommended to set the cross db ownership chaining database flag for Cloud SQL SQL Server instances to off.

Use the cross db ownership chaining option to configure cross-database ownership chaining for an instance of Microsoft SQL Server. This server option allows you to control cross-database ownership chaining at the database level or to allow cross-database ownership chaining for all databases. Enabling cross db ownership chaining is not recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you are aware of the security implications of this setting. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.3 Ensure 'user connections' database flag for Cloud SQL SQL Server instance is set as appropriate

Description:

It is recommended to set the user connections database flag for Cloud SQL SQL Server instances to an organization-defined value.

The user connections option specifies the maximum number of simultaneous user connections allowed on an instance of SQL Server. The actual number of user connections allowed also depends on the version of SQL Server you are using, as well as the limits of your application(s) and hardware. SQL Server allows a maximum of 32,767 user connections. Because user connections is a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable. For example, if only 10 users are logged in, 10 user connection objects are allocated. In most cases, you do not have to change the value for this option. The default is 0, which means that the maximum (32,767) user connections are allowed. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.4 Ensure 'user options' database flag for Cloud SQL SQL Server instance is not configured

Description:

It is recommended that the user options database flag for Cloud SQL SQL Server instances not be configured.

The user options option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options option allows you to change the default values of the SET options (if the server's default settings are not appropriate). 

A user can override these defaults by using the SET statement. You can configure user options dynamically for new logins. After you change the setting of user options, new login sessions use the new setting; current login sessions are not affected. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.5 Ensure 'remote access' database flag for Cloud SQL SQL Server instance is set to 'off'

Description:

It is recommended to set the remote access database flag for Cloud SQL SQL Server instances to off.

The remote access option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. The default value for this option is 1, which grants permission to run local stored procedures from remote servers or remote stored procedures from the local server. To prevent local stored procedures from being run from a remote server, or remote stored procedures from being run on the local server, this option must be disabled. Because 'remote access' functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by off-loading query processing to a target, it should be disabled. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.6 Ensure '3625 (trace flag)' database flag for Cloud SQL SQL Server instance is set to 'off'

Description:

It is recommended to set the 3625 (trace flag) database flag for Cloud SQL SQL Server instances to off.

Trace flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems, but they may also be recommended by Microsoft Support to address behavior that is negatively impacting a specific workload. All documented trace flags and those recommended by Microsoft Support are fully supported in a production environment when used as directed. Trace flag 3625 limits the amount of information returned to users who are not members of the sysadmin fixed server role by masking the parameters of some error messages using '******', which can help prevent disclosure of sensitive information; hence it is recommended to disable this flag. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.7 Ensure that the 'contained database authentication' database flag for Cloud SQL server instance is set to 'off'

Description:

It is recommended to set the contained database authentication database flag for Cloud SQL SQL Server instances to off.

A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed. Users can connect to the database without authenticating a login at the Database Engine level. Isolating the database from the Database Engine makes it possible to easily move the database to another instance of SQL Server. Contained databases have some unique threats that should be understood and mitigated by SQL Server Database Engine administrators. Most of the threats are related to the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level; hence it is recommended to disable this flag. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.4 Ensure that Cloud SQL database instance requires all incoming connections to use SSL

Description:

It is recommended that all incoming connections to a SQL database instance be required to use SSL.

SQL database connections, if successfully intercepted (e.g., via a man-in-the-middle attack), can reveal sensitive data such as credentials, database queries, and query outputs. For security, it is recommended to always use SSL encryption when connecting to your instance. This recommendation is applicable to PostgreSQL, MySQL generation 1, MySQL generation 2, and SQL Server 2017 instances. 
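
For illustration, a minimal Python audit sketch against the Cloud SQL Admin API (google-api-python-client) that flags instances not requiring SSL; it also reports instances exposing a public IP, which anticipates check 6.6 below. The project ID is hypothetical and Application Default Credentials are assumed.

    from googleapiclient import discovery

    PROJECT = "my-project"  # hypothetical project ID

    sqladmin = discovery.build("sqladmin", "v1beta4")
    instances = sqladmin.instances().list(project=PROJECT).execute().get("items", [])

    for inst in instances:
        ip_config = inst.get("settings", {}).get("ipConfiguration", {})
        if not ip_config.get("requireSsl", False):
            print(f"{inst['name']}: incoming connections do not require SSL")
        for addr in inst.get("ipAddresses", []):
            if addr.get("type") == "PRIMARY":  # PRIMARY is the public address
                print(f"{inst['name']}: public IP {addr.get('ipAddress')} (see check 6.6)")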
Configuration Management
6.5 Database Server should accept connections only from trusted Network(s)/IP(s) and restrict access from the world

Description:

Database Server should accept connections only from trusted Network(s)/IP(s) and restrict access from the world. 
Configuration Management
6.6 Ensure that UDP access is restricted from the internet

Description:

Disable Internet exposed UDP ports on network security groups.
Configuration Management
6.6 Ensure that Cloud SQL database instances do not have public IPs

Description:

It is recommended to configure Second Generation SQL instances to use private IPs instead of public IPs.

To lower the organization's attack surface, Cloud SQL databases should not have public IPs. Private IPs provide improved network security and lower latency for your application. 
Configuration Management
7.1 Ensure that BigQuery datasets are not anonymously or publicly accessible

Description:

It is recommended that the IAM policy on BigQuery datasets does not allow anonymous and/or public access.

Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. Such access might not be desirable if sensitive data is being stored in the dataset. Therefore, ensure that anonymous and/or public access to a dataset is not allowed.
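
As a sketch of how this check can be automated, the google-cloud-bigquery client can list each dataset's access entries and flag the special allUsers and allAuthenticatedUsers members. Application Default Credentials and a default project are assumed.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses Application Default Credentials

    for item in client.list_datasets():
        dataset = client.get_dataset(item.reference)
        for entry in dataset.access_entries:
            if entry.entity_id in ("allUsers", "allAuthenticatedUsers"):
                print(f"{dataset.dataset_id}: public grant to {entry.entity_id} "
                      f"with role {entry.role}")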
Configuration Management
9.10 Ensure FTP deployments are disabled

Description:

By default, Azure Functions, Web and API Services can be deployed over FTP. If FTP is required for an essential deployment workflow, FTPS should be required for FTP login for all App Service Apps and Functions.
Data Security
2.1 Ensure CMKs are Rotated Annually

Description:

Following cryptographic best practice, we recommend rotating CMKs annually. Enabling automatic key rotation through AWS KMS is the easiest way to ensure this standard is followed. Using the automatic key rotation feature, AWS KMS will automatically generate new cryptographic material for a CMK every year, plus it will save the old cryptographic material. If you prefer, manual key rotation is also available.
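
As an illustrative sketch (not necessarily how this check is implemented), the rotation status of customer-managed keys can be audited and remediated with boto3; pagination and error handling are omitted for brevity.

    import boto3

    kms = boto3.client("kms")

    for key in kms.list_keys()["Keys"]:
        meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        # Rotation can only be toggled on enabled, symmetric, customer-managed keys.
        if (meta["KeyManager"] != "CUSTOMER" or meta["KeyState"] != "Enabled"
                or meta.get("KeySpec") != "SYMMETRIC_DEFAULT"):
            continue
        if not kms.get_key_rotation_status(KeyId=key["KeyId"])["KeyRotationEnabled"]:
            kms.enable_key_rotation(KeyId=key["KeyId"])
            print(f"Enabled annual rotation for {key['KeyId']}")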
Data Security
2.3 Ensure Encryption is Enabled for S3 Buckets

Description:

To enhance the security of S3 buckets, you must ensure encryption is properly enabled. Amazon gives you the ability to set default encryption settings using SSE-S3 or SSE-KMS to automatically encrypt all objects that are stored within an S3 bucket.
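
A hedged sketch of how such a check might be automated with boto3 follows; note that GetBucketEncryption raises an error when a bucket has no default encryption configuration.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            rules = s3.get_bucket_encryption(Bucket=name)[
                "ServerSideEncryptionConfiguration"]["Rules"]
            algo = rules[0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
            print(f"{name}: default encryption {algo}")
        except ClientError as err:
            if err.response["Error"]["Code"] == \
                    "ServerSideEncryptionConfigurationNotFoundError":
                print(f"{name}: no default encryption (consider SSE-S3 or SSE-KMS)")
            else:
                raise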
Data Security
2.4 Ensure Encryption is Enabled for EBS Volumes

Description:

To enhance the security of your EC2 instances, you must ensure that encryption is enabled for EBS volumes. AWS provides the Amazon EBS encryption feature, which uses KMS to encrypt EBS volumes. Amazon EBS encryption is supported by all volume types.

For more information, visit the AWS documentation on EBS encryption.
Data Security
2.5 Ensure Encryption is Enabled for DB Instances

Description:

To enhance the security of Amazon RDS data at rest in underlying storage, you must ensure that encryption is enabled for DB instances. Amazon RDS can encrypt DB instances using the industry-standard AES-256 encryption algorithm to encrypt data on the server that hosts DB instances.
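
For illustration, a boto3 sketch that flags DB instances without storage encryption; it also reports publicly accessible instances, anticipating check 2.9 below.

    import boto3

    rds = boto3.client("rds")

    for page in rds.get_paginator("describe_db_instances").paginate():
        for db in page["DBInstances"]:
            if not db.get("StorageEncrypted", False):
                print(f"{db['DBInstanceIdentifier']}: storage not encrypted")
            if db.get("PubliclyAccessible", False):
                print(f"{db['DBInstanceIdentifier']}: publicly accessible (see 2.9)")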
Data Security
2.6 Ensure All Load Balancers that Accept HTTPS Traffic Require TLS 1.2

Description:

To enhance the security of your Application Load Balancers (ALBs) and Network Load Balancers (NLBs), you must ensure that all load balancers that accept HTTPS traffic require, at a minimum, TLS 1.2. Older versions of TLS or legacy SSL protocols are known to have fatal security flaws and do not provide protection for data in transit.
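
One way to sketch this audit with boto3 is shown below; the allow-list of security policies is illustrative, not exhaustive, and should be adapted to your organization's standard.

    import boto3

    # Illustrative allow-list of TLS-1.2-or-newer ELB security policies.
    ALLOWED_POLICIES = {
        "ELBSecurityPolicy-TLS-1-2-2017-01",
        "ELBSecurityPolicy-TLS-1-2-Ext-2018-06",
        "ELBSecurityPolicy-TLS13-1-2-2021-06",
    }

    elbv2 = boto3.client("elbv2")

    for page in elbv2.get_paginator("describe_load_balancers").paginate():
        for lb in page["LoadBalancers"]:
            listeners = elbv2.describe_listeners(LoadBalancerArn=lb["LoadBalancerArn"])
            for lis in listeners["Listeners"]:
                if lis["Protocol"] in ("HTTPS", "TLS") and \
                        lis.get("SslPolicy") not in ALLOWED_POLICIES:
                    print(f"{lb['LoadBalancerName']}: port {lis['Port']} "
                          f"uses policy {lis.get('SslPolicy')}")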
Data Security
2.7 Ensure Versioning is Enabled for S3 Buckets

Description:

To support your backup and recovery processes, we recommend enabling versioning in S3. S3 versioning provides a way to keep your objects accurate, up to date, and protected. You can enable bucket versioning on both new buckets and buckets that already exist, and this keeps multiple versions of an object within a bucket.
Data Security
2.8 Ensure S3 Buckets are Not Publicly Available

Description:

A foundation to S3 security is to ensure S3 buckets are not publicly available. Using the Block Public Access feature, you can manage access to buckets, accounts, and access points. By default, new buckets, access points, and objects do not allow public access.
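
For illustration, a boto3 sketch that reports buckets without a full Block Public Access configuration; it also reports buckets without versioning, covering check 2.7 above. GetPublicAccessBlock raises an error when no configuration has been set.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        if s3.get_bucket_versioning(Bucket=name).get("Status") != "Enabled":
            print(f"{name}: versioning not enabled (check 2.7)")
        try:
            cfg = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            if not all(cfg.values()):
                print(f"{name}: Block Public Access only partially enabled: {cfg}")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{name}: no Block Public Access configuration")
            else:
                raise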
Data Security
2.9 Ensure RDS and DB Instances are Not Publicly Accessible

Description:

To enhance the security of AWS RDS and DB instances, they should not be publicly accessible. Restricting unauthorized access will minimize the risk of compromise to your DB instances. 

Data Security
2.10 Ensure CloudTrail Logs are Encrypted at Rest Using KMS CMKs

Description:

Recommendation 2.7 of the CIS AWS Foundations Benchmark is to ensure CloudTrail logs are encrypted at rest using KMS CMKs. The CIS explains that CloudTrail records AWS API calls for your account and delivers log files to you. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server-side encryption (SSE) and KMS customer-created master keys (CMKs) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS. 
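
A minimal boto3 sketch for this check: a trail whose description lacks a KmsKeyId is not using SSE-KMS.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    for trail in cloudtrail.describe_trails()["trailList"]:
        if not trail.get("KmsKeyId"):
            print(f"{trail['Name']}: logs not encrypted with a KMS CMK")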
Data Security
3.1 Ensure that 'Secure transfer required' is set to 'Enabled'

Description:

Enable data encryption in transit.

The secure transfer option enhances the security of a storage account by only allowing requests to the storage account over a secure connection. For example, when calling REST APIs to access storage accounts, the connection must use HTTPS. Any requests using HTTP will be rejected when 'secure transfer required' is enabled. When using the Azure Files service, connections without encryption will fail, including scenarios using SMB 2.1, SMB 3.0 without encryption, and some flavors of the Linux SMB client. Because Azure Storage doesn't support HTTPS for custom domain names, this option is not applied when using a custom domain name.
Data Security
3.8 Ensure soft delete is enabled for Azure Storage

Description:

Azure Storage blobs may contain sensitive data such as ePHI, financial, secret, or personal information. Data erroneously modified or accidentally deleted by an application or another storage account user can cause data loss or data unavailability.

It is recommended that Azure Storage be made recoverable by enabling the soft delete configuration, so that data can be saved and recovered when blobs or blob snapshots are deleted.
Data Security
3.9 Ensure storage for critical data are encrypted with Customer Managed Key

Description:

Enable sensitive data encryption at rest using Customer Managed Keys rather than Microsoft Managed keys.
Data Security
4.1.2 Ensure that 'Data encryption' is set to 'On' on a SQL Database

Description:

Enable Transparent Data Encryption on every SQL server.
Data Security
4.5 Ensure SQL server's TDE protector is encrypted with Customer-managed key

Description:

TDE with Customer-managed key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties.

With TDE, data is encrypted at rest with a symmetric key (called the database encryption key) stored in the database or data warehouse distribution. To protect this data encryption key (DEK) in the past, only a certificate that the Azure SQL Service managed could be used. Now, with Customer-managed key support for TDE, the DEK can be protected with an asymmetric key that is stored in the Key Vault. Key Vault is a highly available and scalable cloud-based key store which offers central key management, leverages FIPS 140-2 Level 2 validated hardware security modules (HSMs), and allows separation of management of keys and data, for additional security.

Based on business needs or the criticality of data/databases hosted on a SQL server, it is recommended that the TDE protector be encrypted by a key that is managed by the data owner (Customer-managed key).
Data Security
5.1.4 Ensure the storage account containing the container with activity logs is encrypted with BYOK (Bring Your Own Key)

Description:

The storage account with the activity log export container should be configured to use BYOK (Bring Your Own Key).
Data Security
7.2 Ensure that 'OS and Data' disks are encrypted with CMK

Description:

Ensure that OS disks (boot volumes) and data disks (non-boot volumes) are encrypted with CMK.
Data Security
7.2 Ensure that all BigQuery Tables are encrypted with Customer managed encryption key (CMEK)

Description:

BigQuery by default encrypts data at rest by employing envelope encryption using Google-managed cryptographic keys: the data is encrypted using data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys. This is seamless and does not require any additional input from the user. However, if you want greater control, Customer-managed encryption keys (CMEK) can be used as the encryption key management solution for BigQuery Data Sets. When CMEK is used, the CMEK is used to encrypt the data encryption keys instead of Google-managed encryption keys. 
Data Security
7.3 Ensure that a Default Customer-managed encryption key (CMEK) is specified for all BigQuery Data Sets

Description:

BigQuery by default encrypts data at rest by employing envelope encryption using Google-managed cryptographic keys: the data is encrypted using data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys. This is seamless and does not require any additional input from the user. However, if you want greater control, Customer-managed encryption keys (CMEK) can be used as the encryption key management solution for BigQuery Data Sets. 
Data Security
7.3 Ensure that 'Unattached disks' are encrypted with CMK

Description:

Ensure that unattached disks in a subscription are encrypted with a Customer Managed Key (CMK).
Data Security
8.1 Ensure that the expiration date is set on all key vault keys

Description:

Ensure that all keys in Azure Key Vault have an expiration time set.
Data Security
8.2 Ensure that the expiration date is set on all key vault secrets

Description:

Ensure that all Secrets in the Azure Key Vault have an expiration time set.
Data Security
8.4 Ensure that key vaults are recoverable

Description:

The key vault contains object keys, secrets, and certificates. Accidental unavailability of a key vault can cause immediate data loss or loss of security functions (authentication, validation, verification, non-repudiation, etc.) supported by the key vault objects.

It is recommended the key vault be made recoverable by enabling the "Do Not Purge" and "Soft Delete" functions. This is in order to prevent loss of encrypted data, including storage accounts, SQL databases, and/or dependent services provided by key vault objects (keys, secrets, certificates), as may happen in the case of accidental deletion by a user or disruptive activity by a malicious user.
Data Security
9.1 Ensure App Service Authentication is set on Azure App Service

Description:

Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app. If an anonymous request is received from a browser, App Service will redirect to a logon page. To handle the logon process, a choice from a set of identity providers can be made, or a custom authentication mechanism can be implemented.
Data Security
9.2 Ensure web app redirects all HTTP traffic to HTTPS in Azure App Service

Description:

Azure Web Apps allows sites to run under both HTTP and HTTPS by default. Web apps can be accessed by anyone using non-secure HTTP links by default. Non-secure HTTP requests can be restricted and all HTTP requests redirected to the secure HTTPS port. It is recommended to enforce HTTPS-only traffic.
Data Security
9.3 Ensure web app is using the latest version of TLS encryption

Description:

The TLS (Transport Layer Security) protocol secures transmission of data over the internet using standard encryption technology. Encryption should be set to the latest version of TLS. App Service allows TLS 1.2 by default, which is the TLS level recommended by industry standards such as PCI DSS.
Network Monitoring
2.1 Ensure that Cloud Audit Logging is configured properly across all services and all users from a project

Description:

It is recommended that Cloud Audit Logging be configured to track all admin activities and read/write access to user data.
 
Cloud Audit Logging maintains two audit logs for each project, folder, and organization: Admin Activity and Data Access.

  1. Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. Admin Activity audit logs are enabled for all services and cannot be configured.
  2. Data Access audit logs record API calls that create, modify, or read user-provided data. These are disabled by default and should be enabled. There are three kinds of Data Access audit log information:
     o Admin read: Records operations that read metadata or configuration information. (Admin Activity audit logs record writes of metadata and configuration information and cannot be disabled.)
     o Data read: Records operations that read user-provided data.
     o Data write: Records operations that write user-provided data.

It is recommended to have an effective default audit config configured in such a way that:

  1. logtype is set to DATA_READ (to log user activity tracking) and DATA_WRITE (to log changes/tampering to user data).
  2. audit config is enabled for all the services supported by the Data Access audit logs feature.

Logs should be captured for all users, i.e., there should be no exempted users in any of the audit config sections. This ensures that overriding the audit config cannot contradict the requirement. 
Network Monitoring
2.2 Ensure that sinks are configured for all Log entries

Description:

It is recommended to create a sink that will export copies of all log entries. This can help aggregate logs from multiple projects and export them to a Security Information and Event Management (SIEM) system. 
Network Monitoring
2.3 Ensure that retention policies on log buckets are configured using Bucket Lock

Description:

Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted. It is recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks. 
Network Monitoring
2.4 Ensure log metric filter and alerts exist for Project Ownership assignments/changes

Description:

In order to prevent unnecessary project ownership assignments to users/service accounts, and further misuse of projects and resources, all roles/Owner assignments should be monitored. Members (users/service accounts) with a role assignment to the primitive role roles/Owner are project owners. The project owner has all the privileges on the project the role belongs to. These are summarized below:

  • All viewer permissions on all GCP Services within the project
  • Permissions for actions that modify the state of all GCP services within the project
  • Manage roles and permissions for a project and all resources within the project
  • Set up billing for a project

Granting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary. 
Network Monitoring
2.5 Ensure log metric filter and alerts exist for Audit Configuration Changes

Description:

Google Cloud Platform (GCP) services write audit log entries to the Admin Activity and Data Access logs to help answer the questions of, "who did what, where, and when?" within GCP projects. 

Cloud audit logging records information including the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by GCP services. Cloud audit logging provides a history of GCP API calls for an account, including API calls made via the console, SDKs, command-line tools, and other GCP services. 
Network Monitoring
2.6 Ensure log metric filter and alerts exist for Custom Role changes

Description:

Google Cloud IAM provides predefined roles that give granular access to specific Google Cloud Platform resources and prevent unwanted access to other resources. However, to cater to organization-specific needs, Cloud IAM also provides the ability to create custom roles. Project owners and administrators with the Organization Role Administrator role or the IAM Role Administrator role can create custom roles. Monitoring role creation, deletion and updating activities will help in identifying any over-privileged role at early stages. 
Network Monitoring
2.7 Ensure log metric filter and alerts exist for VPC Network Firewall rule changes

Description:

It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) Network Firewall rule changes.

Monitoring for Create or Update Firewall rule events gives insight to network access changes and may reduce the time it takes to detect suspicious activity. 
Network Monitoring
2.8 Ensure log metric filter and alerts exist for VPC network route changes

Description:

It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network route changes.

Google Cloud Platform (GCP) routes define the paths network traffic takes from a VM instance to another destination. The other destination can be inside the organization VPC network (such as another VM) or outside of it. Every route consists of a destination and a next hop. Traffic whose destination IP is within the destination range is sent to the next hop for delivery. 

Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path. 
Network Monitoring
2.9 Ensure log metric filter and alerts exist for VPC network changes

Description:

It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network changes. 

It is possible to have more than one VPC within a project. In addition, it is also possible to create a peer connection between two VPCs enabling network traffic to route between VPCs. 

Monitoring changes to a VPC will help ensure that VPC traffic flow is not adversely impacted. 
Network Monitoring
2.10 Ensure log metric filter and alerts exist for Cloud Storage IAM permission changes

Description:

It is recommended that a metric filter and alarm be established for Cloud Storage Bucket IAM changes. 

Monitoring changes to cloud storage bucket permissions may reduce the time needed to detect and correct permissions on sensitive cloud storage buckets and the objects inside them. 
Network Monitoring
2.11 Enable automatic provisioning of the monitoring agent to collect security data.

Description:

Enable automatic provisioning of the monitoring agent to collect security data.
Network Monitoring
2.11 Ensure log metric filter and alerts exist for SQL instance configuration changes

Description:

It is recommended that a metric filter and alarm be established for SQL instance configuration changes.
 
Monitoring changes to SQL instance configuration may reduce the time needed to detect and correct misconfigurations on the SQL server. 
Below are a few of the configurable options which may impact the security posture of an SQL instance:

  • Enable auto backups and high availability: Misconfiguration may adversely impact business continuity, disaster recovery, and high availability
  • Authorize networks: Misconfiguration may increase exposure to untrusted networks 
Network Monitoring
2.14 Ensure that 'Notify about alerts with the following severity' is set to 'High'

Description:

Enable emailing security alerts to the subscription owner or other designated security contact.
Network Monitoring
3.8 Ensure VPC Flow logs is enabled for every subnet in VPC Network

Description:

Flow Logs is a feature that enables users to capture information about the IP traffic going to and from network interfaces in the organization's VPC Subnets. Once a flow log is created, the user can view and retrieve its data in Stackdriver Logging. It is recommended that Flow Logs be enabled for every business-critical VPC subnet. 
Network Monitoring
4.1 Ensure Insecure Ports are Not In Use

Description:

It is best practice not to use insecure ports and protocols such as FTP, Telnet, and SNMP. Using insecure ports and protocols makes it possible for your traffic to be sent unencrypted over the Internet. To ensure protection, you can utilize the inbound rules for security groups. Security groups act as a firewall in your AWS environment and their inbound rules will show you areas of vulnerability in terms of insecure ports and protocols.

For more information, visit the AWS documentation on security group rules.
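
Because many of the checks below (4.3, 4.5, 4.9, and 4.13 through 4.24) are variations on the same inspection, a single boto3 sketch can scan every security group for rules open to the world on any of the flagged ports. The port list mirrors the checks in this section; protocol is ignored for simplicity.

    import boto3

    FLAGGED_PORTS = {22, 3389, 1433, 1521, 2483, 3306, 5432, 6379,
                     7199, 8888, 9160, 11211, 27017, 27018}

    ec2 = boto3.client("ec2")

    def rule_is_open(perm):
        """True if the permission allows ingress from anywhere (IPv4 or IPv6)."""
        v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        v6 = any(r.get("CidrIpv6") == "::/0" for r in perm.get("Ipv6Ranges", []))
        return v4 or v6

    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                # FromPort/ToPort are absent on "all traffic" rules, which this
                # treats as covering every port (that is check 4.9).
                lo = perm.get("FromPort", 0)
                hi = perm.get("ToPort", 65535)
                open_ports = FLAGGED_PORTS.intersection(range(lo, hi + 1))
                if open_ports and rule_is_open(perm):
                    print(f"{sg['GroupId']} ({sg['GroupName']}): "
                          f"ports {sorted(open_ports)} open to the world")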
Network Monitoring
4.1.1 Ensure that 'Auditing' is set to 'On'

Description:

Enable auditing on SQL Servers.
Network Monitoring
4.1.3 Ensure that 'Auditing' Retention is 'greater than 90 days'

Description:

SQL Server Audit Retention should be configured to be greater than 90 days.
Network Monitoring
4.2 Ensure EC2 Instances are Not Directly Connected to the Internet

Description:

To build a strong perimeter, you must ensure that EC2 instances are not directly connected to the Internet. You can deploy EC2 instances in two ways: on an internal network, or with a public IP address assigned. For security purposes, you want to ensure that, when possible, your EC2 instances live on an internal network and you do not allow direct Internet access to them.

For more information, visit the AWS documentation on networking in Amazon EC2.
Network Monitoring
4.3 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Port 3389

Description:

Recommendation 4.2 of the CIS AWS Foundations Benchmark is to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to port 3389. The CIS explains how security groups provide stateful filtering of ingress/egress network traffic to AWS resources. By removing unfettered connectivity to remote console services, such as RDP, this reduces a server's exposure to risk.
Network Monitoring
4.4 Ensure All Outbound Traffic is Routed Through a NAT Gateway

Description:

Just because you create a NAT gateway does not mean that EC2 instances can send outbound traffic through it to the Internet. You need to define a route that allows EC2 instances on a private subnet to reach the NAT gateway and then go out to the Internet. This is where route tables come into play. AWS defines a route table as a set of rules (called routes) that are used to determine where network traffic from your subnet or gateway is directed. Once the route is saved, instances in that subnet are able to route outbound traffic to the Internet.

For more information, visit the AWS documentation on NAT gateways.

Network Monitoring
4.5 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Port 22

Description:

Recommendation 4.1 of the CIS AWS Foundations Benchmark is to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to port 22. The CIS explains how security groups provide stateful filtering of ingress/egress network traffic to AWS resources. By removing unfettered connectivity to remote console services, such as SSH, this reduces a server's exposure to risk.
Network Monitoring
4.6 Ensure RDS Instances are Only Accessible by Internal IPs

Description:

It is a security best practice to ensure that the Amazon Relational Database Service (Amazon RDS) instances are not publicly accessible. This could lead to potential data loss as you are giving direct access to your database when it is exposed to the internet.
Network Monitoring
4.7 Ensure that No More than 1 Host Per Account is Allowed External Access to Port 22

Description:

To enhance your security perimeter, you must ensure that you do not have multiple hosts with external access to administrative ports, like port 22, and that you implement the principle of least privilege around administrative ports. You can use the Session Manager feature in AWS Systems Manager, or bastion hosts, to ensure that no more than one host per account is allowed external access to port 22.

Network Monitoring
4.8 Ensure Backups are Enabled for RDS

Description:

For business continuity and disaster recovery purposes, it’s extremely important to have the maintenance and backup settings in your Amazon RDS correctly configured. If configured properly, your RDS can easily manage your backups, patching, failure detection, and more.

For more information, visit the AWS documentation on working with RDS backups.
Network Monitoring
4.9 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to All Ports

Description:

If your security groups allow unrestricted traffic (0.0.0.0/0 or ::/0) to all ports, this invalidates everything the security groups are working to accomplish as a virtual firewall. You must ensure that your security groups do not allow unrestricted traffic to all ports. 

Network Monitoring
4.10 Ensure Default VPC Security Groups Restrict Traffic

Description:

Recommendation 4.3 of the CIS AWS Foundations Benchmark is to ensure that the default security group of every VPC restricts all traffic. The CIS explains that a VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Configuring all VPC default security groups to restrict all traffic will encourage least-privilege security group development and mindful placement of AWS resources into security groups, which will in turn reduce the exposure of those resources.

For more information, visit the AWS documentation on default security groups.
Network Monitoring
4.11 Ensure WAF Rules are Logged

Description:

Best practice for AWS WAF rules is to maintain logging in order to gain details about the traffic that is analyzed by your web ACLs. AWS explains that the information contained in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the action for the rule that each request matched. You send logs from your web ACL to an Amazon Kinesis Data Firehose with a configured storage destination. After you enable logging, AWS WAF delivers logs to your storage destination through the HTTPS endpoint of Kinesis Data Firehose.
Network Monitoring
4.12 Ensure Internet-Facing ALBs Have WAF ACLs Attached

Description:

Internet-facing ALBs need to have WAF ACLs attached. WAF, as part of your security posture, will sit in front of your ALBs and provide a web ACL to block malicious traffic on the load balancer side. To implement this best practice, you must understand which load balancers are attached to the WAF through web ACLs.
Network Monitoring
4.13 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Oracle Ports 1521 and 2483

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to Oracle ports 1521 and 2483. These are common ports that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.15 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to MySQL Port 3306

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to MySQL port 3306. This is a common port that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.16 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to PostgreSQL Port 5432

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to PostgreSQL port 5432. This is a common port that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.17 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Redis Port 6379

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to Redis port 6379. This is a common port that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.18 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to MongoDB Ports 27017 and 27018

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to MongoDB ports 27017 and 27018. These are common ports that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.20 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Cassandra Ports 7199, 8888, and 9160

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to Cassandra ports 7199, 8888, and 9160. These are common ports that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.23 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Memcached Port 11211

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to Memcached port 11211. This is a common port that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.24 Ensure Security Groups Do Not Allow Ingress from 0.0.0.0/0 or ::/0 to Microsoft SQL Server Port 1433

Description:

It is best practice to ensure that security groups do not allow ingress from 0.0.0.0/0 or ::/0 to Microsoft SQL Server port 1433. This is a common port that should not have unrestricted connectivity to your environment. By restricting access through the inbound rules for security groups, you will reduce risk exposure.
Network Monitoring
4.25 Ensure API Gateways Have WAF ACLs Attached

Description:

By using AWS WAF in combination with web ACLs, you can control how a variety of services (including API Gateways) respond to web requests, as well as inspect web requests to see if they match your criteria. The Amazon API Gateway service is used to create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs.

For more information, visit the AWS documentation on Amazon API Gateway.
Network Monitoring
5.1.1 Ensure that a 'Diagnostics Setting' exists

Description:

Enable diagnostic settings for exporting activity logs. Diagnostic settings are available for each individual resource within a subscription. Settings should be configured for all appropriate resources in your environment.
Network Monitoring
5.1.2 Ensure Diagnostic Setting captures appropriate categories

Description:

The diagnostic setting should be configured to log the appropriate activities from the control/management plane.
Network Monitoring
5.1.5 Ensure that logging for Azure Key Vault is 'Enabled'

Description:

Enable AuditEvent logging for key vault instances to ensure interactions with key vaults are logged and available.
Network Monitoring
5.2.1 Ensure that Activity Log Alert exists for Create Policy Assignment

Description:

Create an activity log alert for the Create Policy Assignment event.
Network Monitoring
5.2.2 Ensure that Activity Log Alert exists for Delete Policy Assignment

Description:

Create an activity log alert for the Delete Policy Assignment event.
Network Monitoring
5.2.3 Ensure that Activity Log Alert exists for Creating or Updating a Network Security Group

Description:

Create an Activity Log Alert for the "Create" or "Update Network Security Group" event.
Network Monitoring
5.2.4 Ensure that Activity Log Alert exists for Delete Network Security Group

Description:

Create an activity log alert for the Delete Network Security Group event.
Network Monitoring
5.2.5 Ensure that Activity Log Alert exists for Create or Update Network Security Group Rule

Description:

Create an activity log alert for the Create or Update Network Security Group Rule event.
Network Monitoring
5.2.6 Ensure that Activity Log alert exists for the Delete Network Security Group Rule

Description:

Create an activity log alert for the Delete Network Security Group Rule event.
Network Monitoring
5.2.7 Ensure that Activity Log Alert exists for Create or Update Security Solution

Description:

Create an activity log alert for the Create or Update Security Solution event.
Network Monitoring
5.2.8 Ensure that Activity Log Alert exists for Delete Security Solution

Description:

Create an activity log alert for the Delete Security Solution event.
Network Monitoring
5.2.9 Ensure that Activity Log Alert exists for Create or Update or Delete SQL Server Firewall Rule

Description:

Create an activity log alert for the Create or Update or Delete SQL Server Firewall Rule event.
Network Monitoring
5.3 Ensure that Diagnostic Logs are enabled for all services which support it

Description:

Diagnostic Logs capture activity to the data access plane while the Activity log is a subscription-level log for the control plane. Resource-level diagnostic logs provide insight into operations that were performed within that resource itself, for example, getting a secret from a Key Vault. Currently, 32 Azure resources support Diagnostic Logging (See the references section for a complete list), including Network Security Groups, Load Balancers, Key Vault, AD, Logic Apps and CosmosDB. The content of these logs varies by resource type. For example, Windows event system logs are a category of diagnostics logs for VMs, and blob, table, and queue logs are categories of diagnostics logs for storage accounts.

A number of back-end services may not be configured to log and store Diagnostic Logs for certain activities or for a sufficient length of time. It is crucial that logging systems are correctly configured to log all relevant activities and retain those logs for a sufficient length of time. By default, Diagnostic Logs are not enabled. Given that the mean time to detection in an enterprise is 240 days, a minimum retention period of two years is recommended.

Note: The CIS Benchmark covers some specific Diagnostic Logs separately.

See also: 3.3 Ensure Storage logging is enabled for Queue service for read, write, and delete requests; and 6.4 Ensure that Network Security Group Flow Log retention period is 'greater than 90 days'.
Network Monitoring
6.4 Ensure that Network Security Group Flow Log retention period is 'greater than 90 days'

Description:

Network Security Group Flow Logs should be enabled, and the retention period should be set to 90 days or greater.
Incident Response
2.13 Ensure 'Additional email addresses' is configured with a security contact email

Description:

Security Center emails the subscription owners whenever a high-severity alert is triggered for their subscription. You should provide a security contact email address as an additional email address.
Incident Response
5.1 Ensure CloudTrail is Enabled in All Regions

Description:

Recommendation 2.1 of the CIS AWS Foundations Benchmark is to ensure CloudTrail is enabled in all regions. The CIS explains that CloudTrail records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

For more information, visit the AWS documentation on best practices in CloudTrail.
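
An illustrative boto3 sketch covering this check and the log file validation check that follows (5.2):

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    trails = cloudtrail.describe_trails()["trailList"]

    if not any(t.get("IsMultiRegionTrail") for t in trails):
        print("No multi-region trail found")

    for t in trails:
        if not t.get("LogFileValidationEnabled"):
            print(f"{t['Name']}: log file integrity validation disabled (see 5.2)")
        status = cloudtrail.get_trail_status(Name=t["TrailARN"])
        if not status.get("IsLogging"):
            print(f"{t['Name']}: trail exists but is not logging")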
Incident Response
5.2 Ensure CloudTrail Log File Integrity Validation is Enabled

Description:

Recommendation 2.2 of the CIS AWS Foundations Benchmark states that enabling log file validation will provide additional integrity checking of CloudTrail logs. CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log.
Incident Response
5.3 Ensure the S3 Bucket Used to Store CloudTrail Logs is Not Publicly Accessible

Description:

Recommendation 2.3 of the CIS AWS Foundations Benchmark states that, by using bucket policies or ACLs, S3 buckets used to store CloudTrail logs should prevent public access. Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account's use or configuration.
Incident Response
5.4 Ensure CloudTrail is Integrated with CloudWatch

Description:

Recommendation 2.4 of the CIS AWS Foundations Benchmark states that CloudTrail logs need to be integrated with CloudWatch Logs. Sending CloudTrail logs to CloudWatch Logs facilitates real-time and historic activity logging based on user, API, resource, and IP address, and provides the opportunity to establish alarms and notifications for anomalous or sensitive account activity.
Incident Response
5.5 Ensure AWS Config is Enabled in All Regions

Description:

Recommendation 2.5 of the CIS AWS Foundations Benchmark is to ensure AWS Config is enabled in all regions. The CIS explains that AWS Config performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item, relationships between configuration items, and any configuration changes between resources. The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing.
Incident Response
5.6 Ensure Logging is Enabled for Access to CloudTrail S3 Bucket

Description:

Recommendation 2.6 of the CIS AWS Foundations Benchmark is to ensure CloudTrail S3 bucket logging is enabled, which generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events that may affect objects within a target bucket. Configuring logs to be placed in a separate bucket allows access to log information, which can be useful in security and incident response workflows.
Incident Response
5.7 Ensure VPC Flow Logs are Enabled in All VPCs

Description:

Recommendation 2.9 of the CIS AWS Foundations Benchmark states that in order to capture information about the IP traffic going to and from network interfaces in your VPC, you must enable VPC flow logging in all VPCs. VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows.

For more information, visit the AWS documentation on VPC flow logs.
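
A minimal boto3 sketch of this check is shown below; it is per-region and, for simplicity, does not credit flow logs attached at the subnet or network-interface level to their parent VPC.

    import boto3

    ec2 = boto3.client("ec2")

    vpc_ids = {v["VpcId"] for v in ec2.describe_vpcs()["Vpcs"]}
    covered = {f["ResourceId"] for f in ec2.describe_flow_logs()["FlowLogs"]}

    for vpc_id in sorted(vpc_ids - covered):
        print(f"{vpc_id}: no VPC flow log configured")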
Vulnerability Management
3.1 Ensure EC2 Instances are Managed by Systems Manager

Description:

To strengthen your security posture, we recommend configuring EC2 instances as managed instances under AWS Systems Manager. AWS Config can use the ec2-instance-managed-by-systems-manager rule to check whether the EC2 instances in your account are managed by Systems Manager.
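
As a sketch, the same comparison can be made directly with boto3 by diffing running EC2 instances against the Systems Manager inventory for the region.

    import boto3

    ec2 = boto3.client("ec2")
    ssm = boto3.client("ssm")

    running = set()
    for page in ec2.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
        for reservation in page["Reservations"]:
            running.update(i["InstanceId"] for i in reservation["Instances"])

    managed = set()
    for page in ssm.get_paginator("describe_instance_information").paginate():
        managed.update(i["InstanceId"] for i in page["InstanceInformationList"])

    for instance_id in sorted(running - managed):
        print(f"{instance_id}: not managed by Systems Manager")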
Vulnerability Management
3.3 Ensure a VPC Endpoint is Used to Access Systems Manager

Description:

AWS explains that you can improve the security posture of your managed instances by configuring AWS Systems Manager to use an interface VPC endpoint. An interface VPC endpoint enables you to connect to services powered by AWS PrivateLink, a technology that enables you to privately access Amazon EC2 and Systems Manager APIs by using private IP addresses. This means that your managed instances do not require access to the Internet.

For more information, visit the AWS documentation on creating VPC endpoints.
Vulnerability Management
3.4 Ensure All EC2 Instances Managed by Systems Manager are Compliant with Patch Baseline

Description:

Patch management is an integral component of vulnerability management. To ensure your EC2 instances are compliant with patching standards, you must use AWS Systems Manager to apply patch baselines to instances. To associate a specific patch baseline with your instances, add the EC2 instances to a patch group, then associate the patch group with the patch baseline.

For more information, visit the AWS documentation on creating a patch group.
Vulnerability Management
4.2.2 Ensure that Vulnerability Assessments on an SQL server is configured

Description:

Enable Vulnerability Assessment (VA) service scans for critical SQL servers and corresponding SQL databases.
Vulnerability Management
4.2.3 Ensure that VA setting Periodic Recurring Scans is enabled on a SQL server

Description:

Enable Vulnerability Assessment (VA) Periodic recurring scans for critical SQL servers and corresponding SQL databases.
Vulnerability Management
4.2.4 Ensure that VA setting Send Scan Reports is configured for a SQL server

Description:

Configure 'Send scan reports to' with the email IDs of concerned data owners/stakeholders for critical SQL servers.
Vulnerability Management
4.2.5 Ensure that VA setting 'Also send email notifications to admins and subscription owners' is set for an SQL server

Description:

Enable Vulnerability Assessment (VA) setting 'Also send email notifications to admins and subscription owners'.
Backup and Restoration
6.7 Ensure that Cloud SQL database instances are configured with automated backups

Description:

It is recommended to have all SQL database instances set to enable automated backups.

Backups provide a way to restore a Cloud SQL instance to recover lost data or recover from a problem with that instance. Automated backups need to be set for any instance that contains data that should be protected from loss or damage. This recommendation is applicable to SQL Server, PostgreSQL, MySQL generation 1, and MySQL generation 2 instances.
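
For illustration, a minimal sketch against the Cloud SQL Admin API (hypothetical project ID, Application Default Credentials assumed) that flags instances with automated backups disabled:

    from googleapiclient import discovery

    PROJECT = "my-project"  # hypothetical project ID

    sqladmin = discovery.build("sqladmin", "v1beta4")
    instances = sqladmin.instances().list(project=PROJECT).execute().get("items", [])

    for inst in instances:
        backup = inst.get("settings", {}).get("backupConfiguration", {})
        if not backup.get("enabled", False):
            print(f"{inst['name']}: automated backups disabled")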