Checks We Perform

Below is a list of all the checks we perform, with details for each.
Logical Access
1.1 Ensure that corporate login credentials are used

Description:

It is recommended that fully managed corporate Google accounts be used for increased visibility, auditing, and control over access to Cloud Platform resources. Email accounts based outside of the user's organization, such as personal accounts, should not be used for business purposes.
Logical Access
1.2 Ensure that multi-factor authentication is enabled for all non-service accounts

Description:

Set up multi-factor authentication for Google Cloud Platform accounts.

Multi-factor authentication requires more than one mechanism to authenticate a user. This secures user logins from attackers exploiting stolen or weak credentials. 
Logical Access
1.3 Ensure that Security Key Enforcement is enabled for all admin accounts

Description:

Set up Security Key Enforcement for Google Cloud Platform admin accounts.

Google Cloud Platform users with Organization Administrator roles have the highest level of privilege in the organization. These accounts should be protected with the strongest form of two-factor authentication: Security Key Enforcement. Ensure that admins use Security Keys to log in instead of weaker second factors like SMS or one-time passwords (OTP). Security Keys are actual physical keys used to access Google Organization Administrator Accounts. They send an encrypted signature rather than a code, ensuring that logins cannot be phished. 
Logical Access
1.4 Ensure that there are only GCP-managed service account keys for each service account

Description:

User-managed service accounts should not have user-managed keys.
 
 Anyone who has access to the keys will be able to access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis. User-managed keys are created, downloadable, and managed by users. They expire 10 years from creation.
 For user-managed keys, the user has to take ownership of key management activities which include:
 - Key storage
 - Key distribution
 - Key revocation
 - Key rotation
 - Protecting the keys from unauthorized users
 - Key recovery
 Even with key owner precautions, keys can be easily leaked through common development malpractices like checking keys into source code, leaving them in the Downloads directory, or accidentally posting them on support blogs/channels.
 It is recommended to prevent the use of user-managed service account keys.
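
The following is a minimal sketch of how this check could be automated with the google-api-python-client library, assuming Application Default Credentials are configured; the project ID is a hypothetical placeholder.

  # Sketch: enumerate user-managed keys for every service account in a project.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  iam = discovery.build("iam", "v1")
  accounts = iam.projects().serviceAccounts().list(
      name=f"projects/{PROJECT_ID}").execute()

  for account in accounts.get("accounts", []):
      keys = iam.projects().serviceAccounts().keys().list(
          name=account["name"], keyTypes="USER_MANAGED").execute()
      for key in keys.get("keys", []):
          print("User-managed key found:", key["name"])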
Logical Access
1.5 Ensure that ServiceAccount has no Admin privileges.

Description:

Service accounts represent the service-level security of a resource (an application or a VM), which is determined by the roles assigned to it. Enrolling a service account with Admin rights gives full access to the assigned application or VM. A holder of service account access can perform critical actions like deleting or updating resources and changing settings without user intervention. For this reason, it is recommended that service accounts not have Admin rights.
Logical Access
1.6 Ensure that IAM users are not assigned Service Account User role at project level

Description:

It is recommended to assign the Service Account User (iam.serviceAccountUser) and Service Account Token Creator (iam.serviceAccountTokenCreator) roles to a user for a specific service account rather than assigning the role to a user at project level. 
Logical Access
1.7 Ensure user-managed/external keys for service accounts are rotated every 90 days or less

Description:

Service account keys consist of a key ID (Private_key_Id) and a private key, which are used to sign programmatic requests users make to Google Cloud services accessible to that particular service account. It is recommended that all service account keys are regularly rotated.
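
As an illustration, a key-age check might look like the sketch below, again assuming google-api-python-client and Application Default Credentials; the service account email is a hypothetical placeholder, and validAfterTime is treated as the key's creation time.

  # Sketch: flag user-managed keys older than 90 days.
  from datetime import datetime, timedelta, timezone
  from googleapiclient import discovery

  SA_EMAIL = "my-sa@my-project.iam.gserviceaccount.com"  # hypothetical
  MAX_AGE = timedelta(days=90)

  iam = discovery.build("iam", "v1")
  keys = iam.projects().serviceAccounts().keys().list(
      name=f"projects/-/serviceAccounts/{SA_EMAIL}",
      keyTypes="USER_MANAGED").execute()

  for key in keys.get("keys", []):
      # validAfterTime marks when the key became valid (its creation).
      created = datetime.fromisoformat(
          key["validAfterTime"].replace("Z", "+00:00"))
      if datetime.now(timezone.utc) - created > MAX_AGE:
          print("Rotate:", key["name"], "created", created.date())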
Logical Access
1.8 Ensure that Separation of duties is enforced while assigning service account related roles to users

Description:

It is recommended that the principle of 'Separation of Duties' is enforced while assigning service-account related roles to users.
 
The built-in/predefined IAM role Service Account Admin allows the user/identity to create, delete, and manage service account(s). The built-in/predefined IAM role Service Account User allows the user/identity (with adequate privileges on Compute and App Engine) to assign service account(s) to Apps/Compute Instances.

Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud IAM - service accounts, this could be an action such as using a service account to access resources that the user should not normally have access to.

Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.

No user should have Service Account Admin and Service Account User roles assigned at the same time. 
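
One way to test this condition is to compare role memberships in the project IAM policy, as in the hedged sketch below (google-api-python-client assumed; the project ID is a placeholder).

  # Sketch: find members holding both Service Account Admin and
  # Service Account User at the project level.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  crm = discovery.build("cloudresourcemanager", "v1")
  policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

  admins, users = set(), set()
  for binding in policy.get("bindings", []):
      if binding["role"] == "roles/iam.serviceAccountAdmin":
          admins.update(binding.get("members", []))
      elif binding["role"] == "roles/iam.serviceAccountUser":
          users.update(binding.get("members", []))

  for member in sorted(admins & users):
      print("Separation-of-duties violation:", member)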
Logical Access
1.9 Ensure that Cloud KMS cryptokeys are not anonymously or publicly accessible

Description:

It is recommended that the IAM policy on Cloud KMS cryptokeys restrict anonymous and/or public access.

Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the cryptokey. Such access might not be desirable if sensitive data is encrypted with the key. In this case, ensure that anonymous and/or public access to a Cloud KMS cryptokey is not allowed.
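
A minimal sketch of this check against the Cloud KMS API follows; the key ring path is a hypothetical placeholder and google-api-python-client is assumed.

  # Sketch: flag crypto keys whose IAM policy grants access to allUsers
  # or allAuthenticatedUsers.
  from googleapiclient import discovery

  KEY_RING = "projects/my-project/locations/global/keyRings/my-ring"  # hypothetical

  kms = discovery.build("cloudkms", "v1")
  crypto_keys = kms.projects().locations().keyRings().cryptoKeys()

  for key in crypto_keys.list(parent=KEY_RING).execute().get("cryptoKeys", []):
      policy = crypto_keys.getIamPolicy(resource=key["name"]).execute()
      for binding in policy.get("bindings", []):
          public = {"allUsers", "allAuthenticatedUsers"} & set(
              binding.get("members", []))
          if public:
              print(key["name"], binding["role"], "granted to", public)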
Logical Access
1.10 Ensure Encryption keys are rotated within a period of 90 days

Description:

Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management. 

The format for the rotation schedule depends on the client library that is used. For the gcloud command-line tool, the next rotation time must be in ISO or RFC3339 format, and the rotation period must be in the form INTEGER[UNIT], where units can be one of seconds (s), minutes (m), hours (h) or days (d). 
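
For example, a 90-day schedule is 7776000 seconds (90d in gcloud's INTEGER[UNIT] form). The hedged sketch below sets it through the Cloud KMS REST API via google-api-python-client; the key path is a hypothetical placeholder.

  # Sketch: set a 90-day rotation schedule on a crypto key.
  from datetime import datetime, timedelta, timezone
  from googleapiclient import discovery

  KEY = ("projects/my-project/locations/global/"
         "keyRings/my-ring/cryptoKeys/my-key")  # hypothetical placeholder

  next_rotation = (datetime.now(timezone.utc)
                   + timedelta(days=90)).isoformat().replace("+00:00", "Z")

  kms = discovery.build("cloudkms", "v1")
  kms.projects().locations().keyRings().cryptoKeys().patch(
      name=KEY,
      updateMask="rotationPeriod,nextRotationTime",
      body={
          "rotationPeriod": "7776000s",       # 90 days, in seconds
          "nextRotationTime": next_rotation,  # RFC 3339 timestamp
      },
  ).execute()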
Logical Access
1.11 Ensure that Separation of duties is enforced while assigning KMS related roles to users

Description:

It is recommended that the principle of 'Separation of Duties' is enforced while assigning KMS related roles to users. 

The built-in/predefined IAM role Cloud KMS Admin allows the user/identity to create, delete, and manage Cloud KMS key ring(s) and key(s). The built-in/predefined IAM role Cloud KMS CryptoKey Encrypter/Decrypter allows the user/identity (with adequate privileges on concerned resources) to encrypt and decrypt data at rest using an encryption key(s). 

The built-in/predefined IAM role Cloud KMS CryptoKey Encrypter allows the user/identity (with adequate privileges on concerned resources) to encrypt data at rest using an encryption key(s). The built-in/predefined IAM role Cloud KMS CryptoKey Decrypter allows the user/identity (with adequate privileges on concerned resources) to decrypt data at rest using an encryption key(s). 

Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud KMS, this could be an action such as using a key to access and decrypt data a user should not normally have access to. Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice. 

No user(s) should have Cloud KMS Admin and any of the Cloud KMS CryptoKey Encrypter/Decrypter, Cloud KMS CryptoKey Encrypter, Cloud KMS CryptoKey Decrypter roles assigned at the same time. 
Logical Access
1.12 Ensure API keys are not created for a project

Description:

Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use standard authentication flow instead.
 
 Security risks involved in using API-Keys appear below:
 - API keys are simple encrypted strings
 - API keys do not identify the user or the application making the API request
 - API keys are typically accessible to clients, making it easy to discover and steal an API key
 To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. 
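
A minimal sketch of detecting any API keys in a project follows, assuming the API Keys API (apikeys.googleapis.com) is enabled and google-api-python-client is available; the project ID is a placeholder.

  # Sketch: flag any API keys present in a project.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  apikeys = discovery.build("apikeys", "v2")
  resp = apikeys.projects().locations().keys().list(
      parent=f"projects/{PROJECT_ID}/locations/global").execute()

  for key in resp.get("keys", []):
      print("API key present; prefer standard auth flows:", key["name"])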
Logical Access
1.13 Ensure API keys are restricted to use by only specified Hosts and Apps

Description:

Unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API key usage to trusted hosts, HTTP referrers and apps.
 
 Security risks involved in using API-Keys appear below:
 - API keys are simple encrypted strings
 - API keys do not identify the user or the application making the API request
 - API keys are typically accessible to clients, making it easy to discover and steal an API key
 In light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
 In order to reduce attack vectors, API keys can be restricted to use only by trusted hosts, HTTP referrers, and applications.
Logical Access
1.14 Ensure API keys are restricted to only APIs that application needs access

Description:

API keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API keys to use (call) only APIs required by an application.
 
 Security risks involved in using API-Keys are below:
 - API keys are simple encrypted strings
 - API keys do not identify the user or the application making the API request
 - API keys are typically accessible to clients, making it easy to discover and steal an API key
 In light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
 In order to reduce the attack surface and provide least privilege, API keys can be restricted to use (call) only the APIs required by an application.
Logical Access
1.15 Ensure API keys are rotated every 90 days

Description:

It is recommended to rotate API keys every 90 days.
 
 Security risks involved in using API-Keys are listed below:
 - API keys are simple encrypted strings
 - API keys do not identify the user or the application making the API request
 - API keys are typically accessible to clients, making it easy to discover and steal an API key
 Because of these potential risks, Google recommends using the standard authentication flow instead of API Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
 Once a key is stolen, it has no expiration, meaning it may be used indefinitely unless the project owner revokes or regenerates the key. Rotating API keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.
 API keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen. 
Logical Access
4.1 Ensure that instances are not configured to use the default compute engine service account

Description:

It is recommended to configure your instance to not use the default Compute Engine service account because it has the Editor role on the project. 
Logical Access
4.2 Ensure that instances are not configured to use the default service account with full access to all Cloud APIs

Description:

To support the principle of least privilege and prevent potential privilege escalation, it is recommended that instances not be assigned the Compute Engine default service account with the scope 'Allow full access to all Cloud APIs'.
Logical Access
6.1.1 Ensure that MySQL database instance does not allow anyone to connect with administrative privileges.

Description:

It is recommended to set a password for the administrative user (root by default) to prevent unauthorized access to the SQL database instances. 

This recommendation is applicable only for MySQL Instances. PostgreSQL does not offer any setting for No Password from the cloud console. 
Network Monitoring
2.1 Ensure that Cloud Audit Logging is configured properly across all services and all users from a project

Description:

It is recommended that Cloud Audit Logging is configured to track all admin activities and read and write access to user data.
 
 Cloud Audit Logging maintains two audit logs for each project, folder, and organization: Admin Activity and Data Access.

  1. Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. Admin Activity audit logs are enabled for all services and cannot be configured.
  2. Data Access audit logs record API calls that create, modify, or read user-provided data. These are disabled by default and should be enabled. There are three kinds of Data Access audit log information:
     o Admin read: Records operations that read metadata or configuration information. (Writes of metadata and configuration information are recorded by Admin Activity audit logs, which cannot be disabled.)
     o Data read: Records operations that read user-provided data.
     o Data write: Records operations that write user-provided data.

It is recommended to have an effective default audit config configured in such a way that:

  1. logtype is set to DATA_READ (to log user activity) and DATA_WRITE (to log changes/tampering to user data).
  2. audit config is enabled for all the services supported by the Data Access audit logs feature.

Logs should be captured for all users, i.e., there should be no exempted users in any of the audit config sections. This ensures that overriding the audit config will not contradict the requirement.
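
The sketch below illustrates how the default audit config could be validated against these requirements, assuming google-api-python-client with Application Default Credentials; the project ID is a placeholder.

  # Sketch: verify the default audit config covers allServices, logs
  # DATA_READ and DATA_WRITE, and exempts no users.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  crm = discovery.build("cloudresourcemanager", "v1")
  policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

  compliant = False
  for cfg in policy.get("auditConfigs", []):
      if cfg.get("service") != "allServices":
          continue
      log_configs = cfg.get("auditLogConfigs", [])
      log_types = {c.get("logType") for c in log_configs}
      exempted = any(c.get("exemptedMembers") for c in log_configs)
      compliant = {"DATA_READ", "DATA_WRITE"} <= log_types and not exempted

  print("Audit config compliant" if compliant else "Audit config non-compliant")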
Network Monitoring
2.2 Ensure that sinks are configured for all Log entries

Description:

It is recommended to create a sink that will export copies of all the log entries. This can help aggregate logs from multiple projects and export them to a Security Information and Event Management (SIEM) system.
Network Monitoring
2.3 Ensure that retention policies on log buckets are configured using Bucket Lock

Description:

Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted. It is recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks. 
Network Monitoring
2.4 Ensure log metric filter and alerts exist for Project Ownership assignments/changes

Description:

In order to prevent unnecessary project ownership assignments to users/service-accounts and further misuse of projects and resources, all roles/Owner assignments should be monitored.
 Members (users/Service-Accounts) with a role assignment to the primitive role roles/Owner are project owners.
 The project owner has all the privileges on the project the role belongs to. These are summarized below:

  • All viewer permissions on all GCP Services within the project
  • Permissions for actions that modify the state of all GCP services within the project
  • Manage roles and permissions for a project and all resources within the project
  • Set up billing for a project

Granting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary. 
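
As a hedged sketch, a log-based metric for these events could be created through the Logging API as below; the filter follows the commonly cited CIS example and should be validated against your environment, and the project ID and metric name are placeholders. An alerting policy on the resulting metric would then be configured in Cloud Monitoring.

  # Sketch: create a log-based metric for project ownership changes.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  OWNERSHIP_FILTER = (
      '(protoPayload.serviceName="cloudresourcemanager.googleapis.com") '
      'AND (ProjectOwnership OR projectOwnerInvitee) '
      'OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE" '
      'AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner") '
      'OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD" '
      'AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")'
  )

  logging_api = discovery.build("logging", "v2")
  logging_api.projects().metrics().create(
      parent=f"projects/{PROJECT_ID}",
      body={
          "name": "project-ownership-changes",  # hypothetical metric name
          "description": "Counts grants/removals of roles/owner",
          "filter": OWNERSHIP_FILTER,
      },
  ).execute()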
Network Monitoring
2.5 Ensure log metric filter and alerts exist for Audit Configuration Changes

Description:

Google Cloud Platform (GCP) services write audit log entries to the Admin Activity and Data Access logs to help answer the questions of, "who did what, where, and when?" within GCP projects. 

Cloud audit log records include information such as the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by GCP services. Cloud audit logging provides a history of GCP API calls for an account, including API calls made via the console, SDKs, command-line tools, and other GCP services.
Network Monitoring
2.6 Ensure log metric filter and alerts exist for Custom Role changes

Description:

Google Cloud IAM provides predefined roles that give granular access to specific Google Cloud Platform resources and prevent unwanted access to other resources. However, to cater to organization-specific needs, Cloud IAM also provides the ability to create custom roles. Project owners and administrators with the Organization Role Administrator role or the IAM Role Administrator role can create custom roles. Monitoring role creation, deletion and updating activities will help in identifying any over-privileged role at early stages. 
Network Monitoring
2.7 Ensure log metric filter and alerts exist for VPC Network Firewall rule changes

Description:

It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) Network Firewall rule changes.

Monitoring for Create or Update Firewall rule events gives insight to network access changes and may reduce the time it takes to detect suspicious activity. 
Network Monitoring
2.8 Ensure log metric filter and alerts exist for VPC network route changes

Description:

It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network route changes.

Google Cloud Platform (GCP) routes define the paths network traffic takes from a VM instance to another destination. The other destination can be inside the organization VPC network (such as another VM) or outside of it. Every route consists of a destination and a next hop. Traffic whose destination IP is within the destination range is sent to the next hop for delivery. 

Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path. 
Network Monitoring
2.9 Ensure log metric filter and alerts exist for VPC network changes

Description:

It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network changes. 

It is possible to have more than one VPC within a project. In addition, it is also possible to create a peer connection between two VPCs enabling network traffic to route between VPCs. 

Monitoring changes to a VPC will help ensure that VPC traffic flow is not impacted.
Network Monitoring
2.10 Ensure log metric filter and alerts exist for Cloud Storage IAM permission changes

Description:

It is recommended that a metric filter and alarm be established for Cloud Storage Bucket IAM changes. 

Monitoring changes to cloud storage bucket permissions may reduce the time needed to detect and correct permissions on sensitive cloud storage buckets and objects inside the bucket.
Network Monitoring
2.11 Ensure log metric filter and alerts exist for SQL instance configuration changes

Description:

It is recommended that a metric filter and alarm be established for SQL instance configuration changes.
 
Monitoring changes to SQL instance configuration may reduce the time needed to detect and correct misconfigurations on the SQL server.
Below are a few of the configurable options which may impact the security posture of an SQL instance:

  • Enable auto backups and high availability: Misconfiguration may adversely impact business continuity, disaster recovery, and high availability
  • Authorize networks: Misconfiguration may increase exposure to untrusted networks 
Network Monitoring
3.8 Ensure VPC Flow Logs are enabled for every subnet in VPC Network

Description:

Flow Logs is a feature that enables users to capture information about the IP traffic going to and from network interfaces in the organization's VPC Subnets. Once a flow log is created, the user can view and retrieve its data in Stackdriver Logging. It is recommended that Flow Logs be enabled for every business-critical VPC subnet. 
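
A minimal sketch of this check across all regions follows, assuming google-api-python-client; the project ID is a placeholder, and the enableFlowLogs field is read from each subnet.

  # Sketch: report subnets without VPC Flow Logs enabled.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  compute = discovery.build("compute", "v1")
  agg = compute.subnetworks().aggregatedList(project=PROJECT_ID).execute()

  for region, scoped in agg.get("items", {}).items():
      for subnet in scoped.get("subnetworks", []):
          if not subnet.get("enableFlowLogs", False):
              print("Flow logs disabled:", region, subnet["name"])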
Configuration Management
3.1 Ensure the default network does not exist in a project

Description:

To prevent use of the default network, a project should not have a default network.
Configuration Management
3.2 Ensure legacy networks do not exist for a project

Description:

In order to prevent use of legacy networks, a project should not have a legacy network configured. 

Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. The network is global in scope and spans all cloud regions. Subnetworks cannot be created in a legacy network, and legacy networks cannot be converted to auto or custom mode subnet networks. Legacy networks can impact projects with high network traffic and are subject to a single point of contention or failure.
Configuration Management
3.3 Ensure that DNSSEC is enabled for Cloud DNS

Description:

Cloud Domain Name System (DNS) is a fast, reliable, and cost-effective domain name system that powers millions of domains on the internet. Domain Name System Security Extensions (DNSSEC) in Cloud DNS enable domain owners to take easy steps to protect their domains against DNS hijacking, man-in-the-middle, and other attacks.
Configuration Management
3.4 Ensure that RSASHA1 is not used for key-signing key in Cloud DNS DNSSEC

Description:

DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong. 
Configuration Management
3.5 Ensure that RSASHA1 is not used for zone-signing key in Cloud DNS DNSSEC

Description:

DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for zone signing should be a recommended one and it should be strong.
Configuration Management
3.6 Ensure that SSH access is restricted from the internet

Description:

GCP Firewall Rules are specific to a VPC Network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. 

Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to a VPC or VM instance over SSH on port 22 should be avoided.
Configuration Management
3.7 Ensure that RDP access is restricted from the internet

Description:

GCP Firewall Rules are specific to a VPC Network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. 

Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to a VPC or VM instance over RDP on port 3389 should be avoided.
Configuration Management
3.9 Ensure no HTTPS or SSL proxy load balancers permit SSL policies with weak cipher suites

Description:

Secure Sockets Layer (SSL) policies determine which Transport Layer Security (TLS) versions and features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or (c) a CUSTOM profile that does not support any of the following features:

  • TLS_RSA_WITH_AES_128_GCM_SHA256
  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_128_CBC_SHA
  • TLS_RSA_WITH_AES_256_CBC_SHA
  • TLS_RSA_WITH_3DES_EDE_CBC_SHA
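
A hedged sketch of evaluating SSL policies against these criteria follows, assuming google-api-python-client; the project ID is a placeholder.

  # Sketch: flag SSL policies that fail the criteria above.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  WEAK_CIPHERS = {
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
  }

  compute = discovery.build("compute", "v1")
  policies = compute.sslPolicies().list(project=PROJECT_ID).execute()

  for policy in policies.get("items", []):
      profile = policy.get("profile")
      min_tls = policy.get("minTlsVersion")
      compliant = (
          profile == "RESTRICTED"
          or (profile == "MODERN" and min_tls == "TLS_1_2")
          or (profile == "CUSTOM"
              and not WEAK_CIPHERS & set(policy.get("enabledFeatures", [])))
      )
      if not compliant:
          print("Weak SSL policy:", policy["name"], profile, min_tls)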
Configuration Management
4.3 Ensure 'Block Project-wide SSH keys' is enabled for VM instances

Description:

It is recommended to use instance-specific SSH key(s) instead of common/shared project-wide SSH key(s) to access instances.
 
Project-wide SSH keys are stored in Compute/Project metadata. Project-wide SSH keys can be used to log in to all instances within a project. Using project-wide SSH keys eases SSH key management, but if compromised, they pose a security risk that can impact all instances within a project. It is recommended to use instance-specific SSH keys, which can limit the attack surface if an SSH key is compromised.
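
The sketch below shows one way to surface instances that still accept project-wide SSH keys, by reading the block-project-ssh-keys metadata entry; google-api-python-client is assumed and the project ID is a hypothetical placeholder.

  # Sketch: list instances that do not block project-wide SSH keys.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  compute = discovery.build("compute", "v1")
  agg = compute.instances().aggregatedList(project=PROJECT_ID).execute()

  for zone, scoped in agg.get("items", {}).items():
      for inst in scoped.get("instances", []):
          items = {i["key"]: i.get("value", "")
                   for i in inst.get("metadata", {}).get("items", [])}
          if items.get("block-project-ssh-keys", "false").lower() != "true":
              print("Project-wide SSH keys allowed:", zone, inst["name"])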
Configuration Management
4.4 Ensure oslogin is enabled for a Project

Description:

Enabling OS Login binds SSH certificates to IAM users and facilitates effective SSH certificate management.
Configuration Management
4.5 Ensure 'Enable connecting to serial ports' is not enabled for VM Instance

Description:

Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support. 

If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore, interactive serial console support should be disabled.
Configuration Management
4.6 Ensure that IP forwarding is not enabled on Instances

Description:

A Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different from the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.

Forwarding of data packets should be disabled to prevent data loss or information disclosure. 
Configuration Management
4.7 Ensure VM disks for critical VMs are encrypted with Customer-Supplied Encryption Keys (CSEK)

Description:

Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. If you supply your own encryption keys, Google uses your key to protect the Google-generated keys used to encrypt and decrypt your data. By default, Google Compute Engine encrypts all data at rest. Compute Engine handles and manages this encryption for you without any additional action on your part. However, if you want to control and manage this encryption yourself, you can provide your own encryption keys.
Configuration Management
4.8 Ensure Compute instances are launched with Shielded VM enabled

Description:

To defend against advanced threats and ensure that the boot loader and firmware on your VMs are signed and untampered, it is recommended that Compute instances are launched with Shielded VM enabled. 
Configuration Management
5.1 Ensure that Cloud Storage bucket is not anonymously or publicly accessible

Description:

It is recommended that the IAM policy on a Cloud Storage bucket does not allow anonymous or public access.

Allowing anonymous or public access grants permissions to anyone to access bucket content. Such access might not be desired if you are storing any sensitive data. Hence, ensure that anonymous or public access to a bucket is not allowed. 
Configuration Management
5.2 Ensure that Cloud Storage buckets have uniform bucket-level access enabled

Description:

It is recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources. 

Cloud Storage offers two systems for granting users permission to access your buckets and objects: Cloud Identity and Access Management (Cloud IAM) and Access Control Lists (ACLs). These systems act in parallel - in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. Cloud IAM is used throughout Google Cloud and allows you to grant a variety of permissions at the bucket and project levels. ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis. 

In order to support a uniform permissioning system, Cloud Storage has uniform bucket-level access. Using this feature disables ACLs for all Cloud Storage resources: access to Cloud Storage resources is then granted exclusively through Cloud IAM. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either.
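
A minimal sketch of this check via the Cloud Storage JSON API follows, assuming google-api-python-client; the project ID is a placeholder.

  # Sketch: report buckets without uniform bucket-level access.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  storage = discovery.build("storage", "v1")
  buckets = storage.buckets().list(project=PROJECT_ID).execute()

  for bucket in buckets.get("items", []):
      ubla = bucket.get("iamConfiguration", {}).get(
          "uniformBucketLevelAccess", {})
      if not ubla.get("enabled", False):
          print("Uniform bucket-level access disabled:", bucket["name"])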
Configuration Management
6.1.2 Ensure 'skip_show_database' database flag for Cloud SQL MySQL instance is set to 'on'

Description:

It is recommended to set the skip_show_database database flag for a Cloud SQL MySQL instance to on.

The skip_show_database database flag prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege. This can improve security if you have concerns about users being able to see databases belonging to other users. Its effect depends on the SHOW DATABASES privilege: if the variable value is ON, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. If the value is OFF, SHOW DATABASES is permitted to all users, but displays the names of only those databases for which the user has the SHOW DATABASES or other privilege. This recommendation is applicable to MySQL database instances.
Configuration Management
6.1.3 Ensure that the 'local_infile' database flag for a Cloud SQL MySQL instance is set to 'off'

Description:

The local_infile flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the local_infile setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side. 

To explicitly cause the server to refuse LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start mysqld with local_infile disabled. local_infile can also be set at runtime. 

Due to security issues associated with the local_infile flag, it is recommended to disable it. This recommendation is applicable to MySQL database instances. 
Configuration Management
6.2.1 Ensure that the 'log_checkpoints' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling log_checkpoints causes checkpoints and restart points to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them. This parameter can only be set in the postgresql.conf file or on the server command line. This recommendation is applicable to PostgreSQL database instances. 
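
Database flags can be inspected through the Cloud SQL Admin API; the hedged sketch below checks log_checkpoints on PostgreSQL instances, and the same pattern applies to the other flag checks in this section. The project ID is a hypothetical placeholder.

  # Sketch: check the log_checkpoints flag on PostgreSQL instances.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  sqladmin = discovery.build("sqladmin", "v1beta4")
  instances = sqladmin.instances().list(project=PROJECT_ID).execute()

  for inst in instances.get("items", []):
      if not inst.get("databaseVersion", "").startswith("POSTGRES"):
          continue
      flags = {f["name"]: f.get("value")
               for f in inst["settings"].get("databaseFlags", [])}
      if flags.get("log_checkpoints") != "on":
          print("log_checkpoints not 'on':", inst["name"])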
Configuration Management
6.2.2 Ensure 'log_error_verbosity' database flag for Cloud SQL PostgreSQL instance is set to 'DEFAULT' or stricter

Description:

The log_error_verbosity flag controls the verbosity/details of messages logged. Valid values are:

  • TERSE
  • DEFAULT
  • VERBOSE

TERSE excludes the logging of DETAIL, HINT, QUERY, and CONTEXT error information.
VERBOSE output includes the SQLSTATE error code, source code file name, function name, and line number that generated the error.
Ensure the value is set to 'DEFAULT' or stricter.
Configuration Management
6.2.3 Ensure that the 'log_connections' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_connections setting causes each attempted connection to the server to be logged, along with successful completion of client authentication. This parameter cannot be changed after the session starts. 
Configuration Management
6.2.4 Ensure that the 'log_disconnections' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_disconnections setting logs the end of each session, including the session duration. 

PostgreSQL does not log session details such as duration and session end by default. Enabling the log_disconnections setting will create log entries at the end of each session, which can be useful in troubleshooting issues and determining any unusual activity across a time period. The log_disconnections and log_connections settings work hand in hand and, generally, the pair would be enabled/disabled together. This recommendation is applicable to PostgreSQL database instances.
Configuration Management
6.2.5 Ensure 'log_duration' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_duration setting causes the duration of each completed statement to be logged. This does not log the text of the query and thus behaves differently from the log_min_duration_statement flag. This parameter cannot be changed after the session starts.
Configuration Management
6.2.6 Ensure that the 'log_lock_waits' database flag for Cloud SQL PostgreSQL instance is set to 'on'

Description:

Enabling the log_lock_waits flag for a PostgreSQL instance creates a log for any session waits that take longer than the allotted deadlock_timeout time to acquire a lock. 
Configuration Management
6.2.7 Ensure 'log_statement' database flag for Cloud SQL PostgreSQL instance is set appropriately

Description:

The value of the log_statement flag determines which SQL statements are logged. Valid values are:

  • none
  • ddl
  • mod
  • all

The value ddl logs all data definition statements. The value mod logs all ddl statements, plus data-modifying statements.
The statements are logged after basic parsing is done and the statement type is determined; thus, statements with syntax errors are not logged. When using the extended query protocol, logging occurs after an Execute message is received, and values of the Bind parameters are included.
A value of 'ddl' is recommended unless otherwise directed by your organization's logging policy.
Configuration Management
6.2.8 Ensure 'log_hostname' database flag for Cloud SQL PostgreSQL instance is set appropriately

Description:

By default, PostgreSQL logs only the IP address of connecting hosts. The log_hostname flag controls the logging of hostnames in addition to the IP addresses. The performance impact depends on the configuration of the environment and the host name resolution setup. This parameter can only be set in the postgresql.conf file or on the server command line.
Configuration Management
6.2.9 Ensure 'log_parser_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The PostgreSQL parser is responsible for parsing and verifying the syntax of each query received by the server. If the syntax is correct, a parse tree is built; otherwise, an error is generated. The log_parser_stats flag controls the inclusion of parser performance statistics in the PostgreSQL logs for each query.
Configuration Management
6.2.10 Ensure 'log_planner_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The same SQL query can be executed in multiple ways and still produce the same results. The PostgreSQL planner/optimizer is responsible for creating an optimal execution plan for each query. The log_planner_stats flag controls the inclusion of PostgreSQL planner performance statistics in the PostgreSQL logs for each query.
Configuration Management
6.2.11 Ensure 'log_executor_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The PostgreSQL executor is responsible for executing the plan handed over by the PostgreSQL planner. The executor processes the plan recursively to extract the required set of rows. The log_executor_stats flag controls the inclusion of PostgreSQL executor performance statistics in the PostgreSQL logs for each query.
Configuration Management
6.2.12 Ensure 'log_statement_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'

Description:

The log_statement_stats flag controls the inclusion of end-to-end performance statistics of a SQL query in the PostgreSQL logs for each query. This cannot be enabled together with the other module statistics flags (log_parser_stats, log_planner_stats, log_executor_stats).
Configuration Management
6.2.13 Ensure that the 'log_min_messages' database flag for Cloud SQL PostgreSQL instance is set appropriately

Description:

The log_min_messages flag defines the minimum message severity level that is written to the server log. Valid values include DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each severity level includes all the levels that follow it; the stricter the setting, the fewer messages are logged.

Note: To effectively turn off logging, set this parameter to PANIC.

ERROR is considered the best-practice setting. Changes should only be made in accordance with the organization's logging policy.
Configuration Management
6.2.14 Ensure 'log_min_error_statement' database flag for Cloud SQL PostgreSQL instance is set to 'Error' or stricter

Description:

The log_min_error_statement flag defines the minimum message severity level at which the SQL statement that caused an error condition is logged. Messages at or above this severity are logged with the offending SQL statement. Valid values include DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each severity level includes all the levels that follow it. Ensure a value of ERROR or stricter is set.
Configuration Management
6.2.15 Ensure that the 'log_temp_files' database flag for Cloud SQL PostgreSQL instance is set to '0' (on)

Description:

PostgreSQL can create a temporary file for actions such as sorting, hashing, and temporary query results when these operations exceed work_mem. The log_temp_files flag controls the logging of temporary file names and sizes when the files are deleted. Configuring log_temp_files to 0 causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. A value of -1 disables temporary file information logging.
Configuration Management
6.2.16 Ensure that the 'log_min_duration_statement' database flag for Cloud SQL PostgreSQL instance is set to '-1' (disabled)

Description:

The log_min_duration_statement flag defines the minimum execution time of a statement, in milliseconds, above which the total duration of the statement is logged. Ensure that log_min_duration_statement is disabled, i.e., a value of -1 is set.
Configuration Management
6.3.1 Ensure 'external scripts enabled' database flag for Cloud SQL SQL Server instance is set to 'off'

Description:

It is recommended to set the external scripts enabled database flag for a Cloud SQL SQL Server instance to off.

The external scripts enabled option enables the execution of scripts with certain remote language extensions. This property is OFF by default. When Advanced Analytics Services is installed, setup can optionally set this property to true. Because the external scripts enabled feature allows scripts external to SQL, such as files located in an R library, to be executed, it could adversely affect the security of the system and should therefore be disabled. This recommendation is applicable to SQL Server database instances.
Configuration Management
6.3.2 Ensure that the 'cross db ownership chaining' database flag for Cloud SQL Server instance is set to 'off'

Description:

It is recommended to set the cross db ownership chaining database flag for a Cloud SQL SQL Server instance to off.

Use the cross db ownership chaining option to configure cross-database ownership chaining for an instance of Microsoft SQL Server. This server option allows you to control cross-database ownership chaining at the database level or to allow cross-database ownership chaining for all databases. Enabling cross db ownership chaining is not recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you are aware of the security implications of this setting. This recommendation is applicable to SQL Server database instances.
Configuration Management
6.3.3 Ensure 'user connections' database flag for Cloud SQL SQL Server instance is set as appropriate

Description:

It is recommended to set the user connections database flag for a Cloud SQL SQL Server instance according to an organization-defined value.

The user connections option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server. The actual number of user connections allowed also depends on the version of SQL Server that you are using, as well as the limits of your application(s) and hardware. SQL Server allows a maximum of 32,767 user connections. Because user connections is a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable. For example, if only 10 users are logged in, 10 user connection objects are allocated. In most cases, you do not have to change the value for this option. The default is 0, which means that the maximum (32,767) user connections are allowed. This recommendation is applicable to SQL Server database instances.
Configuration Management
6.3.4 Ensure 'user options' database flag for Cloud SQL SQL Server instance is not configured

Description:

It is recommended that the user options database flag for a Cloud SQL SQL Server instance not be configured.

The user options option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options option allows you to change the default values of the SET options (if the server's default settings are not appropriate). 

A user can override these defaults by using the SET statement. You can configure user options dynamically for new logins. After you change the setting of user options, new login sessions use the new setting; current login sessions are not affected. This recommendation is applicable to SQL Server database instances. 
Configuration Management
6.3.5 Ensure 'remote access' database flag for Cloud SQL SQL Server instance is set to 'off'

Description:

It is recommended to set the remote access database flag for a Cloud SQL SQL Server instance to off.

The remote access option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. The default value for this option is 1, which grants permission to run local stored procedures from remote servers or remote stored procedures from the local server. To prevent local stored procedures from being run from a remote server, or remote stored procedures from being run on the local server, this option must be disabled. 'Remote access' functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by off-loading query processing to a target; hence it should be disabled. This recommendation is applicable to SQL Server database instances.
Configuration Management
6.3.6 Ensure '3625 (trace flag)' database flag for Cloud SQL SQL Server instance is set to 'off'

Description:

It is recommended to set the 3625 (trace flag) database flag for a Cloud SQL SQL Server instance to off.

Trace flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems, but they may also be recommended by Microsoft Support to address behavior that is negatively impacting a specific workload. All documented trace flags and those recommended by Microsoft Support are fully supported in a production environment when used as directed. Trace flag 3625 limits the amount of information returned to users who are not members of the sysadmin fixed server role by masking the parameters of some error messages using '******'. This can help prevent disclosure of sensitive information; hence it is recommended to disable this flag. This recommendation is applicable to SQL Server database instances.
Configuration Management
6.3.7 Ensure that the 'contained database authentication' database flag for Cloud SQL server instance is set to 'off'

Description:

It is recommended to set the contained database authentication database flag for a Cloud SQL SQL Server instance to off.

A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed. Users can connect to the database without authenticating a login at the Database Engine level. Isolating the database from the Database Engine makes it possible to easily move the database to another instance of SQL Server. Contained databases have some unique threats that should be understood and mitigated by SQL Server Database Engine administrators. Most of the threats are related to the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level; hence it is recommended to disable this flag. This recommendation is applicable to SQL Server database instances.
Configuration Management
6.4 Ensure that Cloud SQL database instance requires all incoming connections to use SSL

Description:

It is recommended to enforce all incoming connections to the SQL database instance to use SSL.

If SQL database connections are successfully intercepted (man-in-the-middle), they can reveal sensitive data such as credentials, database queries, and query outputs. For security, it is recommended to always use SSL encryption when connecting to your instance. This recommendation is applicable for PostgreSQL, MySQL generation 1, MySQL generation 2, and SQL Server 2017 instances.
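
A minimal sketch of this check, reading requireSsl from each instance's IP configuration via the Cloud SQL Admin API; google-api-python-client is assumed and the project ID is a hypothetical placeholder.

  # Sketch: flag Cloud SQL instances that do not require SSL.
  from googleapiclient import discovery

  PROJECT_ID = "my-project"  # hypothetical placeholder

  sqladmin = discovery.build("sqladmin", "v1beta4")
  instances = sqladmin.instances().list(project=PROJECT_ID).execute()

  for inst in instances.get("items", []):
      ip_config = inst["settings"].get("ipConfiguration", {})
      if not ip_config.get("requireSsl", False):
          print("SSL not enforced:", inst["name"])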
Configuration Management
6.5 Database Server should accept connections only from trusted Network(s)/IP(s) and restrict access from the world

Description:

The database server should accept connections only from trusted networks/IP addresses and restrict access from the world.
Configuration Management
6.6 Ensure that Cloud SQL database instances do not have public IPs

Description:

It is recommended to configure Second Generation SQL instances to use private IPs instead of public IPs.

To lower the organization's attack surface, Cloud SQL databases should not have public IPs. Private IPs provide improved network security and lower latency for your application. 
Configuration Management
7.1 Ensure that BigQuery datasets are not anonymously or publicly accessible

Description:

It is recommended that the IAM policy on BigQuery datasets does not allow anonymous and/or public access.

Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. Such access might not be desirable if sensitive data is being stored in the dataset. Therefore, ensure that anonymous and/or public access to a dataset is not allowed.
Backup and Restoration
6.7 Ensure that Cloud SQL database instances are configured with automated backups

Description:

It is recommended to have all SQL database instances set to enable automated backups.

Backups provide a way to restore a Cloud SQL instance to recover lost data or recover from a problem with that instance. Automated backups need to be set for any instance that contains data that should be protected from loss or damage. This recommendation is applicable for SQL Server, PostgreSQL, MySQL generation 1, and MySQL generation 2 instances.
Data Security
7.2 Ensure that all BigQuery Tables are encrypted with Customer managed encryption key (CMEK)

Description:

By default, BigQuery encrypts data at rest by employing envelope encryption using Google-managed cryptographic keys. The data is encrypted using data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys. This is seamless and does not require any additional input from the user. However, if you want greater control, customer-managed encryption keys (CMEK) can be used as the encryption key management solution for BigQuery datasets. If CMEK is used, the CMEK encrypts the data encryption keys instead of Google-managed encryption keys.
Data Security
7.3 Ensure that a Default Customer-managed encryption key (CMEK) is specified for all BigQuery Data Sets

Description:

By default, BigQuery encrypts data at rest by employing envelope encryption using Google-managed cryptographic keys. The data is encrypted using data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys. This is seamless and does not require any additional input from the user. However, if you want greater control, customer-managed encryption keys (CMEK) can be used as the encryption key management solution for BigQuery datasets.