ram-config-template's Issues

Rule Guidelines for message and metadata

There are some inconsistencies between rules on the violation message and metadata, and neither seems very useful.
The way I see it, the message is for the human, so it should be intelligible: what is happening on which asset (name and type).
The metadata is more likely used for automation, so it should contain a self-link to the asset (most likely the asset.name value) and the configuration extract that has the violation (if available).

So we should have some guidelines for both message and metadata, such as:
message : "< Asset type > < Asset Display Name (not asset.name when possible) > has < violation description > (< incriminated value if possible >)."
metadata : {"resource":"< asset.name >", < relevant config extract if possible >}

That would give:
message : GCE Instance ddp-prt-p-as1 has an external IP (35.195.144.192).
metadata : {
"resource" : "//compute.googleapis.com/projects/adeo-data-fabric-portal/zones/europe-west1-b/instances/ddp-prt-p-as1",
"access_config":[{"name":"External NAT","natIP":"35.195.144.192","networkTier":"PREMIUM","type":"ONE_TO_ONE_NAT"}]}

For now, lots of rules have messages like:
//compute.googleapis.com/projects/adeo-data-fabric-portal/zones/europe-west1-b/instances/ddp-prt-p-as1 is not allowed to have an external IP.
And the majority of metadata consists of only the "resource" part (asset.name).

As violation owner, I want guidance to fix configurations so that compliance is improved

Which level of granularity

By constraint

Which content structure

Three sections

  • Background
  • Fix
  • References

Which format

Native Markdown in README.md files

Who should see the documentation

Any violation owner or resolver

Where to host documentation

Each README.md file is hosted in the same folder as the rule or the constraint it relates to.
In case all violation owners / resolvers do not have access to the RAM config git repo, use a script to update a segregated repo or shared drive.

How to craft links to documentation

Example for Google Cloud Source:
https://source.cloud.google.com/<projectID>/<repoName>/+/master:services/monitor/instances/<ruleFolder>/README.md

Example for GitHub:
https://github.com/<account>/<repoName>/blob/master/services/monitor/instances/<ruleFolder>/constraints/<constraintFolder>/README.md

Remove assets without rules from the solution.yaml default resource list, as they have no compliance rules and may weigh heavily on Cloud Function execution cost

Issue

The dataproc.googleapis.com/Job asset JSON does not contain any valuable data to build a configuration compliance rule (only the location, which may already be controlled via the dataproc.googleapis.com/Cluster location).

The associated real-time feed throughput may be in the top 3 when Dataproc is used massively, leading to wasted Cloud Function GB-seconds, GHz-seconds, and invocations.

No rules exist for:
bigtableadmin.googleapis.com/Instance
compute.googleapis.com/ForwardingRule
compute.googleapis.com/GlobalForwardingRule
compute.googleapis.com/Project
compute.googleapis.com/Snapshot
compute.googleapis.com/SslCertificate
dataproc.googleapis.com/Cluster
iam.googleapis.com/ServiceAccount
serviceusage.googleapis.com/Service
spanner.googleapis.com/Instance

Cause

These asset types were part of the initial RAM POC and have not been challenged yet.

Fix

Just remove them from the solution.yaml default resource list.

Bug: rule clouddns_dnssec misses DNS zones without DNSSEC

Example of DNS zone with DNSSEC ON:

            "data": {
                "creationTime": "2020-11-10T05:19:50.115Z",
                "description": "",
                "dnsName": "blabla.",
                "dnssecConfig": {
                    "defaultKeySpecs": [
                        {
                            "algorithm": "RSASHA256",
                            "keyLength": 2048,
                            "keyType": "KEY_SIGNING"
                        },
                        {
                            "algorithm": "RSASHA256",
                            "keyLength": 1024,
                            "keyType": "ZONE_SIGNING"
                        }
                    ],
                    "nonExistence": "NSEC3",
                    "state": "ON"
                },
                "id": "7793153442041723502",
                "name": "zone02",
                "nameServers": [
                    "ns-cloud-b1.googledomains.com.",
                    "ns-cloud-b2.googledomains.com.",
                    "ns-cloud-b3.googledomains.com.",
                    "ns-cloud-b4.googledomains.com."
                ],
                "visibility": "PUBLIC"
            }

Example of a DNS zone where DNSSEC is not configured:

            "data": {
                "creationTime": "2020-11-10T05:13:02.487Z",
                "description": "",
                "dnsName": "01.ramtests.brunore.org.",
                "id": "825172713930213166",
                "name": "zone01",
                "nameServers": [
                    "ns-cloud-a1.googledomains.com.",
                    "ns-cloud-a2.googledomains.com.",
                    "ns-cloud-a3.googledomains.com.",
                    "ns-cloud-a4.googledomains.com."
                ],
                "visibility": "PUBLIC"
            }

So, the full dnssecConfig object is missing.

Currently the rule looks directly for the state value, without checking whether its parent object exists:

    asset.resource.data.dnssecConfig.state != "ON"

This leads to assessing all DNS zones without DNSSEC as compliant, which is a bug.

Proposed fix: use lib.get_default, first to return an empty object when dnssecConfig is missing, then to return the string OFF when state is missing.

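    # Default to an empty object when dnssecConfig is absent, then to "OFF" when state is
    # absent, so zones without DNSSEC are reported as violations instead of being skipped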
    dnssecConfig := lib.get_default(asset.resource.data, "dnssecConfig", {})
    state := lib.get_default(dnssecConfig, "state", "OFF")
    trace(sprintf("state: %v", [state]))
    state != "ON"

Testing locally now provides the expected results:

> package validator.gcp.lib
> notes
> audit
query:1                                            Enter data.validator.gcp.lib.audit = _
dnssec/modules/audit.rego:22                       | Enter data.validator.gcp.lib.audit
dnssec/modules/audit.rego:25                       | | Note "asset name: //dns.googleapis.com/projects/blabla"
dnssec/modules/audit.rego:31                       | | Note "constraint kind: GCPDNSSECConstraintV1"
dnssec/modules/audit.rego:46                       | | Note "asset.ancestry_path: blabla"
dnssec/modules/audit.rego:47                       | | Note "targets: [\"organization/\"]"
dnssec/modules/audit.rego:48                       | | Note "is in scope:%!(EXTRA string=true)"
dnssec/modules/audit.rego:60                       | | Note "exclusions: null"
dnssec/modules/audit.rego:61                       | | Note "Excluded if count exclusion_match > 0: 0"
dnssec/modules/monitor_clouddns_dnssec.rego:19     | | Enter data.templates.gcp.GCPDNSSECConstraintV1.deny
dnssec/modules/monitor_clouddns_dnssec.rego:28     | | | Note "state: ON"
dnssec/modules/audit.rego:25                       | | Note "asset name: //dns.googleapis.com/projects/blabla"
dnssec/modules/audit.rego:31                       | | Note "constraint kind: GCPDNSSECConstraintV1"
dnssec/modules/audit.rego:46                       | | Note "asset.ancestry_path: blabla"
dnssec/modules/audit.rego:47                       | | Note "targets: [\"organization/\"]"
dnssec/modules/audit.rego:48                       | | Note "is in scope:%!(EXTRA string=true)"
dnssec/modules/audit.rego:60                       | | Note "exclusions: null"
dnssec/modules/audit.rego:61                       | | Note "Excluded if count exclusion_match > 0: 0"
dnssec/modules/monitor_clouddns_dnssec.rego:19     | | Enter data.templates.gcp.GCPDNSSECConstraintV1.deny
dnssec/modules/monitor_clouddns_dnssec.rego:28     | | | Note "state: OFF"

Incompatible location for gae.region and gcf.region in sample solution.yaml

Describe the bug

Had to set the location parameters for Google App Engine (gae.region) and Google Cloud Functions (gcf.region) in solution.yaml to the same value.

Indeed, the App Engine instance was created in europe-west3 (even though the location parameter was europe-west), whereas the Cloud Function location was set to europe-west1.

To Reproduce

Setup solution.yaml

  gae:
    region: europe-west
  gcf:
    region: europe-west3

Deploying pipelines with the RAM CLI fails for some pipelines with the following error:

Step #3 - "deploy instance dumpinventory_org675623659646_cloudresourcemanager_Folder": 2021/05/11 07:23:51 CloudSchedulerClient.GetJob rpc error: code = InvalidArgument desc = Location must equal europe-west3 because the App Engine app that is associated with this project is located in europe-west3
Finished Step #3 - "deploy instance dumpinventory_org675623659646_cloudresourcemanager_Folder"
ERROR
ERROR: build step 3 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1

Expected behavior

The RAM CLI should check whether the actual App Engine location is compatible with the Cloud Function location set in solution.yaml, and either emit a warning or an error, or merge the two location parameters.

Bug: gce_compute_zone misses some non-compliance and addresses two different types of assets

Issue one:
The rule is coded to deal with both instances and disks:

    {asset.asset_type} == {asset.asset_type} & {"compute.googleapis.com/Instance", "compute.googleapis.com/Disk"}

By design in RAM, rules are deployed as Cloud Functions triggered by a Pub/Sub topic that carries only one type of asset, so this rule needs to be split in two, one for instances and one for disks:

  • gce_instance_location
  • gce_disk_location

Issue two:
The code, based on zone string manipulation, misses some non-compliances. It should be replaced by simpler code (like the one used for the BigQuery dataset location) based on the location field instead of the zone field.

Warning: the location field is in asset.resource, not in asset.resource.data. It contains the name of the zone, while the constraints list regions.

So, the asset region needs to be derived from the asset zone, using:

    asset_zone := asset.resource.location
    zone_parts := split(asset_zone, "-")
    region_parts := array.slice(zone_parts, 0, 2)
    asset_location := concat("-", region_parts)
    trace(sprintf("asset_location: %v", [asset_location]))

cloudsql_maintenance rule misses non-compliant instances

The current Rego rule looks for instances with a defined hour setting, but instances with no defined maintenance window now appear as:

                "maintenanceWindow": {
                    "day": 0,
                    "hour": 0,
                    "kind": "sql#maintenanceWindow"
                },

So day is a better setting to check than hour (a sketch follows the lists below), as the day index is:

  • 0 maintenance window not defined
  • 1 Monday
  • 2 Tuesday
  • 3 Wednesday
  • 4 Thursday
  • 5 Friday
  • 6 Saturday
  • 7 Sunday

While the hour index is:

  • 0 0am to 1am UTC
  • up to 23
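
For illustration, a day-based check could look like the sketch below. The deny rule shape and the exact path to maintenanceWindow in the CAI export (assumed here to sit under settings, as in the SQL Admin API) should be verified against a real asset:

    deny[{"msg": message, "details": metadata}] {
        asset := input.asset
        asset.asset_type == "sqladmin.googleapis.com/Instance"

        # Path assumed: the SQL Admin API nests maintenanceWindow under settings
        settings := lib.get_default(asset.resource.data, "settings", {})
        maintenance_window := lib.get_default(settings, "maintenanceWindow", {})

        # day == 0 means no maintenance window is defined
        day := lib.get_default(maintenance_window, "day", 0)
        day == 0

        message := sprintf("Cloud SQL instance %v has no maintenance window defined.", [asset.resource.data.name])
        metadata := {"resource": asset.name, "maintenanceWindow": maintenance_window}
    }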
