

Introduction

Prancer is a pre-deployment and post-deployment multi-cloud security platform for your Infrastructure as Code (IaC) and live cloud resources. It shifts security to the left and provides end-to-end security scanning based on the Policy as Code concept. DevOps engineers can use it for static code analysis on IaC to find security drifts and maintain their cloud security posture with continuous compliance features. You can get more information from our website at https://www.prancer.io

Prerequisites

  • Linux-based OS
  • Python 3.6.8, 3.8, or 3.9
  • MongoDB (optional)

Note: MongoDB is not a hard requirement to run the Prancer Basic Platform. It is possible to run the framework and write all the outputs to the file system. To learn more, review the Prancer documentation.

Running Prancer from the code

You can run the Prancer Basic Platform from your file system or from a database. There are three modes available:

  • --db NONE: all files are expected to be on the file system, and the results are also written to the file system.
  • --db SNAPSHOT: all configuration files and output files are written to the file system, but the resource snapshots are kept in the database.
  • --db FULL: all configuration files and snapshots are stored in the database.
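As a sketch, the three modes differ only in the --db flag passed to the same entry point. The helper below is illustrative only (validator_command is not part of the framework); it just builds those invocations:

```python
# Illustrative helper (not part of the framework): build the validator
# invocation for each --db storage mode described above.

def validator_command(container: str, db_mode: str) -> str:
    """Return the CLI invocation for a container and a --db mode."""
    allowed = ("NONE", "SNAPSHOT", "FULL")
    if db_mode not in allowed:
        raise ValueError(f"--db must be one of {allowed}")
    return f"python3 utilities/validator.py {container} --db {db_mode}"

# Print the three invocations for the sample gitScenario container.
for mode in ("NONE", "SNAPSHOT", "FULL"):
    print(validator_command("gitScenario", mode))
```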

Running Prancer with no database

  • Clone the Prancer repository at https://github.com/prancer-io/cloud-validation-framework.git

  • cd cloud-validation-framework

  • Install the dependent packages listed in requirements.txt: pip3 install -r requirements.txt

  • Export the following variables:

    export BASEDIR=`pwd`
    export PYTHONPATH=$BASEDIR/src
    export FRAMEWORKDIR=$BASEDIR
    
  • Run the sample scenario from the filesystem: python3 utilities/validator.py gitScenario --db NONE

  • Review the result: cat realm/validation/gitScenario/output-test.json
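If you want to post-process the result file programmatically, here is a minimal sketch. The schema is assumed for illustration (a "results" list whose items carry a "result" field); the real output-test.json may differ:

```python
# Hypothetical summary of a validator output document. The schema below is
# assumed for illustration; check your actual output-test.json structure.
sample = {
    "contentVersion": "1.0.0.0",
    "results": [
        {"snapshotId": "1", "testId": "1", "result": "passed"},
        {"snapshotId": "1", "testId": "2", "result": "failed"},
    ],
}

def summarize(output: dict) -> dict:
    """Count passed/failed entries in a validator output document."""
    counts = {"passed": 0, "failed": 0}
    for item in output.get("results", []):
        verdict = item.get("result", "").lower()
        if verdict in counts:
            counts[verdict] += 1
    return counts

print(summarize(sample))  # {'passed': 1, 'failed': 1}
```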

For more scenarios, visit our Hello World application at https://github.com/prancer-io/prancer-hello-world

Running Prancer with no database in a virtual environment

git clone https://github.com/prancer-io/cloud-validation-framework.git

cd cloud-validation-framework

Make sure the Python virtual environment module is installed and set up (https://docs.python.org/3/tutorial/venv.html).

python3 -m venv tutorial-env

source tutorial-env/bin/activate

pip install -r requirements.txt

Export the following variables:

export BASEDIR=`pwd`
export PYTHONPATH=$BASEDIR/src
export FRAMEWORKDIR=$BASEDIR

Run the sample scenario from the filesystem: python utilities/validator.py gitScenario --db NONE

Review the result: cat realm/validation/gitScenario/output-test.json

How to run the crawler

Whenever you have master snapshot configuration files available, you need to run the crawler first. The crawler finds individual objects from the target provider based on the guidance in the master snapshot configuration file, and generates snapshot configuration files that contain references to those individual objects. You can crawl a target environment by adding --crawler to your command.

python utilities/validator.py gitScenario --db NONE --crawler

To understand more about crawling, check our documentation at https://docs.prancer.io/crawler/crawler-definition/
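Conceptually, the crawler expands one master snapshot entry into numbered snapshot entries, one per discovered object, as can be seen in the generated master-snapshot_gen.json shown later in this document. A simplified sketch of that expansion (expand_master_snapshot is illustrative, not the framework's actual code; the discovery step itself is elided):

```python
# Simplified sketch: turn one master snapshot ID plus a list of discovered
# path groups into numbered snapshot nodes, mirroring the field names seen
# in a generated master-snapshot_gen.json.

def expand_master_snapshot(master_id: str, discovered_paths: list) -> list:
    """Generate snapshot nodes with sequential IDs from crawled paths."""
    nodes = []
    for index, paths in enumerate(discovered_paths, start=1):
        nodes.append({
            "masterSnapshotId": master_id,
            "snapshotId": f"{master_id}{index}",
            "paths": paths,
            "status": "active",
            "validate": True,
        })
    return nodes

nodes = expand_master_snapshot(
    "TRF_TEMPLATE_SNAPSHOT",
    [["/aws/sqs/main.tf"], ["/azure/aks/main.tf"]],
)
print([n["snapshotId"] for n in nodes])
# ['TRF_TEMPLATE_SNAPSHOT1', 'TRF_TEMPLATE_SNAPSHOT2']
```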

How to upload files to the database and run Prancer from the database

First, make sure you have MongoDB up and running. You can refer to this documentation from MongoDB: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/

Edit config.ini and add these lines if they are not already there:

  [MONGODB]
  dburl = mongodb://localhost:27017/validator
  dbname = validator
  COLLECTION = resources
  SNAPSHOT = snapshots
  TEST = tests
  STRUCTURE = structures
  MASTERSNAPSHOT = mastersnapshots
  MASTERTEST = mastertests
  OUTPUT = outputs
  NOTIFICATIONS = notifications
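The [MONGODB] section above is standard INI syntax and can be read with Python's configparser. A minimal sketch that parses the same content from a string (the real framework reads config.ini from FRAMEWORKDIR):

```python
import configparser

# Parse the same [MONGODB] section shown above from an in-memory string;
# configparser treats option names case-insensitively by default.
CONFIG_TEXT = """
[MONGODB]
dburl = mongodb://localhost:27017/validator
dbname = validator
COLLECTION = resources
SNAPSHOT = snapshots
TEST = tests
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

dburl = parser.get("MONGODB", "dburl")
dbname = parser.get("MONGODB", "dbname")
print(dburl, dbname)
```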

You can use the populate_json.py file in the utilities folder to upload files from the file system to MongoDB:

Upload connectors

python utilities/populate_json.py scenario-pass --file connector.json

Check the DB's structures collection to make sure the connector is uploaded successfully.

Upload snapshots

python utilities/populate_json.py scenario-pass --file snapshot.json

Check the DB's snapshots collection to make sure the snapshot is uploaded successfully.

Upload tests

python utilities/populate_json.py scenario-pass --file test.json

Check the DB's tests collection to make sure the test is uploaded successfully.

Note: You can do the same for the fail scenario.

Now you can run the framework from the database: python utilities/validator.py scenario-fail --db FULL

Check the DB's outputs collection in MongoDB to see the results.

What are the environment variables?

We have three environment variables that need to be set before running the code.

export BASEDIR=`pwd`
export PYTHONPATH=$BASEDIR/src
export FRAMEWORKDIR=$BASEDIR

BASEDIR is the base directory for the codebase. It is the folder you have cloned your git repository to.

PYTHONPATH is where the code resides. It is in the src folder inside the cloned directory.

FRAMEWORKDIR is where the configuration files are available. We expect config.ini to be available in this directory; other folders are referenced in config.ini.
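A rough sketch of that lookup behavior: prefer FRAMEWORKDIR and fall back to the current directory when it is not set (the framework logs a warning in that case). The resolve_framework_dir function is illustrative, not the framework's actual code:

```python
import os

# Illustrative resolution of the configuration directory: use FRAMEWORKDIR
# if exported, otherwise fall back to the current working directory.

def resolve_framework_dir(environ=os.environ) -> str:
    framework_dir = environ.get("FRAMEWORKDIR")
    if framework_dir:
        return framework_dir
    return os.getcwd()

print(resolve_framework_dir({"FRAMEWORKDIR": "/opt/prancer"}))  # /opt/prancer
```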

Debugging with VSCode

Make sure these files exist under the .vscode folder:

  • launch.json
  • settings.json

The content of these files is as follows:

launch.json

{
    "version": "0.2.0",
    "configurations": [
        {
            "env": {
                "BASEDIR": "${workspaceFolder}",
                "PYTHONPATH": "${workspaceFolder}/src",
                "FRAMEWORKDIR": "${workspaceFolder}"
            },
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "python": "${command:python.interpreterPath}",
            "args": [
                "gitScenario"
            ]
        }
    ]
}

In the args attribute, put the name of the collection you want to run the code for. For example, we have a gitScenario collection you can use for testing purposes.

settings.json

{
    "python.pythonPath": "testenv/bin/python"
}

In python.pythonPath, you put the path to your Python interpreter. In the above example, we are using a Python virtual environment named testenv.

Note: These files are already available in our repository, and you can modify them based on your requirements.

This document explains how to debug Python applications in VSCode: https://code.visualstudio.com/docs/python/debugging

Further documentation

To learn more about the Prancer Platform, review our documentation site

cloud-validation-framework's People

Contributors

abinayamahalingam, ajeybk, crazycodr, danielalejandrohc, devtrap, dharmeshswan, farchide, harshilswan, ishan-pansuriya, jaiminswan, kamyab96, markupdesignltd, mehros, mr-maxo, navendu321, patidarpraveen244, prancergithub, raghumannn, saberi-prancer, saberima, shahinshirvani, vatsalgit5118


cloud-validation-framework's Issues

Log "ascii" codec error

An "'ascii' codec can't encode characters in position 588-590: ordinal not in range(128)" exception occurs while logging the compliance description.
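A minimal reproduction of that error class, showing why encoding log text as UTF-8 instead of ASCII avoids the exception (the description string is an invented example):

```python
# A compliance description containing a non-ASCII character (an en dash).
description = "Ensure encryption \u2013 at rest and in transit"

# Encoding with the ascii codec raises UnicodeEncodeError...
try:
    description.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False

# ...while UTF-8 handles the same text without issue.
utf8_bytes = description.encode("utf-8")
print(ascii_ok, len(utf8_bytes) > 0)  # False True
```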

Prancer CyberArk integration

Currently, the Prancer validation framework can connect to Azure Key Vault and read secrets from there.
Many companies use CyberArk (https://www.cyberark.com/) as their secret management tool.
The Prancer validation framework should be able to connect to CyberArk and read secrets.
The configuration values should be placed in the config.ini file.

Result SKIPPED should be available only in DEBUG mode

When running an IaC scan with the Prancer platform, every template is scanned with all available tests. If the type of the resource differs from the test, it shows RESULT: SKIPPED on the screen.
This information should only be available in DEBUG mode; at other log levels, we should hide it from the screen.
Only the test cases with Passed and Failed results should be visible on the screen.

Google connector structure change

Currently, the client_x509_cert_url info is in the header of the connector. This value changes for each service account, so we should move it to the service account section:

{
        "organization": "company1",
        "type": "google",
        "fileType": "structure",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "<client_x509_cert_url>",
        "projects": [
            {
                "project-name": "<project-name>",
                "project-id": "<project-id>",
                "users": [
                    {
                        "name": "<service-account-name>",
                        "type": "service_account",
                        "private_key_id": "<private-key-id>",
                        "private_key": "<private-key>",
                        "client_email": "<client-email>",
                        "client_id": "<client-id>"
                    }
                ]
            }
        ]
    }
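A possible restructured connector, with client_x509_cert_url moved under each service account entry as proposed above (illustrative; the final schema may differ):

```json
{
    "organization": "company1",
    "type": "google",
    "fileType": "structure",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "projects": [
        {
            "project-name": "<project-name>",
            "project-id": "<project-id>",
            "users": [
                {
                    "name": "<service-account-name>",
                    "type": "service_account",
                    "private_key_id": "<private-key-id>",
                    "private_key": "<private-key>",
                    "client_email": "<client-email>",
                    "client_id": "<client-id>",
                    "client_x509_cert_url": "<client_x509_cert_url>"
                }
            ]
        }
    ]
}
```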

Git connector private SSH attributes

When authenticating to access a git repository, gitProvider and sshHost are not the same:

{
    "fileType": "structure",
    "companyName": "Organization name",
    "gitProvider": "<url-to-repository>",
    "branchName": "<branch>",
    "sshKeyfile": "<path-to-private-ssh-key-file>",
    "sshUser": "<username-of-repo>",
    "sshHost": "<hostname-of-repo>",
    "private": true
}

For example, in an Azure DevOps repo:
"gitProvider": "[email protected]:v3/orgname/whitekite/whitekite"
and
"sshHost": "https://orgname.visualstudio.com/whitekite/_git/whitekite"

We need to update this page:
http://docs.prancer.io/connectors/git/

--crawler is fetching unnecessary AWS and GCP snapshots along with Azure in master-snapshot_gen.json for scenario-terraform-azure

I was trying to crawl the available snapshots for Terraform Azure and ran prancer --crawler scenario-terraform-azure from the prancer-hello-world app.

In master-snapshot_gen.json, I saw that it crawls snapshots from AWS and GCP along with Azure.

These are unnecessary entries that are not needed for testing the Azure test cases.

Below is the content of master-snapshot_gen.json generated by prancer --crawler scenario-terraform-azure:

{
  "$schema": "",
  "contentVersion": "1.0.0.0",
  "fileType": "snapshot",
  "snapshots": [
    {
      "type": "filesystem",
      "connectorUser": "USER_1",
      "nodes": [
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/sqs/terraform.tfvars",
            "/aws/sqs/vars.tf",
            "/aws/sqs/provider.tf",
            "/aws/sqs/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT1",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/security_group/terraform.tfvars",
            "/aws/security_group/vars.tf",
            "/aws/security_group/provider.tf",
            "/aws/security_group/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT2",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/rds/terraform.tfvars",
            "/aws/rds/vars.tf",
            "/aws/rds/provider.tf",
            "/aws/rds/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT3",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/efs/terraform.tfvars",
            "/aws/efs/vars.tf",
            "/aws/efs/provider.tf",
            "/aws/efs/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT4",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/sns/terraform.tfvars",
            "/aws/sns/vars.tf",
            "/aws/sns/provider.tf",
            "/aws/sns/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT5",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/cloudtrail/terraform.tfvars",
            "/aws/cloudtrail/vars.tf",
            "/aws/cloudtrail/provider.tf",
            "/aws/cloudtrail/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT6",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/elasticsearch/terraform.tfvars",
            "/aws/elasticsearch/vars.tf",
            "/aws/elasticsearch/provider.tf",
            "/aws/elasticsearch/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT7",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/s3/terraform.tfvars",
            "/aws/s3/vars.tf",
            "/aws/s3/provider.tf",
            "/aws/s3/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT8",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/s3/terraform.tfvars",
            "/aws/s3/vars.tf",
            "/aws/s3/provider.tf",
            "/aws/s3/s3.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT9",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/lambda/terraform.tfvars",
            "/aws/lambda/vars.tf",
            "/aws/lambda/provider.tf",
            "/aws/lambda/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT10",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/redshift/terraform.tfvars",
            "/aws/redshift/vars.tf",
            "/aws/redshift/provider.tf",
            "/aws/redshift/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT11",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/ecs/terraform.tfvars",
            "/aws/ecs/vars.tf",
            "/aws/ecs/provider.tf",
            "/aws/ecs/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT12",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/cloudfront/terraform.tfvars",
            "/aws/cloudfront/vars.tf",
            "/aws/cloudfront/provider.tf",
            "/aws/cloudfront/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT13",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/dynamodb/terraform.tfvars",
            "/aws/dynamodb/vars.tf",
            "/aws/dynamodb/provider.tf",
            "/aws/dynamodb/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT14",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/elb/terraform.tfvars",
            "/aws/elb/vars.tf",
            "/aws/elb/provider.tf",
            "/aws/elb/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT15",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/vpc/terraform.tfvars",
            "/aws/vpc/vars.tf",
            "/aws/vpc/provider.tf",
            "/aws/vpc/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT16",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/acm/terraform.tfvars",
            "/aws/acm/vars.tf",
            "/aws/acm/provider.tf",
            "/aws/acm/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT17",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/sg/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT18",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/sg/extrasg.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT19",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/ec2/terraform.tfvars",
            "/aws/ec2/vars.tf",
            "/aws/ec2/provider.tf",
            "/aws/ec2/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT20",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/aws/eks/terraform.tfvars",
            "/aws/eks/vars.tf",
            "/aws/eks/provider.tf",
            "/aws/eks/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT21",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/aks/terraform.tfvars",
            "/azure/aks/vars.tf",
            "/azure/aks/provider.tf",
            "/azure/aks/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT22",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/applicationgateways/terraform.tfvars",
            "/azure/applicationgateways/vars.tf",
            "/azure/applicationgateways/provider.tf",
            "/azure/applicationgateways/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT23",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/diagnosticsettings/terraform.tfvars",
            "/azure/diagnosticsettings/vars.tf",
            "/azure/diagnosticsettings/provider.tf",
            "/azure/diagnosticsettings/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT24",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/mssql_servers/terraform.tfvars",
            "/azure/mssql_servers/vars.tf",
            "/azure/mssql_servers/provider.tf",
            "/azure/mssql_servers/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT25",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/sql_servers/terraform.tfvars",
            "/azure/sql_servers/vars.tf",
            "/azure/sql_servers/provider.tf",
            "/azure/sql_servers/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT26",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/keyvaultsecret/terraform.tfvars",
            "/azure/keyvaultsecret/vars.tf",
            "/azure/keyvaultsecret/provider.tf",
            "/azure/keyvaultsecret/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT27",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/vm/terraform.tfvars",
            "/azure/vm/vars.tf",
            "/azure/vm/provider.tf",
            "/azure/vm/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT28",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/securitycenterpricing/terraform.tfvars",
            "/azure/securitycenterpricing/vars.tf",
            "/azure/securitycenterpricing/provider.tf",
            "/azure/securitycenterpricing/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT29",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/vnetpeerings/terraform.tfvars",
            "/azure/vnetpeerings/vars.tf",
            "/azure/vnetpeerings/provider.tf",
            "/azure/vnetpeerings/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT30",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/storageaccounts/terraform.tfvars",
            "/azure/storageaccounts/vars.tf",
            "/azure/storageaccounts/provider.tf",
            "/azure/storageaccounts/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT31",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/containerregistry/terraform.tfvars",
            "/azure/containerregistry/vars.tf",
            "/azure/containerregistry/provider.tf",
            "/azure/containerregistry/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT32",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/nsg/terraform.tfvars",
            "/azure/nsg/vars.tf",
            "/azure/nsg/provider.tf",
            "/azure/nsg/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT33",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/vnetsubnets/terraform.tfvars",
            "/azure/vnetsubnets/vars.tf",
            "/azure/vnetsubnets/provider.tf",
            "/azure/vnetsubnets/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT34",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/azure/securitycentercontact/terraform.tfvars",
            "/azure/securitycentercontact/vars.tf",
            "/azure/securitycentercontact/provider.tf",
            "/azure/securitycentercontact/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT35",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/compute_disk/terraform.tfvars",
            "/gcp/compute_disk/vars.tf",
            "/gcp/compute_disk/provider.tf",
            "/gcp/compute_disk/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT36",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/compute_subnetwork/terraform.tfvars",
            "/gcp/compute_subnetwork/vars.tf",
            "/gcp/compute_subnetwork/provider.tf",
            "/gcp/compute_subnetwork/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT37",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/compute_firewall/terraform.tfvars",
            "/gcp/compute_firewall/vars.tf",
            "/gcp/compute_firewall/provider.tf",
            "/gcp/compute_firewall/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT38",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/compute_instance/terraform.tfvars",
            "/gcp/compute_instance/vars.tf",
            "/gcp/compute_instance/provider.tf",
            "/gcp/compute_instance/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT39",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/sql_db_instance/terraform.tfvars",
            "/gcp/sql_db_instance/vars.tf",
            "/gcp/sql_db_instance/provider.tf",
            "/gcp/sql_db_instance/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT40",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/storage_bucket/terraform.tfvars",
            "/gcp/storage_bucket/vars.tf",
            "/gcp/storage_bucket/provider.tf",
            "/gcp/storage_bucket/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT41",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/compute_network/terraform.tfvars",
            "/gcp/compute_network/vars.tf",
            "/gcp/compute_network/provider.tf",
            "/gcp/compute_network/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT42",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/container_cluster/terraform.tfvars",
            "/gcp/container_cluster/vars.tf",
            "/gcp/container_cluster/provider.tf",
            "/gcp/container_cluster/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT43",
          "status": "active",
          "validate": true
        },
        {
          "masterSnapshotId": "TRF_TEMPLATE_SNAPSHOT",
          "type": "terraform",
          "collection": "terraformtemplate",
          "paths": [
            "/gcp/dns_managed_zone/terraform.tfvars",
            "/gcp/dns_managed_zone/vars.tf",
            "/gcp/dns_managed_zone/provider.tf",
            "/gcp/dns_managed_zone/main.tf"
          ],
          "snapshotId": "TRF_TEMPLATE_SNAPSHOT44",
          "status": "active",
          "validate": true
        }
      ],
      "testUser": "Shahin",
      "source": "gitConnectorTerraform"
    }
  ]
}

The ability to security scan a single resource file

We should add functionality to Prancer to be able to security scan a single resource file.

The command would be: prancer --file <path to the file> --file-type <type of the file>

The output will be shown on screen; there is no need for it to be persisted to the filesystem or the database.

secrets in connector files

You can keep the secret access key as an environment variable. The name of the environment variable will be the name of the IAM account. For example, if the name of the IAM account is prancer_iam and the secret is a1b2c3:

export prancer_iam=a1b2c3

This should work for all the clouds

`filesystem` connector problem when using `folderPath`

When using folderPath in the filesystem connector to test a local file, prancer raises an error looking for the config.ini file inside the data folder.

Steps to reproduce:

connector file:

  {
      "fileType": "structure",
      "type": "filesystem",
      "companyName": "prancer",
      "folderPath":"/tmp/repos/prancer-hello-world/validation/terraform-test/data"
  }

Running prancer gives the following error:

prancer terraform-test

  2021-07-12 11:53:21,988(cli_validator: 134) FRAMEWORDIR environment variable NOT SET, searching in current directory.
  2021-07-12 11:53:21,996(cli_validator: 135) Configuration(/tmp/repos/prancer-hello-world/validation/terraform-test/dataconfig.ini) INI file does not exist!

It searches for the config.ini file inside the data folder, which is not correct; note also that the folder path and the file name are concatenated without a separator (dataconfig.ini).

This behavior is observed in the python3 virtual env.
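The log suggests the folder path and the file name are joined by plain string concatenation. A minimal sketch of safer lookup logic (illustrative only, not Prancer's actual code): resolve config.ini under the framework directory and join path components with os.path.join.

```python
import os

def find_config_ini(framework_dir=None):
    """Return the expected config.ini location under the framework dir.

    Illustrative sketch: FRAMEWORKDIR (or the current directory) is the
    base, and os.path.join avoids results like '.../dataconfig.ini'.
    """
    base = framework_dir or os.environ.get("FRAMEWORKDIR") or os.getcwd()
    return os.path.join(base, "config.ini")
```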

When there is an error in the connector, we should not continue running the compliance tests

...
2020-05-21 10:43:17,318(snapshot_custom: 310) - Repopath: /tmp/tmpfzusvxrs
2020-05-21 10:43:17,320(snapshot_custom: 331) - SSH (private:YES) giturl: [email protected]:prancer-io/prancer-compliance-test.git, Repopath: /tmp/tmpfzusvxrs
2020-05-21 10:43:17,320(snapshot_custom: 335) - Git connector points to a non-existent ssh keyfile!
2020-05-21 10:43:17,320(snapshot_custom: 474) - Repo path: /tmp/tmpfzusvxrs
2020-05-21 10:43:17,322(snapshot: 79) - Snapshot: {'1': False}
2020-05-21 10:43:17,322(validation: 159) - Starting validation tests
2020-05-21 10:43:17,323(validation: 162) - /mnt/c/Users/Farshid/source/repos/prancer-hello-world/./validation/scenario-ssh
...
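The log shows the snapshot populate result {'1': False} before validation starts anyway. A sketch of the proposed fail-fast check (illustrative names, not existing Prancer code): abort before the compliance tests if any snapshot failed to populate.

```python
def should_run_tests(snapshot_status):
    """Return True only if every snapshot populated successfully.

    snapshot_status maps snapshot ids to populate results,
    e.g. {'1': False} as in the log above.
    """
    return all(snapshot_status.values())
```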

Multiple words when tokenized caused wrong comparison

{value_1}.parameters.vmssNodePool.metadata.description='Boolean flag to turn on and off of virtual machine scale sets'

The tokenized output on the RHS was:
Booleanflagtoturnonandoffofvirtualmachinescalesets

Anything in quotes should be taken as-is.
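A quote-aware tokenization sketch (an assumed approach, not Prancer's actual tokenizer): shlex keeps a single-quoted string as one token, so the RHS is not collapsed into 'Booleanflagto...'.

```python
import shlex

rule = ("{value_1}.parameters.vmssNodePool.metadata.description="
        "'Boolean flag to turn on and off of virtual machine scale sets'")
lhs, _, rhs = rule.partition("=")
# shlex respects the quotes: the whole quoted RHS stays a single token
tokens = shlex.split(rhs)
```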

Output structure is not maintained if the Rego file is incorrect

Tested the scenario-arm-fail testcase from Prancer Hello World repository.

  1. Run the Crawler. prancer scenario-arm-fail --db=NONE --crawler
  2. Run the Compliance. prancer scenario-arm-fail --db=NONE --compliance
  3. It generates the output as follows, where the eval field contains the actual eval rule, data.rule.check_value.
{
  "$schema": "",
  "contentVersion": "1.0.0.0",
  "fileType": "output",
  "timestamp": 1625895841477,
  "snapshot": "master-snapshot_gen",
  "container": "scenario-arm-fail",
  "log": "",
  "test": "master-test.json",
  "results": [
    {
      "eval": "data.rule.check_value",
      "result": "failed",
      "message": "",
      "id": "1",
      "remediation_description": null,
      "remediation_function": null,
      "snapshots": [
        {
          "id": "value_1",
          "structure": "filesystem",
          "reference": "master",
          "source": "gitConnectorArm",
          "collection": "arm",
          "type": "arm",
          "region": "",
          "paths": [
            "/AKS/aks.azuredeploy.json",
            "/AKS/aks.azuredeploy.parameters.json"
          ]
        }
      ],
      "autoRemediate": false,
      "masterTestId": "TEST_value",
      "masterSnapshotId": [
        "value_"
      ],
      "type": "rego",
      "rule": "file(samp.rego)",
      "snapshotId": [
        "value_1"
      ],
      "status": "enable"
    },
    {
      "result": "failed",
      "snapshots": [
        {
          "id": "value_1",
          "structure": "filesystem",
          "reference": "master",
          "source": "gitConnectorArm",
          "collection": "arm",
          "type": "arm",
          "region": "",
          "paths": [
            "/AKS/aks.azuredeploy.json",
            "/AKS/aks.azuredeploy.parameters.json"
          ]
        }
      ],
      "autoRemediate": false,
      "title": "",
      "description": "",
      "rule": "{value_1}.parameters.resourceName.type='integer'",
      "testId": "TEST_type",
      "status": "enable"
    }
  ]
}
  4. Now change the Rego rule in samp.rego as follows:
package rule

default check_value = false
check_value  {
    resource := input.parameters.enableRBAC.defaultValue
    resource == true
}

check_value = false {
    resource := input.parameters.enableRBAC.defaultValue
    resource == true
}
  5. Run the Compliance. prancer scenario-arm-fail --db=NONE --compliance
  6. It generates the output as follows, where the value of eval is set as a list.
{
  "$schema": "",
  "contentVersion": "1.0.0.0",
  "fileType": "output",
  "timestamp": 1625896593953,
  "snapshot": "master-snapshot_gen",
  "container": "scenario-arm-fail",
  "log": "",
  "test": "master-test.json",
  "results": [
    {
      "eval": [
        {
          "id": "1",
          "eval": "data.rule.check_value"
        }
      ],
      "result": "failed",
      "message": "",
      "snapshots": [
        {
          "id": "value_1",
          "structure": "filesystem",
          "reference": "master",
          "source": "gitConnectorArm",
          "collection": "arm",
          "type": "arm",
          "region": "",
          "paths": [
            "/AKS/aks.azuredeploy.json",
            "/AKS/aks.azuredeploy.parameters.json"
          ]
        }
      ],
      "autoRemediate": false,
      "masterTestId": "TEST_value",
      "masterSnapshotId": [
        "value_"
      ],
      "type": "rego",
      "rule": "file(samp.rego)",
      "snapshotId": [
        "value_1"
      ],
      "status": "enable"
    },
    {
      "result": "failed",
      "snapshots": [
        {
          "id": "value_1",
          "structure": "filesystem",
          "reference": "master",
          "source": "gitConnectorArm",
          "collection": "arm",
          "type": "arm",
          "region": "",
          "paths": [
            "/AKS/aks.azuredeploy.json",
            "/AKS/aks.azuredeploy.parameters.json"
          ]
        }
      ],
      "autoRemediate": false,
      "title": "",
      "description": "",
      "rule": "{value_1}.parameters.resourceName.type='integer'",
      "testId": "TEST_type",
      "status": "enable"
    }
  ]
}
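A hypothetical normalization helper (illustrative, not Prancer code) that would keep the output schema stable whether the evaluation yields a single eval string or a list of {id, eval} entries, as in the two outputs above.

```python
def normalize_eval(eval_field):
    """Return the eval rule(s) as a list of strings, regardless of
    whether the engine produced a bare string or a list of entries."""
    if isinstance(eval_field, list):
        return [entry.get("eval", "") for entry in eval_field]
    return [eval_field]
```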

Jenkinsfile for the new release pipeline

Create a Jenkinsfile for the release of the project.
Here are the high-level steps:
Prancer Basic Jenkinsfile

For a new machine:
use a clean Docker image with Python and pip3 installed on top (Ubuntu),
then start the installation of the Prancer framework:

pip install -r requirements.txt
(or pip3)

export CURDIR=<working dir>
cd $CURDIR
The source code to checkout is https://github.com/prancer-io/cloud-validation-framework.git from the master branch
cd $CURDIR/cloud-validation-framework
export FRAMEWORKDIR=`pwd`
export PYTHONPATH=$CURDIR/cloud-validation-framework/src:$PYTHONPATH
cd $CURDIR/cloud-validation-framework
py.test --cov=processor tests/processor --cov-report term-missing
git tag -a V0.1.4 -m "new tag for db url change release"
git push origin --tags
# make sure there are no build or dist directories; if there are, delete them
python setup.py sdist bdist_wheel
After running this there are new build and dist directories.
Create a new release in GitHub at https://github.com/prancer-io/cloud-validation-framework/releases : click "Draft a new release" and use the tag VX.X.X
# then upload the wheel and tar packages to GitHub and give the release a proper title
# then click on publish

# now we have to update PyPI
pip install --upgrade twine
Release the package and source distribution in pypi.org

twine upload dist/*

Output JSON file contains empty "path" and "reference" when testing with the google connector

Once the test run is completed, it creates the output JSON file.
The output JSON file contains empty "path" and "reference" fields. The database also contains the empty fields.

Here is output file,

{
  "$schema": "",
  "contentVersion": "1.0.0.0",
  "fileType": "output",
  "timestamp": 1570074285019,
  "snapshot": "snapshot8",
  "container": "container8",
  "log": "logs_20191003034436",
  "test": "test8.json",
  "results": [
    {
      "snapshots": [
        {
          "path": "",
          "source": "google-connector",
          "id": "1",
          "reference": "",
          "structure": "google"
        }
      ],
      "testId": "1",
      "result": "failed",
      "rule": "exist({1}.name)"
    }
  ]
}

WARNING: The scripts populate_json, prancer, register_key_in_azure_vault, terraform_to_json and validator are installed in '/home/r4redu/.local/bin' which is not on PATH.

On my Ubuntu 20.04.2.0, I have Python 2.7.18 and pip3 20.0.2.
I was trying to install prancer-basic via pip3 install prancer-basic.
It gets installed successfully with the warning below:

WARNING: The scripts populate_json, prancer, register_key_in_azure_vault, terraform_to_json and validator are installed in '/home/r4redu/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed prancer-basic-1.1.17

Now when I try to check the version via prancer --version, it says prancer: command not found.

Do I need to do some manual work here to get it working?
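A likely fix, based on the warning text itself (the directory shown is the one pip reported; adjust it for your user): add pip's user script directory to PATH.

```shell
# Put pip's user script directory on PATH so the `prancer` entry point
# is found. The directory comes from the pip warning above.
export PATH="$HOME/.local/bin:$PATH"
# To persist this, add the same export line to ~/.bashrc (or your
# shell's equivalent startup file).
```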

Color-code the output

Currently the on-screen output of running the framework is all one color. This makes it hard for the user to find failed/passed tests among all the other information on screen.
We can color the test results green/red based on the passed/failed scenarios.
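A minimal sketch of the requested behavior (an illustrative helper, not existing Prancer code): wrap test results in ANSI color codes for terminal output.

```python
# ANSI escape codes: bright green, bright red, and reset.
GREEN, RED, RESET = "\033[92m", "\033[91m", "\033[0m"

def colorize(result):
    """Green for 'passed', red for 'failed', unchanged otherwise."""
    if result == "passed":
        return GREEN + result + RESET
    if result == "failed":
        return RED + result + RESET
    return result
```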

1) create special crawler to crawl azure role definitions

  1. Create a special crawler to crawl Azure role definitions (resource type: "Microsoft.Authorization/roleDefinitions").
    These resources do not get listed during the crawling operation, so specific queries have to be made for the resource.
    Add the framework for extending to these types of resources.

multiple service accounts in the connector

It is possible to put multiple service accounts in the connector. For example, we can put multiple SPNs in the azure connector.
When we want to have the secret as an environment variable, the name of the env variable should be the name of the SPN.

connector:

{
    "filetype": "structure",
    "type": "azure",
    "companyName": "Company Name",
    "tenant_id": "<tenant1>",
    "accounts": [
        {
            "department": "Unit/Department name",
            "subscription": [
                {
                    "subscription_name": "prancer-dev",
                    "subscription_description": "Subscription (Account) description",
                    "subscription_id": "<sub1>",
                    "users": [
                        {
                            "name": "prancer_1",
                            "client_id": "<id1>"
                        },
                        {
                            "name": "prancer_2",
                            "client_id": "<id2>"
                        }
                    ]
                }
            ]
        }
    ]
}

then to set the secret:

export prancer_1='secret1'
export prancer_2='secret2'

This should work across all the clouds.

subscription selector for azure does not work in the snapshot configuration file

snapshot file:

{
  "$schema": "",
  "contentVersion": "1.0.0.0",
  "fileType": "snapshot",
  "snapshots": [
    {
      "source": "azureConnector",
      "testUser": "prancer_ro",
      "subscriptionId": "subid2",
      "nodes": [
        {
          "snapshotId": "1",
          "type": "Microsoft.KeyVault/vaults",
          "collection": "Microsoft.KeyVault",
          "path": "/resourceGroups/nprod-dev-eastus2-cluster01-rg/providers/Microsoft.KeyVault/vaults/nprod-dev-eastus2-kv0001/",
          "status": "active"
        }
      ]
    }
  ]
}

connector:

{
    "filetype":"structure",
    "type":"azure",
    "companyName": "Company Name",
    "tenant_id": "tenantid",
    "accounts": [
        {
            "department": "Unit/Department name",
            "subscription": [
                {
                    "subscription_name": "prancer-dev",
                    "subscription_description": "Subscription (Account) description",
                    "subscription_id": "subid1",
                    "users": [
                        {
                            "name":"prancer_ro",
                            "client_id": "clientid1",
                            "client_secret": "secret1"
                        }
                    ]
                },
                {
                    "subscription_name": "developers",
                    "subscription_description": "Subscription (Account) description",
                    "subscription_id": "subid2",
                    "users": [
                        {
                            "name":"prancer_ro",
                            "client_id": "clientid1",
                            "client_secret": "secret1"
                        }
                    ]
                }
            ]
        }
    ]
}

The snapshot path belongs to subid1, but when you put subid2 in the snapshot configuration file, it still takes the snapshot from subid1.
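A sketch of the expected selector behavior (illustrative names, not Prancer's actual code): resolve the subscription whose subscription_id matches the snapshot configuration, instead of defaulting to the first entry.

```python
def find_subscription(connector, subscription_id):
    """Return the connector subscription matching subscription_id,
    or None if no subscription matches."""
    for account in connector.get("accounts", []):
        for sub in account.get("subscription", []):
            if sub.get("subscription_id") == subscription_id:
                return sub
    return None
```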

google connector private key as a path to the key file

Google service accounts need to have a private key to connect to the Google API.
In the google connector, we should have three options:

  1. put the key in the config.ini file (just for testing purposes)
  2. have the path to the key in the config.ini
  3. put the key in the vault

The current google connector looks like this:

 {
        "organization": "company1",
        "type": "google",
        "fileType": "structure",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "<client_x509_cert_url>",
        "projects": [
            {
                "project-name": "<project-name>",
                "project-id": "<project-id>",
                "users": [
                    {
                        "name": "<service-account-name>",
                        "type": "service_account",
                        "private_key_id": "<private-key-id>",
                        "private_key": "<private-key>",
                        "client_email": "<client-email>",
                        "client_id": "<client-id>"
                    }
                ]
            }
        ]
    }

We should be able to have "private_key_path": "<path-to-private-key>" instead of "private_key": "<private-key>" to support the second option.
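A sketch of resolving the key for options 1 and 2 (an illustrative helper, not the actual connector code): use the inline private_key if present, otherwise read the key material from the file at private_key_path.

```python
import os

def resolve_private_key(user):
    """Return the private key material from the user entry of the
    connector: inline 'private_key' first, then 'private_key_path'."""
    if user.get("private_key"):
        return user["private_key"]
    key_path = user.get("private_key_path")
    if key_path and os.path.isfile(key_path):
        with open(key_path) as fh:
            return fh.read()
    raise ValueError("no private_key or readable private_key_path configured")
```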

is there a way to run compliance on single test case out of a testSet instead of running them all?

I want to run the compliance test for only the testcase with "masterTestId": "TEST_NETWORK_SECURITY_GROUP_1" from https://github.com/prancer-io/prancer-compliance-test/blob/master/azure/terraform/master-compliance-test.json
and don't want to test the others, on purpose.
The thing is, when I ran prancer compliance scenario-terraform-azure it ran the compliance on all the test cases available in the testSet under master-compliance-test.json, which is time-consuming during development when fixing and testing an individual test case.
Note: I saw a --test parameter in prancer but am not sure whether it is the right parameter to test an individual testcase out of a testSet.
I tried running "prancer --test TEST_NETWORK_SECURITY_GROUP_1 --compliance scenario-terraform-azure" but it seems it is not working.

Is this feature available? If so, any doc reference would be helpful.

Cannot access the snapshot type in related functions

While trying to add a kubernetes type for post-deployment in Prancer, I found we are not able to get the snapshot type from the related functions.
file master_snapshot.py line 104:

snapshot_data = mastersnapshot_fns[snapshot_type](mastersnapshot)

file snapshot.py line 86:

snapshot_data = snapshot_fns[snapshot_type](snapshot, container)

None of the mastersnapshot, snapshot, and container params includes the snapshot type declared in the snapshot configuration or master snapshot configuration file.
We should actually pass the snapshot type as a param too, so there is no need to derive it from the fields, as is done in file snapshot_google.py line 408:

        if 'masterSnapshotId' in node:
            snapshot_data[node['snapshotId']] = node['masterSnapshotId']
        else:
            snapshot_data[node['snapshotId']] = False if error_str else True
    else:
        node['status'] = 'inactive'
elif 'masterSnapshotId' in node:
    data = get_all_nodes

prancer version : 1.0.39
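The proposed signature change as a sketch (names are illustrative, mirroring the master_snapshot.py / snapshot.py call sites above): the dispatcher forwards the declared type so handlers need not re-derive it from node fields.

```python
def dispatch_snapshot(snapshot_fns, snapshot, container):
    """Look up the handler by the snapshot's declared type and pass the
    type through as an extra argument."""
    snapshot_type = snapshot.get("type")
    return snapshot_fns[snapshot_type](snapshot, container, snapshot_type)
```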

Run Test fails with the google connector

When I run the test with the google connector, it fails.

I used this Google API to get single-instance details:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/get

Here are the logs:

INFO
apicontroller:134
-Input: {'container': 'container-gcs', 'secrets': {}}

CRITICAL
snapshot:116
-SNAPSHOTS: Populate snapshots for 'container-gcs' container from the database.

INFO
snapshot:163
-Number of Snapshot Documents: 1

INFO
snapshot:211
-Starting to get list of snapshots from database

INFO
snapshot:217
-Number of test Documents: 1

INFO
snapshot_google:50
-Number of Google structure Documents: 1

INFO
snapshot_google:196
-{'snapshotId': '1', 'type': 'instances', 'collection': 'instances', 'path': '/compute/v1/projects/<project_name>/zones/asia-southeast1-b/instances/<instance_name>'}

Here I replaced the original project name and instance name with <project_name> and <instance_name>

INFO
snapshot_google:67
-Get node's kwargs

INFO
snapshot_google:78
-Number of Google Params versions: 0

INFO
snapshot_google:151
-Compute function exception: Missing required parameter "instance"

INFO
snapshot:72
-Snapshot: {'1': False}

INFO
validation:151
-Number of test Documents: 1

INFO
validation:35
-Number of Snapshot Documents: 1

INFO
interpreter:209
-###########################################################################

INFO
interpreter:210
-Actual Rule: exist({1}.name)

INFO
interpreter:219
-**************************************************

INFO
rule_interpreter:169
-LHS: False, OP: =, RHS: True

The error is at snapshot_google:151
-Compute function exception: Missing required parameter "instance"

Here is my test JSON files.

  • google-connector.json
{
    "organization" : "company1", 
    "type" : "google", 
    "fileType" : "structure", 
    "organization-unit" : [
        {
            "name" : "ABC", 
            "accounts" : [
                {
                    "account-name" : "Vatsal Thaker", 
                    "account-description" : "Google Cloud Engine details", 
                    "project-id" : "<project_name>", 
                    "account-user" : "[email protected]", 
                    "users" : [
                        {
                            "name" : "Vatsal Thaker", 
                            "gce" : {
                                "type": "service_account",
                                "project_id": "<Project Id>",
                                "private_key_id": "Private Key Id",
                                "private_key": "<Actual Private Key>",
                                "client_email": "<client_email>",
                                "client_id": "<client id>",
                                "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                                "token_uri": "https://oauth2.googleapis.com/token",
                                "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                                "client_x509_cert_url": "<As in from google.json file>"
                            }, 
                            "project" : "<project_id>", 
                            "zone" : "asia-southeast1-b", 
                            "client" : "Compute"
                        }
                    ]
                }
            ]
        }
    ]
}
  • snapshot8.json
{
    "contentVersion" : "1.0.0.0", 
    "fileType" : "snapshot", 
    "snapshots" : [
        {
            "source" : "google-connector", 
            "type" : "google", 
            "testUser" : "Vatsal Thaker", 
            "project-id" : "<project_id>", 
            "nodes" : [
                {
                    "snapshotId" : "1", 
                    "type" : "instances", 
                    "collection" : "instances", 
                    "path" : "/compute/v1/projects/<project_id>/zones/asia-southeast1-b/instances/<instance_name>"
                }
            ]
        }
    ]
}
  • test8.json
{
    "contentVersion" : "1.0.0.0", 
    "fileType" : "test", 
    "snapshot" : "snapshot8", 
    "testSet" : [
        {
            "testName" : "test-gcs", 
            "version" : "0.1", 
            "cases" : [
                {
                    "testId" : "1", 
                    "rule" : "exist({1}.name)"
                }
            ]
        }
    ], 
    "notification" : [

    ]
}
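The "Missing required parameter 'instance'" error suggests the project, zone, and instance were not extracted from the snapshot node's path. A sketch of parsing the REST-style path (an assumption about the cause, not the actual connector code):

```python
def parse_instance_path(path):
    """Extract the project, zone, and instance names from a path such as
    /compute/v1/projects/<p>/zones/<z>/instances/<i>."""
    parts = [p for p in path.split("/") if p]
    return {
        "project": parts[parts.index("projects") + 1],
        "zone": parts[parts.index("zones") + 1],
        "instance": parts[parts.index("instances") + 1],
    }
```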

Test file crawler ignores the second dot

When I use a version in the test file, like:
version=1.2.3
the crawler drops one of the dots and the output is displayed as follows:
version=1.23


Validation test output is not stored on a Windows workstation

The validation test does not store the test output on a Windows workstation, due to invalid path design; it appears to be statically designed for Linux only. It always tries to append an extra "./" to the current directory to get to the validation folder, though it works fine on Linux-based OSes.
Having Windows support would be nice.

Steps to reproduce:
While in the prancer-hello-world directory, run: prancer scenario-pass

You will get below output:

G:\OPA\prancer\source\prancer-hello-world>prancer scenario-pass
2021-07-07 21:17:58,180 - START: Argument parsing and Run Initialization. Version 1.1.14
2021-07-07 21:18:03,903 - Command: 'c:\users\win\appdata\local\programs\python\python39\python.exe C:\Users\WIN\AppData\Local\Programs\Python\Python39\Scripts\prancer scenario-pass'
2021-07-07 21:18:03,904 - Using Framework dir: G:\OPA\prancer\source\prancer-hello-world
2021-07-07 21:18:03,904 - Args: Namespace(container='scenario-pass', db=0, crawler=False, test=None, compliance=False, customer=None, connector=None, branch=None, file=None, iac=None)
2021-07-07 21:18:03,986 - MASTERSNAPSHOTS:
2021-07-07 21:18:03,986 -  Collection: scenario-pass,  Type: FILESYSTEM
2021-07-07 21:18:03,987 -   LOCATION: G:\OPA\prancer\source\prancer-hello-world/./validation/
2021-07-07 21:18:04,010 -   COLLECTION: G:\OPA\prancer\source\prancer-hello-world/./validation/scenario-pass
2021-07-07 21:18:04,032 - No mastersnapshot files in G:\OPA\prancer\source\prancer-hello-world/./validation/scenario-pass, exiting!...
2021-07-07 21:18:04,032 - SNAPSHOTS:
2021-07-07 21:18:04,033 -  Collection: scenario-pass,  Type: FILESYSTEM
2021-07-07 21:18:04,034 -   LOCATION: G:\OPA\prancer\source\prancer-hello-world/./validation/
2021-07-07 21:18:04,034 -   COLLECTION: G:\OPA\prancer\source\prancer-hello-world/./validation/scenario-pass
2021-07-07 21:18:04,072 -
2021-07-07 21:18:04,073 -  SNAPSHOT:G:\OPA\prancer\source\prancer-hello-world/./validation/scenario-pass\snapshot.json
2021-07-07 21:18:04,073 - No testcase document found for scenario-pass\snapshot.json
2021-07-07 21:18:04,073 - SNAPSHOTS COMPLETE:
2021-07-07 21:18:04,077 - END: Completed the run and cleaning up.
2021-07-07 21:18:04,081 -  Run Stats: {
  "start": "2021-07-07 21:18:03",
  "end": "2021-07-07 21:18:04",
  "errors": [],
  "host": "MTK",
  "timestamp": "2021-07-07 21:18:03",
  "jsonsource": false,
  "database": 0,
  "singletest": false,
  "log": "G:\\OPA\\prancer\\source\\prancer-hello-world/log/20210707-211758.log",
  "duration": "0 seconds"
}

Check the validation\scenario-pass directory: the output is missing from there, as well as from the console.
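A likely cause (an assumption from the mixed separators in the log, e.g. "...prancer-hello-world/./validation/scenario-pass\snapshot.json"): validation paths are built by concatenating hard-coded "/" segments. A portable sketch using os.path.join:

```python
import os

def validation_collection(framework_dir, container):
    # instead of framework_dir + "/./validation/" + container,
    # let os.path.join pick the platform's separator
    return os.path.join(framework_dir, "validation", container)
```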

The ability to support multiple AWS query types in one connector

Background

In the Prancer validation framework, to connect to an AWS backend, we create an AWS connector:

{
    "organization": "company1",
    "type": "aws",
    "fileType": "structure",
    "organization-unit": [
        {
            "name": "abc",
            "accounts": [
                {
                    "account-name": "acc-name",
                    "account-description": "AWS cloud details",
                    "account-id": "<id>",
                    "account-user": "<user>",
                    "users": [
                        {
                            "name": "<user>",
                            "access-key": "<key>",
                            "secret-access": "",
                            "region":"us-west-2",
                            "client": "EC2"
                        }
                    ]
                }
            ]
        }
    ]
}

In the client section we define the type of query we want to pass to the AWS backend.

Problem

The issue is that if we want to use another form of AWS query (e.g. RDS), we need to create a new connector.

Solution

This setting should be available at the snapshot level rather than the connector level. By doing that, we will be able to reuse a single connector type for different queries.
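With the proposed change, a snapshot node could carry the query type itself. A hedged illustration of what such a node might look like (the "client" field placement is the proposal, not current behavior; other field values are placeholders):

```json
{
    "snapshotId": "11",
    "type": "ec2",
    "client": "EC2",
    "collection": "ec2",
    "status": "active"
}
```

A sibling node in the same snapshot file could then set "client": "RDS" while both reuse one connector.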

master test file

We should be able to put a master test file in each container folder to run tests against all the resources of the same type.
The structure of the master test file is as follows:

    "$schema": "",
    "contentVersion": "1.0.0.0",
    "fileType": "mastertest",
    "notification": [],
    "masterSnapshot": "snapshot3",
    "testSet": [
        {
            "masterTestName": "test3",
            "version": "0.1",
            "cases": [
                {
                    "masterTestId": "1",
                    "rule":"exist({<resource-type-id>}.location)"
                },
                {
                    "masterTestId": "2",
                    "rule":"{<resource-type-id>}.location='eastus2'"
                },
                {
                    "masterTestId": "3",
                    "rule": "exist({<resource-type-id>}.properties.addressSpace.addressPrefixes[])"
                },
                {
                    "masterTestId": "4",
                    "rule": "count({<resource-type-id>}.properties.dhcpOptions.dnsServers[])=2"
                },
                {
                    "masterTestId": "5",
                    "rule": "{<resource-type-id>}.properties.subnets['name'='abc-nprod-dev-eastus2-Subnet1'].properties.addressPrefix='192.23.26.0/24'"
                },
                {
                    "masterTestId": "6",
                    "rule": "{<resource-type-id>}.tags.COST_LOCATION={<resource-type-id>}.tags.COST_LOCATION"
                }
            ]
        }
    ]
}

In the rules section, instead of putting the snapshot configuration id, we put the master snapshot configuration id, which represents a resource type (all virtual machines, for example) rather than an individual resource.

problem in terraform processor for multiple blocks with the same name

The Prancer terraform processor cannot fully handle attributes as blocks:
https://www.terraform.io/docs/language/attr-as-blocks.html

Because the Prancer engine converts terraform code to JSON, when we have multiple blocks of code with the same name, Prancer overwrites the value of the nodes and keeps the last one.
An example of terraform code with attributes as blocks:

https://github.com/prancer-io/prancer-terramerra/blob/8e1986c6f228dc84b32d7d8dd522690ed18d20e6/aws/sg/extrasg.tf#L110
https://github.com/prancer-io/prancer-terramerra/blob/8e1986c6f228dc84b32d7d8dd522690ed18d20e6/aws/sg/extrasg.tf#L126

Suggested solution:
When generating the JSON nodes, if the node already contains a value, convert the node to a list and append the new item to the list.
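The suggested solution as a sketch (illustrative helper, not the actual processor code): when a key repeats while converting terraform blocks to JSON, accumulate the values into a list instead of overwriting.

```python
def add_node(tree, key, value):
    """Insert value under key; if key already exists, collect all values
    for that key into a list instead of keeping only the last one."""
    if key in tree:
        if not isinstance(tree[key], list):
            tree[key] = [tree[key]]
        tree[key].append(value)
    else:
        tree[key] = value
```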

master snapshot configuration file

The master snapshot configuration file represents a resource type (for example, all VMs in a cloud) rather than an individual resource.
Here is the structure of the master snapshot configuration file for the Azure cloud:

    "$schema": "",
    "contentVersion": "1.0.0.0",
    "fileType":"masterSnapshot",
    "snapshots": [
        {
            "source": "azureStructure1",
            "type" : "azure",
            "testUser" : "[email protected]",
            "subscriptionId" : ["d34d6141-7a19-4458-b0dd-f038bb7760c1"],
            "nodes": [
                {
                    "masterSnapshotId": "31",
                    "type": "Microsoft.Compute/availabilitySets",
                    "collection": "Microsoft.Compute"
                },
                {
                    "masterSnapshotId": "32",
                    "type": "Microsoft.Network/virtualNetworks",
                    "collection": "Microsoft.Network"
                }
            ]
        }
    ]
}

We give each item in the nodes section a master snapshot configuration id and a resource type.

Here is an example for git:

    "$schema": "",
    "contentVersion": "1.0.0.0",
    "fileType":"masterSnapshot",
    "snapshots": [
        {
            "source": "parameterStructure2",
            "type": "git",
            "testUser": "[email protected]",
            "nodes": [
                {
                    "masterSnapshotId": "11",
                    "type": "",
                    "collection": "customParameters",
                    "path":"parameters/ecc/nprod/eastus2/dev/"
                }
            ]
        }
    ]
}

It means all the files in the parameters/ecc/nprod/eastus2/dev/ folder belong to this master snapshot configuration file.

Allow to get snapshot from File System.

Background

Currently we only read snapshots from Azure, Google, AWS, and Git. Moreover, we only read JSON files right now. Some systems export their configuration using YAML.
Currently there is no way in our connector to define a local directory:

{
    "fileType": "structure",
    "companyName": "Prancer",
    "gitProvider": "https://bitbucket.org/ajeybk/mytestpub.git",
    "repoCloneAddress":"/tmp/m",
    "branchName":"master",
    "username":"[email protected]"
}

Problem

There is no way to access files that are available on the same system. We only read files that are available on a repository.

Solution

Enhance the existing git connector to read files from the file system. We will update the gitConnector to have a new field "type" whose value will be set to "filesystem".
A gitConnector will then look like:

{
    "fileType": "structure",
    "type" : "filesystem",
    "companyName": "Prancer Git Start",
    "gitProvider": "https://bitbucket.org/patidarpraveen244/prancertest.git",
    "branchName": "master",
    "private": false
}

We will add another parameter, "folderPath", to define the path of the folder. The resulting local file connector should look like:

{
  "fileType": "structure",
  "type": "filesystem",
  "companyName": "prancer-test",
  "folderPath": "/tmp/mytestpub",
  "username": "patidarpraveen"
}

We should also provide the ability to read YAML files, based on the type of file declared in our snapshot.

{
  "snapshotId": "201",
  "type": "yaml",
  "collection": "FileSnapshot1",
  "path": "devops/gitres/vn/tests.yaml",
  "status": "active"
}

Feature Request: Individual snapshot file name should consist of {snapshotId}_{masterTestId}

I was checking the compliance of the snapshots generated from a master snapshot (master-snapshot_gen.json) for Terraform Azure, and ran prancer compliance scenario-terraform-azure from the prancer-hello-world app.

In the snapshots folder I saw many snapshot files for individual test cases with confusing names (the snapshotId from master-snapshot_gen.json), and it is hard to determine which test case a given snapshot file was generated for.

For example, after running the compliance test for scenario-terraform-azure you will see a snapshot file scenario-terraform-azure/snapshots/TRF_TEMPLATE_SNAPSHOT33.
It is basically the Terraform resource snapshot for test ID TEST_NETWORK_SECURITY_GROUP_1 according to terraform/master-compliance-test.

It would be nice if we could give the snapshot file a meaningful name, e.g. {snapshotId}_{masterTestId}.
Put simply, renaming the snapshot file from TRF_TEMPLATE_SNAPSHOT33 to TRF_TEMPLATE_SNAPSHOT33_TEST_NETWORK_SECURITY_GROUP_1 would make life much easier.
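The naming scheme proposed above is a simple concatenation; a minimal sketch (the function name is illustrative, not the framework's actual helper):

```python
def snapshot_file_name(snapshot_id, master_test_id):
    # Proposed scheme: {snapshotId}_{masterTestId}
    return "{0}_{1}".format(snapshot_id, master_test_id)


print(snapshot_file_name("TRF_TEMPLATE_SNAPSHOT33",
                         "TEST_NETWORK_SECURITY_GROUP_1"))
# TRF_TEMPLATE_SNAPSHOT33_TEST_NETWORK_SECURITY_GROUP_1
```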

Reading YAML files

Currently we only read JSON files. Add the ability to also read YAML files.
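One way to support both formats is to pick the parser from the file extension. A minimal sketch, assuming PyYAML is available (the helper name is illustrative, not the framework's actual function):

```python
import json
import os

import yaml  # PyYAML; an assumed dependency for this sketch


def load_snapshot_file(path):
    """Load a snapshot source file, choosing the parser by file extension."""
    _, ext = os.path.splitext(path)
    with open(path, encoding="utf-8") as f:
        if ext in (".yaml", ".yml"):
            # safe_load avoids executing arbitrary YAML tags
            return yaml.safe_load(f)
        return json.load(f)
```

Alternatively, the parser could be chosen from the snapshot node's "type" field ("yaml" vs. "json") rather than the extension.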

httpsPassword in the filesystem connector should be readable from an environment variable

In the filesystem connector, for HTTPS git connectivity we currently have to provide the password in the connector file:

{
    "fileType": "structure",
    "type":"filesystem",
    "companyName": "prancer-test",
    "gitProvider": "https://github.com/prancer-io/prancer-compliance-test.git",
    "branchName":"master",
    "httpsUser": "farchide",
    "httpsPassword": "password",
    "private": true
}

In general, whenever the framework reads a secret/password from a connector file or config.ini, if it is not available there it should fall back to the environment variable and then to the vault.
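The fallback order could be sketched as follows. This is a hypothetical helper; the environment variable name and vault hook are assumptions, not the framework's actual names:

```python
import os


def resolve_secret(connector, key="httpsPassword",
                   env_var="PRANCER_HTTPS_PASSWORD", vault_lookup=None):
    """Resolve a secret: connector file first, then environment, then vault.

    `env_var` and `vault_lookup` are illustrative; the real framework may
    use different variable names and a different vault client.
    """
    value = connector.get(key)          # 1. connector file / config.ini
    if value:
        return value
    value = os.environ.get(env_var)     # 2. environment variable
    if value:
        return value
    if vault_lookup:                    # 3. vault, as a last resort
        return vault_lookup(key)
    return None
```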

on screen output of running prancer basic

When running Prancer Basic from the command line, certain items are shown on the screen.
We should make sure these items are descriptive enough for users running the platform.

Here are the suggestions for the different results. This information should be shown at all times, regardless of the log level:

When the test is passed:

TEST_ID: 
SNAPSHOT_ID: 
PATHS: 
TITLE: 
RULE:
RESULT: PASSED

When the result is failed:

TEST_ID: 
SNAPSHOT_ID: 
PATHS: 
TITLE: 
DESCRIPTION:
RULE:
ERROR:
REMEDIATION:
RESULT: FAILED

When the test is skipped (empty):

TEST_ID: 
SNAPSHOT_ID: 
PATHS: 
RESULT: SKIPPED
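The three output shapes above could be rendered by a single formatter. A minimal sketch, with illustrative field keys (not the framework's actual result schema):

```python
def format_result(result):
    """Render one on-screen result block; field lists vary per outcome."""
    fields = {
        "passed": ["TEST_ID", "SNAPSHOT_ID", "PATHS", "TITLE", "RULE"],
        "failed": ["TEST_ID", "SNAPSHOT_ID", "PATHS", "TITLE",
                   "DESCRIPTION", "RULE", "ERROR", "REMEDIATION"],
        "skipped": ["TEST_ID", "SNAPSHOT_ID", "PATHS"],
    }
    outcome = result.get("result", "skipped")
    # look up each field under its lowercase key, blank if missing
    lines = ["%s: %s" % (f, result.get(f.lower(), "")) for f in fields[outcome]]
    lines.append("RESULT: %s" % outcome.upper())
    return "\n".join(lines)
```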

Cannot access the snapshot type in the related functions

While trying to add a kubernetes type for post-deployment in Prancer, I found that we are not able to get the snapshot type from the related functions.
file master_snapshot.py line 104:

snapshot_data = mastersnapshot_fns[snapshot_type](mastersnapshot)

file snapshot.py line 86:

snapshot_data = snapshot_fns[snapshot_type](snapshot, container)

None of the mastersnapshot, snapshot, and container params include the snapshot type declared in the snapshot configuration or master snapshot configuration file.
We should actually pass the snapshot type as a parameter too, so there is no need to derive it from node fields, as is currently done, for example, in file snapshot_google.py line 408:

                if 'masterSnapshotId' in node:
                    snapshot_data[node['snapshotId']] = node['masterSnapshotId']
                else:
                    snapshot_data[node['snapshotId']] = False if error_str else True
            else:
                node['status'] = 'inactive'
        elif 'masterSnapshotId' in node:
            data = get_all_nodes

prancer version : 1.0.39
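The proposed fix could look like the following sketch, where the dispatch call passes the declared snapshot type along so connector functions need not re-derive it from node fields (function and key names are illustrative, not the framework's actual API):

```python
def get_git_snapshot(snapshot, container, snapshot_type):
    # a real connector would fetch resource data here; we just echo the type
    return {"snapshot_type": snapshot_type, "container": container}


# dispatch table mapping declared types to connector functions
snapshot_fns = {"git": get_git_snapshot}


def populate_snapshot(snapshot, container):
    # "type" as declared in the snapshot configuration file
    snapshot_type = snapshot.get("type", "")
    return snapshot_fns[snapshot_type](snapshot, container, snapshot_type)
```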

JSON decode error : UTF-8 BOM

json_from_file raises json.decoder.JSONDecodeError: "Unexpected UTF-8 BOM (decode using utf-8-sig)" when reading a JSON file.
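As the error message suggests, opening the file with the utf-8-sig codec fixes this, since that codec strips a leading BOM if present and otherwise behaves like plain utf-8. A minimal sketch (the real json_from_file lives in the framework; this is a standalone illustration):

```python
import json


def json_from_file(path):
    """Read a JSON file, tolerating a UTF-8 byte-order mark."""
    # utf-8-sig transparently strips a leading BOM and is otherwise
    # identical to utf-8, so it is safe for BOM-less files too.
    with open(path, encoding="utf-8-sig") as f:
        return json.load(f)
```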

consistencies between the sections in config.ini

[AZURE]
api = realm/azureApiVersions.json
azureStructureFolder = realm/

[GOOGLE]
params = realm/googleParamsVersions.json

[GIT]
parameterStructureFolder = realm/

All the cloud providers should have consistent variables:

  1. structure
  2. other cloud specific items
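A consistent layout might reuse the same variable name for the structure folder across all providers; section and key names here are illustrative, not a final proposal:

```ini
[AZURE]
structure = realm/
api = realm/azureApiVersions.json

[GOOGLE]
structure = realm/
params = realm/googleParamsVersions.json

[GIT]
structure = realm/
```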
