
vals

vals is a tool for managing configuration values and secrets.

It supports various backends including:

  • Vault

  • AWS SSM Parameter Store

  • AWS Secrets Manager

  • AWS S3

  • GCP Secrets Manager

  • GCP KMS

  • Google Sheets

  • SOPS-encrypted files

  • Terraform State

  • 1Password

  • 1Password Connect

  • Doppler

  • CredHub (coming soon)

  • Pulumi State

  • Kubernetes

  • Conjur

  • HCP Vault Secrets

  • Bitwarden

  • HTTP JSON

  • Use vals eval -f refs.yaml to replace all the refs in the file to actual values and secrets.

  • Use vals exec -f env.yaml -- <COMMAND> to populate envvars and execute the command (see the sketch after this list).

  • Use vals env -f env.yaml to render envvars that are consumable by eval or a tool like direnv
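For example, the exec form above can be used like this (a minimal sketch, assuming a hypothetical env.yaml that references a Vault secret; the key and path are illustrative):

$ cat env.yaml
DATABASE_PASSWORD: ref+vault://secret/data/db#/password

$ vals exec -f env.yaml -- printenv DATABASE_PASSWORD  # runs the command with DATABASE_PASSWORD populated from Vault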


Usage

CLI

vals is a Helm-like configuration "Values" loader with support for various sources and merge strategies

Usage:
  vals [command]

Available Commands:
  eval          Evaluates a JSON/YAML document, replaces any template expressions in it, and prints the result
  exec          Populates the environment variables and executes the command
  env           Renders environment variables to be consumed by eval or a tool like direnv
  get           Evaluates a string value passed as the first argument, replaces any expressions in it, and prints the result
  ksdecode      Decode YAML document(s) by converting Secret resources' "data" to "stringData" for use with "vals eval"
  version       Print vals version

Use "vals [command] --help" for more information about a comman

vals has a collection of providers, each of which can be referred to with a URI scheme that looks like ref+<TYPE>.

For this example, use the Vault provider.

Let's start by writing some secret value to Vault:

$ vault kv put secret/foo mykey=myvalue

Now input the template of your YAML and refer to vals' Vault provider by using ref+vault in the URI scheme:

$ export VAULT_TOKEN=yourtoken VAULT_ADDR=http://127.0.0.1:8200/
$ echo "foo: ref+vault://secret/data/foo?proto=http#/mykey" | vals eval -f -

Voilà! vals replaces every reference to your secret value in Vault and produces output that looks like this:

foo: myvalue

This is equivalent to the output of the following shell script:

export VAULT_TOKEN=yourtoken VAULT_ADDR=http://127.0.0.1:8200/
cat <<EOF
foo: $(vault kv get -format json secret/foo | jq -r .data.data.mykey)
EOF

Saving the YAML content to x.vals.yaml and running vals eval -f x.vals.yaml produces output equivalent to the previous one:

foo: myvalue

Helm

Use value references as Helm Chart values, so that you can feed the helm template output to vals eval -f - to transform the refs into secrets.

$ helm template mysql-1.3.2.tgz --set mysqlPassword='ref+vault://secret/data/foo#/mykey' | vals ksdecode -o yaml -f - | tee manifests.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: release-name-mysql
    chart: mysql-1.3.2
    heritage: Tiller
    release: release-name
  name: release-name-mysql
  namespace: default
stringData:
  mysql-password: ref+vault://secret/data/foo#/mykey
  mysql-root-password: vZQmqdGw3z
type: Opaque

This manifest is safe to be committed into your version-control system (GitOps!) as it doesn't contain actual secrets.

When you finally deploy the manifests, run vals eval to replace all the refs to actual secrets:

$ cat manifests.yaml | vals eval -f - | tee all.yaml
apiVersion: v1
kind: Secret
metadata:
    labels:
        app: release-name-mysql
        chart: mysql-1.3.2
        heritage: Tiller
        release: release-name
    name: release-name-mysql
    namespace: default
stringData:
    mysql-password: myvalue
    mysql-root-password: 0A8V1SER9t
type: Opaque

Finally run kubectl apply to apply manifests:

$ kubectl apply -f all.yaml

This gives you a solid foundation for building a secure CD system, as you only need to allow access to a secrets store like Vault from the servers or containers that pull the safe manifests and run deployments.

In other words, you can safely avoid giving CI access to the secrets store.

Go

import "github.com/helmfile/vals"

secretsToCache := 256 // how many secrets to keep in LRU cache
runtime, err := vals.New(secretsToCache)
if err != nil {
  return nil, err
}

valsRendered, err := runtime.Eval(map[string]interface{}{
    "inline": map[string]interface{}{
        "foo": "ref+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey",
        "bar": map[string]interface{}{
            "baz": "ref+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey",
        },
    },
})
if err != nil {
    return nil, err
}

Now, valsRendered contains a map[string]interface{} representation of the below:

cat <<EOF
foo: $(vault read mykv/foo -o json | jq -r .mykey)
bar:
  baz: $(vault read mykv/foo -o json | jq -r .mykey)
EOF

Expression Syntax

vals finds and replaces every occurrence of ref+BACKEND://PATH[?PARAMS][#FRAGMENT][+] URI-like expression within the string at the value position with the retrieved secret value.

BACKEND is the identifier of one of the supported backends.

PATH is the backend-specific path for the secret to be retrieved.

PARAMS is a set of key-value pairs, where each key and value are joined by the "=" character and pairs are separated by "&" characters. It corresponds to the "query" component of the URI as defined in RFC 3986.

FRAGMENT is a path-like expression used to extract a single value within the secret. When a fragment is specified, vals parses the secret value denoted by PATH as a YAML or JSON object, traverses the object following the fragment, and uses the value at that path as the final secret value. It corresponds to the "fragment" component of the URI as defined in RFC 3986.

Finally, the optional trailing + is the explicit "end" of the expression. You usually don't need it: if omitted, vals treats anything after ref+ and before the newline or the end of the string as the expression to be evaluated. An explicit + is handy when you want to do simple string interpolation. That is, foo ref+SECRET1+ ref+SECRET2+ bar evaluates to foo SECRET1_VALUE SECRET2_VALUE bar.
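As a hedged illustration using the echo provider documented below (which simply echoes its path), the trailing + terminator makes it possible to interpolate several refs into one string:

$ echo 'greeting: hello ref+echo://world+ and ref+echo://universe+!' | vals eval -f -
greeting: hello world and universe!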

Although we mention the RFC for the sake of explanation, PARAMS and FRAGMENT might not be fully RFC-compliant as, under the hood, we use a simple regexp that seems to work for most use-cases.

The regexp is defined as DefaultRefRegexp in our code base.

Please see the relevant unit test cases for exactly which patterns are supposed to work with vals.

Supported Backends

Please see pkg/providers for the implementations of all the providers. The package names correspond to the URI schemes.

Vault

  • ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&token_file=PATH/TO/FILE&token_env=VAULT_TOKEN&namespace=VAULT_NAMESPACE]#/fieldkey
  • ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&auth_method=approle&role_id=ce5e571a-f7d4-4c73-93dd-fd6922119839&secret_id=5c9194b9-585e-4539-a865-f45604bd6f56]#/fieldkey
  • ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&auth_method=kubernetes&role_id=K8S-ROLE]#/fieldkey
  • address defaults to the value of the VAULT_ADDR envvar.
  • namespace defaults to the value of the VAULT_NAMESPACE envvar.
  • auth_method defaults to token and can also be set via the VAULT_AUTH_METHOD envvar.
  • role_id defaults to the value of the VAULT_ROLE_ID envvar.
  • secret_id defaults to the value of the VAULT_SECRET_ID envvar.
  • version is the specific version of the secret to be obtained. Used when you want to get a previous content of the secret.

Authentication

The auth_method parameter or the VAULT_AUTH_METHOD envvar configures how vals authenticates to HashiCorp Vault. Currently, only these options are supported:

  • approle: requires you to pass a role_id together with a secret_id.
  • token: you just need to create and pass a VAULT_TOKEN. If VAULT_TOKEN isn't set, the token can be read from the file referenced by the VAULT_TOKEN_FILE envvar or from the ~/.vault-token file.
  • kubernetes: if you're running inside a Kubernetes cluster, you can use this option. It requires you to configure a policy, a Kubernetes role, a service account, and a JWT token. The login path can also be set using the environment variable VAULT_KUBERNETES_MOUNT_POINT (default is /kubernetes). You must also set role_id or the VAULT_ROLE_ID envvar to the Kubernetes role.

Examples:

  • ref+vault://mykv/foo?address=https://vault1.example.com:8200#/bar reads the value for the field bar in the kv foo on Vault listening on https://vault1.example.com with the Vault token read from the envvar VAULT_TOKEN, or the file ~/.vault-token when the envvar is not set
  • ref+vault://mykv/foo?token_env=VAULT_TOKEN_VAULT1&namespace=ns1&address=https://vault1.example.com:8200#/bar reads the value for the field bar from namespace ns1 in the kv foo on Vault listening on https://vault1.example.com with the Vault token read from the envvar VAULT_TOKEN_VAULT1
  • ref+vault://mykv/foo?token_file=~/.vault_token_vault1&address=https://vault1.example.com:8200#/bar reads the value for the field bar in the kv foo on Vault listening on https://vault1.example.com with the Vault token read from the file ~/.vault_token_vault1
  • ref+vault://mykv/foo?role_id=my-kube-role#/bar using the Kubernetes role to log in to Vault

AWS

There are four providers for AWS:

  • SSM Parameter Store
  • Secrets Manager
  • S3
  • KMS

All of these providers support specifying the AWS region and profile via envvars or options:

  • AWS profile can be specified via an option profile=AWS_PROFILE_NAME or envvar AWS_PROFILE
  • AWS region can be specified via an option region=AWS_REGION_NAME or envvar AWS_DEFAULT_REGION

AWS SSM Parameter Store

  • ref+awsssm://PATH/TO/PARAM[?region=REGION&role_arn=ASSUMED_ROLE_ARN]
  • ref+awsssm://PREFIX/TO/PARAMS[?region=REGION&role_arn=ASSUMED_ROLE_ARN&mode=MODE&version=VERSION]#/PATH/TO/PARAM

The first form results in a GetParameter call, and the reference is replaced with the value of the parameter.

The second form is handy but fairly complex.

  • If mode is not set, vals uses GetParametersByPath(/PREFIX/TO/PARAMS) and caches the result per prefix, rather than per single path, to reduce the number of API calls
  • If mode is singleparam, vals uses GetParameter to obtain the parameter value for the key /PREFIX/TO/PARAMS, parses the value as a YAML hash, and extracts the value at the YAML path PATH.TO.PARAM.
    • When version is set, vals uses GetParameterHistoryPages instead of GetParameter.

For the second form, you can optionally specify recursive=true to enable the recursive option of the GetParametersByPath API.

Let's say you had a number of parameters like:

NAME        VALUE
/foo/bar    {"BAR":"VALUE"}
/foo/bar/a  A
/foo/bar/b  B

  • ref+awsssm://foo/bar and ref+awsssm://foo#/bar result in {"BAR":"VALUE"}
  • ref+awsssm://foo/bar/a, ref+awsssm://foo/bar?#/a, and ref+awsssm://foo?recursive=true#/bar/a result in A
  • ref+awsssm://foo/bar?mode=singleparam#/BAR results in VALUE.

On the other hand,

  • ref+awsssm://foo/bar#/BAR fails because /foo/bar evaluates to {"a":"A","b":"B"}.
  • ref+awsssm://foo?recursive=true#/bar fails because /foo?recursive=true internally evaluates to {"foo":{"a":"A","b":"B"}}
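To make the first form concrete, here is a hedged end-to-end sketch; it assumes AWS credentials are configured and uses the AWS CLI to create one of the example parameters above:

$ aws ssm put-parameter --name /foo/bar/a --type String --value A --region us-east-1
$ echo 'a: ref+awsssm://foo/bar/a?region=us-east-1' | vals eval -f -
a: A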

AWS Secrets Manager

  • ref+awssecrets://PATH/TO/SECRET[?region=REGION&role_arn=ASSUMED_ROLE_ARN&version_stage=STAGE&version_id=ID]
  • ref+awssecrets://PATH/TO/SECRET[?region=REGION&role_arn=ASSUMED_ROLE_ARN&version_stage=STAGE&version_id=ID]#/yaml_or_json_key/in/secret
  • ref+awssecrets://ACCOUNT:ARN:secret:/PATH/TO/PARAM[?region=REGION&role_arn=ASSUMED_ROLE_ARN]

The third form allows you to reference a secret in another AWS account (if your cross-account secret permissions are configured).

Examples:

  • ref+awssecrets://myteam/mykey
  • ref+awssecrets://myteam/mydoc#/foo/bar
  • ref+awssecrets://myteam/mykey?region=us-west-2
  • ref+awssecrets://arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:/myteam/mydoc/?region=ap-southeast-2#/secret/key
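A hedged end-to-end sketch, assuming AWS credentials for the target account; the secret name and JSON content are illustrative:

$ aws secretsmanager create-secret --name myteam/mydoc --secret-string '{"foo":{"bar":"BAR"}}'
$ echo 'v: ref+awssecrets://myteam/mydoc#/foo/bar' | vals eval -f -
v: BAR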

AWS S3

  • ref+s3://BUCKET/KEY/OF/OBJECT[?region=REGION&profile=AWS_PROFILE&role_arn=ASSUMED_ROLE_ARN&version_id=ID]
  • ref+s3://BUCKET/KEY/OF/OBJECT[?region=REGION&profile=AWS_PROFILE&role_arn=ASSUMED_ROLE_ARN&version_id=ID]#/yaml_or_json_key/in/secret

Examples:

  • ref+s3://mybucket/mykey
  • ref+s3://mybucket/myjsonobj#/foo/bar
  • ref+s3://mybucket/myyamlobj#/foo/bar
  • ref+s3://mybucket/mykey?region=us-west-2
  • ref+s3://mybucket/mykey?profile=prod

AWS KMS

  • ref+awskms://BASE64CIPHERTEXT[?region=REGION&profile=AWS_PROFILE&role_arn=ASSUMED_ROLE_ARN&alg=ENCRYPTION_ALGORITHM&key=KEY_ID&context=URL_ENCODED_JSON]
  • ref+awskms://BASE64CIPHERTEXT[?region=REGION&profile=AWS_PROFILE&role_arn=ASSUMED_ROLE_ARN&alg=ENCRYPTION_ALGORITHM&key=KEY_ID&context=URL_ENCODED_JSON]#/yaml_or_json_key/in/secret

Decrypts the URL-safe base64-encoded ciphertext using AWS KMS. Note that URL-safe base64 encoding is the same as "traditional" base64 encoding, except it uses _ and - in place of / and +, respectively. For example, to get a URL-safe base64-encoded ciphertext using the AWS CLI, you might run

aws kms encrypt \
  --key-id alias/example \
  --plaintext $(echo -n "hello, world" | base64 -w0) \
  --query CiphertextBlob \
  --output text |
  tr '/+' '_-'

Valid values for alg include:

  • SYMMETRIC_DEFAULT (the default)
  • RSAES_OAEP_SHA_1
  • RSAES_OAEP_SHA_256

Valid value formats for key include:

  • A key id 1234abcd-12ab-34cd-56ef-1234567890ab
  • A URL-encoded key id ARN: arn%3Aaws%3Akms%3Aus-east-2%3A111122223333%3Akey%2F1234abcd-12ab-34cd-56ef-1234567890ab
  • A URL-encoded key alias: alias%2FExampleAlias
  • A URL-encoded key alias ARN: arn%3Aaws%3Akms%3Aus-east-2%3A111122223333%3Aalias%2FExampleAlias

For ciphertext encrypted with a symmetric key, the key field may be omitted. For ciphertext encrypted with a key in your own account, a plain key id or alias can be used. If the encryption key is from another AWS account, you must use the key or alias ARN.

Use the context parameter to optionally specify the encryption context that was used when encrypting the ciphertext. Format it by URL-encoding the JSON representation of the encryption context. For example, if the encryption context is {"foo":"bar","hello":"world"}, then you would represent that as context=%7B%22foo%22%3A%22bar%22%2C%22hello%22%3A%22world%22%7D.

Examples:

  • ref+awskms://AQICAHhy_i8hQoGLOE46PVJyinH...WwHKT0i3H0znHRHwfyC7AGZ8ek=
  • ref+awskms://AQICAHhy...nHRHwfyC7AGZ8ek=#/foo/bar
  • ref+awskms://AQICAHhy...WwHKT0i3AGZ8ek=?context=%7B%22foo%22%3A%22bar%22%2C%22hello%22%3A%22world%22%7D
  • ref+awskms://AQICAVJyinH...WwHKT0iC7AGZ8ek=?alg=RSAES_OAEP_SHA_1&key=alias%2FExampleAlias
  • ref+awskms://AQICA...fyC7AGZ8ek=?alg=RSAES_OAEP_SHA_256&key=arn%3Aaws%3Akms%3Aus-east-2%3A111122223333%3Akey%2F1234abcd-12ab-34cd-56ef-1234567890ab&context=%7B%22foo%22%3A%22bar%22%2C%22hello%22%3A%22world%22%7D

Google GCS

  • ref+gcs://BUCKET/KEY/OF/OBJECT[?generation=ID]
  • ref+gcs://BUCKET/KEY/OF/OBJECT[?generation=ID]#/yaml_or_json_key/in/secret

Examples:

  • ref+gcs://mybucket/mykey
  • ref+gcs://mybucket/myjsonobj#/foo/bar
  • ref+gcs://mybucket/myyamlobj#/foo/bar
  • ref+gcs://mybucket/mykey?generation=1639567476974625

GCP Secrets Manager

  • ref+gcpsecrets://PROJECT/SECRET[?version=VERSION]
  • ref+gcpsecrets://PROJECT/SECRET[?version=VERSION]#/yaml_or_json_key/in/secret
  • ref+gcpsecrets://PROJECT/SECRET[?version=VERSION][&fallback=valuewhenkeyisnotfound][&optional=true][&trim_nl=true]#/yaml_or_json_key/in/secret

Examples:

  • ref+gcpsecrets://myproject/mysecret
  • ref+gcpsecrets://myproject/mysecret?version=3
  • ref+gcpsecrets://myproject/mysecret?version=3#/yaml_or_json_key/in/secret

NOTE: Got an error like expand gcpsecrets://project/secret-name?version=1: failed to get secret: rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.?

In some cases, such as when you need to use alternative credentials or a different project, you'll likely need to set the GOOGLE_APPLICATION_CREDENTIALS and/or GCP_PROJECT envvars.
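A hedged end-to-end sketch, assuming gcloud is authenticated and the Secret Manager API is enabled; the project and secret names are illustrative:

$ echo -n "mysecretvalue" | gcloud secrets create mysecret --project myproject --data-file=-
$ echo 'v: ref+gcpsecrets://myproject/mysecret' | vals eval -f -
v: mysecretvalue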

GCP KMS

  • ref+gkms://BASE64CIPHERTEXT?project=myproject&location=global&keyring=mykeyring&crypto_key=mykey
  • ref+gkms://BASE64CIPHERTEXT?project=myproject&location=global&keyring=mykeyring&crypto_key=mykey#/yaml_or_json_key/in/secret

Decrypts the URL-safe base64-encoded ciphertext using GCP KMS. Note that URL-safe base64 encoding is the same as "traditional" base64 encoding, except it uses _ and - in place of / and +, respectively. For example, to get a URL-safe base64-encoded ciphertext using the GCP CLI, you might run

echo test | gcloud kms encrypt \
  --project myproject \
  --location global \
  --keyring mykeyring \
  --key mykey \
  --plaintext-file - \
  --ciphertext-file - \
  | base64 -w0 \
  | tr '/+' '_-'

Google Sheets

  • ref+googlesheets://SPREADSHEET_ID?credentials_file=credentials.json#/KEY

Examples:

  • ref+googlesheets://foobarbaz?credentials_file=credentials.json#/MYENV1 authenticates to the Google Sheets API using the credentials.json file, retrieves KVs from the sheet with the spreadsheet ID "foobarbaz", and returns the value for the key "MYENV1". credentials.json can be either a service account JSON key file or a client credentials file. In the latter case, vals initiates a WebAuth flow and prints a URL; you open the URL in a browser, authenticate yourself, copy the resulting auth code, and paste it back into vals.

Terraform (tfstate)

  • ref+tfstate://relative/path/to/some.tfstate/RESOURCE_NAME[?aws_profile=AWS_PROFILE]
  • ref+tfstate:///absolute/path/to/some.tfstate/RESOURCE_NAME[?aws_profile=AWS_PROFILE]
  • ref+tfstate://relative/path/to/some.tfstate/RESOURCE_NAME[?az_subscription_id=AZ_SUBSCRIPTION_ID]
  • ref+tfstate:///absolute/path/to/some.tfstate/RESOURCE_NAME[?az_subscription_id=AZ_SUBSCRIPTION_ID]

Options:

  • aws_profile: if non-empty, vals tries to let tfstate-lookup use the specified AWS profile defined in the well-known ~/.aws/credentials file.
  • az_subscription_id: if non-empty, vals tries to let tfstate-lookup use the specified Azure Subscription ID.

Examples:

  • ref+tfstate://path/to/some.tfstate/aws_vpc.main.id
  • ref+tfstate://path/to/some.tfstate/module.mymodule.aws_vpc.main.id
  • ref+tfstate://path/to/some.tfstate/output.OUTPUT_NAME
  • ref+tfstate://path/to/some.tfstate/data.thetype.name.foo.bar

When you're using terraform-aws-vpc to define a module "vpc" and you want to grab the first VPC ARN created by the module:

$ tfstate-lookup -s ./terraform.tfstate module.vpc.aws_vpc.this[0].arn
arn:aws:ec2:us-east-2:ACCOUNT_ID:vpc/vpc-0cb48a12e4df7ad4c

$ echo 'foo: ref+tfstate://terraform.tfstate/module.vpc.aws_vpc.this[0].arn' | vals eval -f -
foo: arn:aws:ec2:us-east-2:ACCOUNT_ID:vpc/vpc-0cb48a12e4df7ad4c

You can also grab a Terraform output by using output.OUTPUT_NAME like:

$ tfstate-lookup -s ./terraform.tfstate output.mystack_apply

which is equivalent to the following input for vals:

$ echo 'foo: ref+tfstate://terraform.tfstate/output.mystack_apply' | vals eval -f -

Remote backends like S3, GCS, and AzureRM Blob Storage are also supported. When a remote backend is used in your Terraform workspace, there should be a local file at .terraform/terraform.tfstate that contains the reference to the backend:

{
    "version": 3,
    "serial": 1,
    "lineage": "f1ad69de-68b8-9fe5-7e87-0cb70d8572c8",
    "backend": {
        "type": "s3",
        "config": {
            "access_key": null,
            "acl": null,
            "assume_role_policy": null,
            "bucket": "yourbucketnname",

Just specify the path to that file, so that vals is able to transparently make the remote state contents available for you.

Terraform in GCS bucket (tfstategs)

  • ref+tfstategs://bucket/path/to/some.tfstate/RESOURCE_NAME

Examples:

  • ref+tfstategs://bucket/path/to/some.tfstate/google_compute_disk.instance.id

It allows you to use Terraform state stored in a GCS bucket by referring to it with a direct URL. You can try reading the state with the command:

$ tfstate-lookup -s gs://bucket-with-terraform-state/terraform.tfstate google_compute_disk.instance.source_image_id
5449927740744213880

which is equivalent to the following input for vals:

$ echo 'foo: ref+tfstategs://bucket-with-terraform-state/terraform.tfstate/google_compute_disk.instance.source_image_id' | vals eval -f -

Terraform in S3 bucket (tfstates3)

  • ref+tfstates3://bucket/path/to/some.tfstate/RESOURCE_NAME

Examples:

  • ref+tfstates3://bucket/path/to/some.tfstate/aws_vpc.main.id

It allows you to use Terraform state stored in an AWS S3 bucket by referring to it with a direct URL. You can try reading the state with the command:

$ tfstate-lookup -s s3://bucket-with-terraform-state/terraform.tfstate module.vpc.aws_vpc.this[0].arn
arn:aws:ec2:us-east-2:ACCOUNT_ID:vpc/vpc-0cb48a12e4df7ad4c

which is equivalent to the following input for vals:

$ echo 'foo: ref+tfstates3://bucket-with-terraform-state/terraform.tfstate/module.vpc.aws_vpc.this[0].arn' | vals eval -f -

Terraform in AzureRM Blob storage (tfstateazurerm)

  • ref+tfstateazurerm://{resource_group_name}/{storage_account_name}/{container_name}/{blob_name}.tfstate/RESOURCE_NAME[?az_subscription_id=SUBSCRIPTION_ID]

Examples:

  • ref+tfstateazurerm://my_rg/my_storage_account/terraform-backend/unique.terraform.tfstate/output.virtual_network.name
  • ref+tfstateazurerm://my_rg/my_storage_account/terraform-backend/unique.terraform.tfstate/output.virtual_network.name?az_subscription_id=abcd-efgh-ijlk-mnop

It allows you to use Terraform state stored in Azure Blob Storage, given the resource group, storage account, container name, and blob name. You can try reading the state with the command:

$ tfstate-lookup -s azurerm://my_rg/my_storage_account/terraform-backend/unique.terraform.tfstate output.virtual_network.name

which is equivalent to the following input for vals:

$ echo 'foo: ref+tfstateazurerm://my_rg/my_storage_account/terraform-backend/unique.terraform.tfstate/output.virtual_network.name' | vals eval -f -

Terraform in Terraform Cloud / Terraform Enterprise (tfstateremote)

  • ref+tfstateremote://app.terraform.io/{org}/{myworkspace}/RESOURCE_NAME

Examples:

  • ref+tfstateremote://app.terraform.io/myorg/myworkspace/output.virtual_network.name

It allows you to use Terraform state stored in Terraform Cloud / Terraform Enterprise, given the organization and the workspace. You can try reading the state with the following command (with the TFE_TOKEN variable exported):

$ tfstate-lookup -s remote://app.terraform.io/myorg/myworkspace output.virtual_network.name

which is equivalent to the following input for vals:

$ echo 'foo: ref+tfstateremote://app.terraform.io/myorg/myworkspace/output.virtual_network.name' | vals eval -f -

SOPS

  • The whole content of a SOPS-encrypted file: ref+sops://base64_data_or_path_to_file?key_type=[filepath|base64]&format=[binary|dotenv|yaml]
  • The value for the specific path in an encrypted YAML/JSON document: ref+sops://base64_data_or_path_to_file#/json_or_yaml_key/in/the_encrypted_doc

Note: When using an inline base64-encoded sops "file", be sure to use URL-safe Base64 encoding. URL-safe base64 encoding is the same as "traditional" base64 encoding, except it uses _ and - in place of / and +, respectively. For example, you might use the following command: sops -e <(echo "foo") | base64 -w0 | tr '/+' '_-'

Examples:

  • ref+sops://path/to/file reads path/to/file as binary input
  • ref+sops://<base64>?key_type=base64 reads <base64> as the base64-encoded data to be decrypted by sops as binary
  • ref+sops://path/to/file#/foo/bar reads path/to/file as a yaml file and returns the value at foo.bar.
  • ref+sops://path/to/file?format=json#/foo/bar reads path/to/file as a json file and returns the value at foo.bar.
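A hedged end-to-end sketch, assuming sops is already configured with an encryption key (for example via a .sops.yaml creation rule); the file names are illustrative:

$ cat plain.yaml
foo:
  bar: mysecret

$ sops -e plain.yaml > secrets.enc.yaml
$ echo 'v: ref+sops://secrets.enc.yaml#/foo/bar' | vals eval -f -
v: mysecret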

Echo

The Echo provider simply echoes the given string and is intended for testing purposes. Please read the original proposal to understand why we might need this.

  • ref+echo://KEY1/KEY2/VALUE[#/path/to/the/value]

Examples:

  • ref+echo://foo/bar generates foo/bar
  • ref+echo://foo/bar/baz#/foo/bar generates baz. This works because the host and path parts foo/bar/baz generate the object {"foo":{"bar":"baz"}}, and the fragment part #/foo/bar digs into that object to obtain the value at $.foo.bar.

File

File provider reads a local text file, or the value for the specific path in a YAML/JSON file.

  • ref+file://relative/path/to/file[#/path/to/the/value]
  • ref+file:///absolute/path/to/file[#/path/to/the/value]

Examples:

  • ref+file://foo/bar loads the file at foo/bar
  • ref+file:///home/foo/bar loads the file at /home/foo/bar
  • ref+file://foo/bar?encode=base64 loads the file at foo/bar and encodes its content to a base64 string
  • ref+file://some.yaml#/foo/bar loads the YAML file at some.yaml and reads the value for the path $.foo.bar. Let's say some.yaml contains {"foo":{"bar":"BAR"}}, key1: ref+file://some.yaml#/foo/bar results in key1: BAR.
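A minimal sketch of the last example; the file name some.yaml is illustrative:

$ echo '{"foo":{"bar":"BAR"}}' > some.yaml
$ echo 'key1: ref+file://some.yaml#/foo/bar' | vals eval -f -
key1: BAR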

Azure Key Vault

Retrieve secrets from Azure Key Vault. Path is used to specify the vault and secret name. Optionally a specific secret version can be retrieved.

  • ref+azurekeyvault://VAULT-NAME/SECRET-NAME[/VERSION]

VAULT-NAME is either a simple name if operating in AzureCloud (vault.azure.net) or the full endpoint dns name when operating against non-default azure clouds (US Gov Cloud, China Cloud, German Cloud). Examples:

  • ref+azurekeyvault://my-vault/secret-a
  • ref+azurekeyvault://my-vault/secret-a/ba4f196b15f644cd9e949896a21bab0d
  • ref+azurekeyvault://gov-cloud-test.vault.usgovcloudapi.net/secret-b
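A hedged sketch, assuming az login has been run and you have permission to write and read secrets in the vault; the vault name, secret name, and value are illustrative:

$ az keyvault secret set --vault-name my-vault --name secret-a --value s3cr3t
$ echo 'v: ref+azurekeyvault://my-vault/secret-a' | vals eval -f -
v: s3cr3t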

Authentication

vals acquires Azure credentials through the Azure CLI or from environment variables. The easiest way is to run az login; vals can then acquire the current credentials from az without further setup.

Other authentication methods require information to be passed in environment variables. See Azure SDK docs and auth.go for the full list of supported environment variables.

For example, if using client credentials the required env vars are AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID and possibly AZURE_ENVIRONMENT in case of accessing an Azure GovCloud.

The order in which authentication methods are checked is:

  1. Client credentials
  2. Client certificate
  3. Username/Password
  4. Azure CLI or Managed Identity (set the environment variable AZURE_USE_MSI=true to enable MSI)

EnvSubst

Environment variables substitution.

  • ref+envsubst://$VAR1

Examples:

  • ref+envsubst://$VAR1 loads the environment variable $VAR1
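A minimal sketch:

$ export VAR1=hello
$ echo 'greeting: ref+envsubst://$VAR1' | vals eval -f -
greeting: hello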

GitLab Secrets

For this provider to work you require an access token exported as the environment variable GITLAB_TOKEN.

  • ref+gitlab://my-gitlab-server.com/project_id/secret_name?[ssl_verify=false&scheme=https&api_version=v4]

Examples:

  • ref+gitlab://gitlab.com/11111/password
  • ref+gitlab://my-gitlab.org/11111/password?ssl_verify=true&scheme=https

1Password

For this provider to work a working service account token is required. The following env var has to be configured:

  • OP_SERVICE_ACCOUNT_TOKEN

1Password is organized in vaults and items. An item can have multiple fields with or without a section. Labels can be set on fields and sections. Vaults, items, sections and labels can be accessed by ID or by label/name (and IDs and labels can be mixed and matched in one URL).

If a section does not have a label, the field is only accessible via the section ID. This does not hold true for some default fields, which may have no section at all (e.g. username and password for a Login item).

See Secret reference syntax for more information.

Caution: vals-expressions are parsed as URIs. For the 1Password provider the host component of the URI identifies the vault. Therefore vaults containing certain characters not allowed in the host component (e.g. whitespaces, see RFC-3986 for details) can only be accessed by ID.

Examples:

  • ref+op://VAULT_NAME/ITEM_NAME/FIELD_NAME
  • ref+op://VAULT_ID/ITEM_NAME/FIELD_NAME
  • ref+op://VAULT_NAME/ITEM_NAME/[SECTION_NAME/]FIELD_NAME

1Password Connect

For this provider to work you require a working and accessible 1Password connect server. The following env vars have to be configured:

  • OP_CONNECT_HOST
  • OP_CONNECT_TOKEN

1Password is organized in vaults and items. An item can have multiple fields with or without a section. Labels can be set on fields and sections. Vaults, items, sections and labels can be accessed by ID or by label/name (and IDs and labels can be mixed and matched in one URL).

If a section does not have a label, the field is only accessible via the section ID. This does not hold true for some default fields, which may have no section at all (e.g. username and password for a Login item).

Caution: vals-expressions are parsed as URIs. For the 1Password connect provider the host component of the URI identifies the vault (by ID or name). Therefore vaults containing certain characters not allowed in the host component (e.g. whitespaces, see RFC-3986 for details) can only be accessed by ID.

Examples:

  • ref+onepasswordconnect://VAULT_ID/ITEM_ID#/[SECTION_ID.]FIELD_ID
  • ref+onepasswordconnect://VAULT_LABEL/ITEM_LABEL#/[SECTION_LABEL.]FIELD_LABEL
  • ref+onepasswordconnect://VAULT_LABEL/ITEM_ID#/[SECTION_LABEL.]FIELD_ID

Doppler

  • ref+doppler://PROJECT/ENVIRONMENT/SECRET_KEY[?token=dp.XX.XXXXXX&address=https://api.doppler.com&no_verify_tls=false&include_doppler_defaults=false]
  • PROJECT can be absent if the Token is a Service Token for that project. It can be set via DOPPLER_PROJECT envvar. See Doppler docs for more information.
  • ENVIRONMENT (aka: "Config") can be absent if the Token is a Service Token for that project. It can be set via DOPPLER_ENVIRONMENT envvar. See Doppler docs for more information.
  • SECRET_KEY can be absent and it will fetch all secrets for the project/environment.
  • token defaults to the value of the DOPPLER_TOKEN envvar.
  • address defaults to the value of the DOPPLER_API_ADDR envvar, if unset: https://api.doppler.com.
  • no_verify_tls defaults to false.
  • include_doppler_defaults defaults to false, if set to true it will include the Doppler defaults for the project/environment (DOPPLER_ENVIRONMENT, DOPPLER_PROJECT and DOPPLER_CONFIG). It only works when SECRET_KEY is absent.

Examples:

(DOPPLER_TOKEN set as environment variable)

  • ref+doppler://// fetches all secrets for the project/environment when using a Service Token.
  • ref+doppler:////FOO fetches the value of secret with name FOO for the project/environment when using a Service Token.
  • ref+doppler://#FOO fetches the value of secret with name FOO for the project/environment when using a Service Token.
  • ref+doppler://MyProject/development/DB_PASSWORD fetches the value of secret with name DB_PASSWORD for the project named MyProject and environment named development.
  • ref+doppler://MyProject/development/#DB_PASSWORD fetches the value of secret with name DB_PASSWORD for the project named MyProject and environment named development.

Pulumi State

Obtain value in state pulled from Pulumi Cloud REST API:

  • ref+pulumistateapi://RESOURCE_TYPE/RESOURCE_LOGICAL_NAME/ATTRIBUTE_TYPE/ATTRIBUTE_KEY_PATH?project=PROJECT&stack=STACK
  • RESOURCE_TYPE is a Pulumi resource type of the form <package>:<module>:<type>, where forward slashes (/) are replaced by a double underscore (__) and colons (:) are replaced by a single underscore (_). For example, aws:s3:Bucket would be encoded as aws_s3_Bucket and kubernetes:storage.k8s.io/v1:StorageClass would be encoded as kubernetes_storage.k8s.io__v1_StorageClass.
  • RESOURCE_LOGICAL_NAME is the logical name of the resource in the Pulumi program.
  • ATTRIBUTE_TYPE is either outputs or inputs.
  • ATTRIBUTE_KEY_PATH is a GJSON expression that selects the desired attribute from the resource's inputs or outputs per the chosen ATTRIBUTE_TYPE value. You must encode any characters that would otherwise not comply with URI syntax, for example # becomes %23.
  • project is the Pulumi project name. May also be provided via the PULUMI_PROJECT environment variable.
  • stack is the Pulumi stack name. May also be provided via the PULUMI_STACK environment variable.

Environment variables:

  • PULUMI_API_ENDPOINT_URL is the Pulumi API endpoint URL. Defaults to https://api.pulumi.com. You may also provide this as the pulumi_api_endpoint_url query parameter.
  • PULUMI_ACCESS_TOKEN is the Pulumi access token to use for authentication.
  • PULUMI_ORGANIZATION is the Pulumi organization to use for authentication. You may also provide this as an organization query parameter.
  • PULUMI_PROJECT is the Pulumi project. You may also provide this as a project query parameter.
  • PULUMI_STACK is the Pulumi stack. You may also provide this as a stack query parameter.

Examples:

  • ref+pulumistateapi://aws-native_s3_Bucket/my-bucket/outputs/bucketName?project=my-project&stack=my-stack
  • ref+pulumistateapi://aws-native_s3_Bucket/my-bucket/outputs/tags.%23(key==SomeKey).value?project=my-project&stack=my-stack
  • ref+pulumistateapi://kubernetes_storage.k8s.io__v1_StorageClass/gp2-encrypted/inputs/metadata.name?project=my-project&stack=my-stack

Kubernetes

Fetch value from Kubernetes:

  • ref+k8s://API_VERSION/KIND/NAMESPACE/NAME/KEY[?kubeConfigPath=<path_to_kubeconfig>&kubeContext=<kubernetes context name>]

Authentication to the Kubernetes cluster is done by referencing the local kubeconfig file. The path to the kubeconfig can be specified as a URI parameter or read from the KUBECONFIG environment variable; failing that, the provider will attempt to read $HOME/.kube/config. The Kubernetes context can be specified as a URI parameter.

Environment variables:

  • KUBECONFIG contains the path to the Kubeconfig that will be used to fetch the secret.

Examples:

  • ref+k8s://v1/Secret/mynamespace/mysecret/foo
  • ref+k8s://v1/ConfigMap/mynamespace/myconfigmap/foo
  • ref+k8s://v1/Secret/mynamespace/mysecret/bar?kubeConfigPath=/home/user/kubeconfig
  • secretref+k8s://v1/Secret/mynamespace/mysecret/baz
  • secretref+k8s://v1/Secret/mynamespace/mysecret/baz?kubeContext=minikube

NOTE: This provider only supports kind "Secret" or "ConfigMap" in apiVersion "v1" at this time.
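A hedged sketch, assuming kubectl access to the cluster in the current context; the namespace, secret name, and value are illustrative:

$ kubectl create secret generic mysecret -n mynamespace --from-literal=foo=bar
$ echo 'v: ref+k8s://v1/Secret/mynamespace/mysecret/foo' | vals eval -f -
v: bar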

Conjur

This provider retrieves the value of secrets stored in Conjur. It's based on the https://github.com/cyberark/conjur-api-go lib.

The following env vars have to be configured:

  • CONJUR_APPLIANCE_URL

  • CONJUR_ACCOUNT

  • CONJUR_AUTHN_LOGIN

  • CONJUR_AUTHN_API_KEY

  • ref+conjur://PATH/TO/VARIABLE[?address=CONJUR_APPLIANCE_URL&account=CONJUR_ACCOUNT&login=CONJUR_AUTHN_LOGIN&apikey=CONJUR_AUTHN_API_KEY]/CONJUR_SECRET_ID

Example:

  • ref+conjur://branch/variable_name

HCP Vault Secrets

This provider retrieves the value of secrets stored in HCP Vault Secrets.

It is based on the HashiCorp Cloud Platform Go SDK lib.

Environment variables:

  • HCP_CLIENT_ID: The service principal Client ID for the HashiCorp Cloud Platform.
  • HCP_CLIENT_SECRET: The service principal Client Secret for the HashiCorp Cloud Platform.
  • HCP_ORGANIZATION_ID: (Optional) The organization ID for the HashiCorp Cloud Platform. It can be omitted. If "Organization Name" is set, it will be used to fetch the organization ID, otherwise the organization ID will be set to the first organization ID found.
  • HCP_ORGANIZATION_NAME: (Optional) The organization name for the HashiCorp Cloud Platform to fetch the organization ID.
  • HCP_PROJECT_ID: (Optional) The project ID for the HashiCorp Cloud Platform. It can be omitted. If "Project Name" is set, it will be used to fetch the project ID, otherwise the project ID will be set to the first project ID found in the provided organization.
  • HCP_PROJECT_NAME: (Optional) The project name for the HashiCorp Cloud Platform to fetch the project ID.

Parameters:

Parameters are optional and can be passed as query parameters in the URI, taking precedence over environment variables.

  • client_id: The service principal Client ID for the HashiCorp Cloud Platform.
  • client_secret: The service principal Client Secret for the HashiCorp Cloud Platform.
  • organization_id: The organization ID for the HashiCorp Cloud Platform. It can be omitted. If "Organization Name" is set, it will be used to fetch the organization ID, otherwise the organization ID will be set to the first organization ID found.
  • organization_name: The organization name for the HashiCorp Cloud Platform to fetch the organization ID.
  • project_id: The project ID for the HashiCorp Cloud Platform. It can be omitted. If "Project Name" is set, it will be used to fetch the project ID, otherwise the project ID will be set to the first project ID found in the provided organization.
  • project_name: The project name for the HashiCorp Cloud Platform to fetch the project ID.
  • version: The version of the secret to fetch. If omitted or it fails to parse, the latest version is fetched.

Example:

ref+hcpvaultsecrets://APPLICATION_NAME/SECRET_NAME[?client_id=HCP_CLIENT_ID&client_secret=HCP_CLIENT_SECRET&organization_id=HCP_ORGANIZATION_ID&organization_name=HCP_ORGANIZATION_NAME&project_id=HCP_PROJECT_ID&project_name=HCP_PROJECT_NAME&version=2]

Bitwarden

This provider retrieves the secrets stored in Bitwarden. It uses the Bitwarden Vault-Management API that is included in the Bitwarden CLI by executing bw serve.

Environment variables:

  • BW_API_ADDR: the address of the Vault-Management API served by bw serve (used as the default for the address parameter below).

Parameters:

Parameters are optional and can be passed as query parameters in the URI, taking precedence over environment variables.

  • address defaults to the value of the BW_API_ADDR envvar.

Examples:

  • ref+bw://4d084b01-87e7-4411-8de9-2476ab9f3f48 gets the password of the item id
  • ref+bw://4d084b01-87e7-4411-8de9-2476ab9f3f48/password gets the password of the item id
  • ref+bw://4d084b01-87e7-4411-8de9-2476ab9f3f48/{username,password,uri,notes,item} gets username, password, uri, notes or the whole item of the given item id
  • ref+bw://4d084b01-87e7-4411-8de9-2476ab9f3f48/notes#/key1 gets the key1 from the yaml stored as note in the item
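A hedged sketch of the setup, since the provider talks to the local Vault-Management API exposed by the Bitwarden CLI; the port and item ID are illustrative, and the vault is assumed to be unlocked already (bw login / bw unlock):

$ bw serve --port 8087 &
$ export BW_API_ADDR=http://localhost:8087
$ echo 'pw: ref+bw://4d084b01-87e7-4411-8de9-2476ab9f3f48/password' | vals eval -f -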

HTTP JSON

This provider retrieves values stored in JSON hosted by an HTTP frontend.

This provider is built on top of jsonquery and xpath packages.

Given the diverse array of JSON structures that can be encountered, jsonquery with XPath is an effective way to handle this variability.

This provider requires an xpath to be provided.

Do not include the protocol scheme (http/https) in the ref. The provider defaults to https; http can be used by setting insecure=true (see below).

Examples:

Fetch string value

ref+httpjson://<domain>/<path>?[insecure=false&floatAsInt=false]#/<xpath>

Let's say you want to fetch the below JSON object from https://api.github.com/users/helmfile/repos:

[
    {
        "name": "chartify"
    },
    {
        "name": "go-yaml"
    }
]
# To get name="chartify" using https protocol you would use:
ref+httpjson://api.github.com/users/helmfile/repos#///*[1]/name

# To get name="go-yaml" using https protocol you would use:
ref+httpjson://api.github.com/users/helmfile/repos#///*[2]/name

# To get name="go-yaml" using http protocol you would use:
ref+httpjson://api.github.com/users/helmfile/repos?insecure=true#///*[2]/name

Fetch integer value

ref+httpjson://<domain>/<path>?[insecure=false&floatAsInt=false]#/<xpath>

Let's say you want to fetch the below JSON object from https://api.github.com/users/helmfile/repos:

[
    {
        "id": 251296379
    }
]
# Running the following will return: 2.51296379e+08
ref+httpjson://api.github.com/users/helmfile/repos#///*[1]/id

# Running the following will return: 251296379
ref+httpjson://api.github.com/users/helmfile/repos?floatAsInt=true#///*[1]/id

Advanced Usages

Discriminating config and secrets

vals has an advanced feature that helps you to do GitOps.

GitOps is a good practice that helps you to review how your change would affect the production environment.

To best leverage GitOps, it is important to remove dynamic aspects of your config before reviewing.

On the other hand, vals's primary purpose is to defer retrieval of values until the time of deployment, so that we won't accidentally git-commit secrets. The flip-side of this is, obviously, that you can't review the values themselves.

Using ref+<value uri> and secretref+<value uri> in combination with vals eval --exclude-secret helps here.

By using the secretref+<uri> notation, you tell vals that the value is a secret, while regular ref+<uri> instances are for config values.

myconfigvalue: ref+awsssm://myconfig/value
mysecretvalue: secretref+awssecrets://mysecret/value

To get the most out of GitOps by allowing you to review the content of ref+awsssm://myconfig/value only, run vals eval --exclude-secret to generate the following:

myconfigvalue: MYCONFIG_VALUE
mysecretvalue: secretref+awssecrets://mysecret/value

This is safe to be committed into git because, as you've told vals, awsssm://myconfig/value is a config value that can be shared publicly.
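A minimal sketch, assuming the two-line document above is saved as values.yaml (the file name is illustrative):

$ vals eval --exclude-secret -f values.yaml | tee reviewable.yaml
myconfigvalue: MYCONFIG_VALUE
mysecretvalue: secretref+awssecrets://mysecret/value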

Non-Goals

Complex String-Interpolation / Template Functions

In the early days of this project, the original author investigated whether it was a good idea to introduce a string-interpolation-like feature to vals:

foo: xx${{ref "ref+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey" }}
bar:
  baz: yy${{ref "ref+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey" }}

But the idea was abandoned because it seemed to push vals toward becoming a full-fledged YAML templating engine. What if some users started wanting to use vals for transforming values with functions? That's not the business of vals.

Instead, use vals solely for composing sets of values that are then input to another templating engine or data manipulation language like Jsonnet and CUE.

Note, though, that vals does support simple string-interpolation-like usage. See Expression Syntax for more information.

Merge

Merging YAMLs is out of the scope of vals. There're better alternatives like Jsonnet, Sprig, and CUE for the job.


vals's Issues

ref+vault fails with "no value found for key"

I'm having problems getting vault ref to work properly.

$ vault kv get -field=secretContent "v1/prod/kv/foo"
<redacted>
$ echo "foo: ref+vault://v1/prod/kv/foo#/secretContent" | ./main eval -f -            
expand vault://v1/prod/kv/foo#/secretContent: no value found for key secretContent

proposition: move cache one level up, introduce EvalRenderer struct

Hi!

  1. While I was working on the helmfile + vals prototype as a part of helmfile#881,
    I got the impression that the current interface of vals doesn't use caching effectively enough: on each new Eval() call the cache is recreated.
    So, what if we introduce an EvalRenderer object to the vals API so that each consecutive call of EvalRenderer.Eval() uses the existing cache?

  2. I see caching is implemented for each provider independently. What if we move caching one level up and decommission caching in the providers?

here is my prototype with proposed changes

[feature] Direct support of referencing tfstate files in s3 buckets

Right now one needs to reference to the local file containing the s3 reference

Remote backends like S3 is also supported. When a remote backend is used in your terraform workspace, there should be a local file at ./terraform/terraform.tfstate that contains the reference to the backend.

Since that file might live in different directories for different developers and might not be available at all in a CI/CD environment, it would be nice to be able to refer to the S3 backend directly from within the reference, as tfstate-lookup also supports direct lookup in S3 buckets, like:

tfstate-lookup -s s3://name-of-s3-bucket/path/to/the/tfstatefile aws_route53_record.foo_db_public_record_bar.name

So something like

echo 'foo: ref+s3tfstate://name-of-s3-bucket/path/to/the/tfstatefile#/aws_route53_record.foo_db_public_record_bar.name' | vals eval -f -

would be nice to have. Would it even be possible to add the aws profile that should be used for this? And have vals do the 'magic' of accessing it?

echo 'foo: ref+s3tfstate://name-of-s3-bucket/path/to/the/tfstatefile?profile=name-of-aws-profile#/aws_route53_record.foo_db_public_record_bar.name' | vals eval -f -

panic: runtime error: invalid memory address or nil pointer dereference

I've downloaded the latest release for both tfstate-lookup and val but running val causes a runtime error when running against state stored in an s3 bucket

tfstate-lookup -s .terraform/terraform.tfstate output.access_cert.value

Returns the correct value, authentication works fine with the bucket.

echo 'foo: ref+tfstate://.terraform/terraform.tfstate/output.access_cert.value' | vals eval -f -

Causes a crash:

(⎈ |ew1-test-01:default)➜  helm git:(master) ✗ echo 'foo: ref+tfstate://.terraform/terraform.tfstate/output.access_cert.value' | vals eval -f -
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1bf8a02]

goroutine 1 [running]:
github.com/fujiwara/tfstate-lookup/tfstate.(*TFState).output(0x0, 0xc000113e9d, 0x18, 0x1db3301, 0xc000627c20, 0x0)
        /home/runner/go/pkg/mod/github.com/fujiwara/[email protected]/tfstate/lookup.go:152 +0xb2
github.com/fujiwara/tfstate-lookup/tfstate.(*TFState).Lookup(0x0, 0xc000113e9d, 0x18, 0x2473fc0, 0xc0002e64c0, 0xc000113e9d)
        /home/runner/go/pkg/mod/github.com/fujiwara/[email protected]/tfstate/lookup.go:159 +0x2e4
github.com/variantdev/vals/pkg/providers/tfstate.(*provider).GetString(0x2d6f480, 0xc000113e80, 0x35, 0x0, 0x0, 0xc000113e00, 0x35)
        /home/runner/work/vals/vals/pkg/providers/tfstate/tfstate.go:33 +0x193
github.com/variantdev/vals.(*Runtime).Eval.func3(0xc0000da004, 0x3f, 0x43, 0xc000113e40, 0x8, 0x8)
        /home/runner/work/vals/vals/vals.go:237 +0xe76
github.com/variantdev/vals/pkg/expansion.(*ExpandRegexMatch).InString(0xc000531870, 0xc0000da000, 0x43, 0xc0003c0830, 0x10, 0x1009e13, 0x1de5760)
        /home/runner/work/vals/vals/pkg/expansion/expand_match.go:40 +0xe1
github.com/variantdev/vals/pkg/expansion.(*ExpandRegexMatch).InMap.func1(0xc0000da000, 0x43, 0xc0003c0830, 0xc0000e87e0, 0x8e00000001d27f20, 0xc0003c0860)
        /home/runner/work/vals/vals/pkg/expansion/expand_match.go:52 +0x42
github.com/variantdev/vals/pkg/expansion.ModifyStringValues(0x1d27f20, 0xc0003c0860, 0xc000531808, 0x1d27f20, 0xc0003c0860, 0xc0000c7500, 0x0)
        /home/runner/work/vals/vals/pkg/expansion/maputil.go:42 +0xe63
github.com/variantdev/vals/pkg/expansion.ModifyStringValues(0x1d9f8c0, 0xc0002d2b70, 0xc000531808, 0xc000531818, 0x100c468, 0x30, 0x1e2eb00)
        /home/runner/work/vals/vals/pkg/expansion/maputil.go:83 +0x2f6
github.com/variantdev/vals/pkg/expansion.(*ExpandRegexMatch).InMap(0xc000531870, 0xc0002d2b70, 0x1c50be5, 0x200, 0x0)
        /home/runner/work/vals/vals/pkg/expansion/expand_match.go:51 +0x66
github.com/variantdev/vals.(*Runtime).Eval(0xc0002d2c60, 0xc0002d2b70, 0xc0002d2c60, 0x0, 0x0)
        /home/runner/work/vals/vals/vals.go:286 +0xcd
github.com/variantdev/vals.Eval(0xc0002d2b70, 0xc0005319e8, 0x1, 0x1, 0x0, 0x0, 0xc000489418)
        /home/runner/work/vals/vals/vals.go:377 +0x8c
main.main()
        /home/runner/work/vals/vals/cmd/vals/main.go:76 +0x1430

Implement quiet flag

Hi,

I'm, currently try to integrate vals into my helm-secret project.

Currently, vals is spammy. There is no way to suppress messages like

https://github.com/variantdev/vals/blob/0c7d70b4c2f400f16f731ee441655908d45da15a/pkg/providers/sops/sops.go#L51

The debug function would pipe anything to stderr
https://github.com/variantdev/vals/blob/0c7d70b4c2f400f16f731ee441655908d45da15a/pkg/providers/sops/sops.go#L73-L75

I could call vals while stderr is redirect to dev null, but in case of an error, the error messages from vals are gone.

Feat: evaluate literal values

Hi,

it would be great, if literal values can be also evaluated:

$ printf 'ref+echo://42' | vals eval
42

Currently, values need to be wrapped into a JSON/YAML document. It's a bit messy to extract the value inside a shell script.

See jkroepke/helm-secrets#266 for a potential use-case.

[Feature reqest] support for VAULT_TOKEN_FILE env variable

If VAULT_TOKEN isn't set, the token can be retrieved from the ~/.vault-token file. This is the default behavior for the vault agent.
~/.vault-token is hardcoded in the code; see L203.

I am proposing to allow VAULT_TOKEN_FILE to overwrite the default value.

vals already support custom token_file in URI schemes:

ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&token_file=PATH/TO/FILE&token_env=VAULT_TOKEN&namespace=VAULT_NAMESPACE]#/fieldkey

Use case

Kubernetes environment

Agent Sidecar Injector can inject token, but path will be /vault/secrets/token not ~/.vault-token.

Setting up the env variable for the pod with VAULT_TOKEN_FILE is simple, but to set up VAULT_TOKEN you would need to execute the sh command export VAULT_TOKEN=$(cat /vault/secrets/token), which requires overwriting the container entrypoint, which is not always possible.

How to install?

Hi,

Your vals tool looks exactly like what I need, but I can't figure out how to install it.
I tried to "go build" and it works but I can't find the executable. I'm a total beginner to go. Can you provide an installation guide?
Thank you

latency fetching vault secrets

hello, i've recently been running into a 2 minute latency when fetching vault secrets with vals. Any tips for tracking this down? For example, here's a secret fetched with the vault cli client:

โฏ time vault kv get --field=key secret/stage/yoda/test_secret
ThisIsStoredInVault
vault kv get --field=key secret/stage/yoda/test_secret  0.04s user 0.02s system 34% cpu 0.175 total

and here's the same secret fetched with vals:

โฏ time echo "foo: ref+vault://secret/stage/yoda/test_secret#/key" | vals eval -f -
foo: ThisIsStoredInVault
echo "foo: ref+vault://secret/stage/yoda/test_secret#/key"  0.00s user 0.00s system 37% cpu 0.002 total
vals eval -f -  0.10s user 0.10s system 0% cpu 2:00.45 total

i am using version 0.18.0 of vals.

vals version does not work

Hi,

I'm using vals 0.18.0 from github releases. Running vals version does not report a version.

% vals version
Version: dev
Git Commit: 

Getting AWS SSM parameters fails ValidationException

I can not get it to work to load SSM parameters. I have an SSM parameter with the name test-param and a single value. I'm following the documentation without success. What am I doing wrong? I tried these:

SSM_EXAMPLE: ref+awsssm://test-param[?region=eu-west-2&mode=singleparam]
SSM_EXAMPLE: ref+awsssm://test-param[?region=eu-west-2&mode=singleparam]#/test-param
SSM_EXAMPLE: ref+awsssm://test-param?region=eu-west-2&mode=singleparam

vals eval -f values-default.yaml
> expand awsssm://test-param[?region=eu-west-2&mode=singleparam]: get parameter: ValidationException: Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_
        status code: 400, request id: 61a77757-efc8-467c-be52-612a8721a324

Found a similar issue but that was closed 2 years ago:
#11

AWS SSM fetch deeper parameters to browse by path

The SSM provider can not fetch parameters more than 1 level deep.

Context

Let's say AWS Parameter Store contains this structure:

/myteam/myapp/mykey01 = myval1
/myteam/myapp/mykey02 = myval2
/myteam/global/mykey03 = myval3

An SSM parameter can be fetched like this:

key1: ref+awsssm://myteam/myapp/mykey01

If we are fetching multiple keys, we can use cache like this to fetch only once:

key1: ref+awsssm://myteam/myapp/#mykey01
key2: ref+awsssm://myteam/myapp/#mykey02

The issue

If we want to cache one level up, we get an error:

key1: ref+awsssm://myteam/#myapp/mykey01
key2: ref+awsssm://myteam/#myapp/mykey02
key3: ref+awsssm://myteam/#global/mykey03

Here the error message:

err: expand awsssm://myteam/mydoc/#foo/bar: ssm: out.Parameters is empty
in ./helmfile.yaml: expand awsssm://myteam/#myapp/mykey01: ssm: out.Parameters is empty

Troubleshoot

The provider is getting multiple parameters using GetParametersByPathInput:
https://github.com/variantdev/vals/blob/master/pkg/providers/ssm/ssm.go#L140

By default, this function returns only parameters 1 level deep.
I suggest enabling the Recursive option on that call.

Workaround

Using mode=singleparam for now with YAML dict in a unique param.

Windows Support

First of all I would like to express that this project is exactly what we need! Thanks for making that available.

I was just wondering why I can't find any Windows executable in the releases. Could you possibly enable Windows support?

.vault-token file not being read

$ vault login
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
$ ls -l ~/.vault-token 
-rw------- 1 sirianni sirianni 26 Jun 29 16:06 /home/sirianni/.vault-token
$ echo "foo: ref+vault://foo#/val" | ./main eval -f -
vault: read: key="foo/"
expand vault://foo#/val: Error making API request.

URL: GET https://vault..../v1/foo
Code: 400. Errors:

* missing client token

awssecrets does not work for Secret Ids that do not end in a '/' character

This works:

$ ~/.local/bin/aws secretsmanager get-secret-value --secret-id DanTest/
{
    "Name": "DanTest/", 
    "VersionId": "4853e4d6-d7e8-4a30-9099-89cb8c522099", 
    "SecretString": "{\"foo\":\"bar\"}", 
    "VersionStages": [
        "AWSCURRENT"
    ], 
    "CreatedDate": 1579487877.04, 
    "ARN": "arn:aws:secretsmanager:us-east-1:redacted:secret:DanTest/-w08Bnz"
}

$ cat v.yaml 
secretManagerTest: ref+awssecrets://DanTest?region=us-east-1#/foo

$ ./vals eval -f v.yaml 
awssecrets: successfully retrieved key=DanTest/
secretManagerTest: bar

This does not

$ ~/.local/bin/aws secretsmanager get-secret-value --secret-id DanTest2
{
    "Name": "DanTest2", 
    "VersionId": "87d3aebf-699f-4261-9102-8d4527b47fee", 
    "SecretString": "{\"foo2\":\"bar2\"}", 
    "VersionStages": [
        "AWSCURRENT"
    ], 
    "CreatedDate": 1579488299.236, 
    "ARN": "arn:aws:secretsmanager:us-east-1:redacted:secret:DanTest2-J5yMCu"
}

$ cat bad.yaml 
secretManagerTest: ref+awssecrets://DanTest2?region=us-east-1#/foo2

$ ./vals eval -f bad.yaml 
expand awssecrets://DanTest2?region=us-east-1#/foo2: get parameter: ResourceNotFoundException: Secrets Manager can't find the specified secret.
	status code: 400, request id: c46876ff-5e57-4b95-b492-16b6458d0db3

The only difference is that the one that works has a trailing slash in the name of the secret, which is not a requirement for AWS Secrets. I did look at the code but I found there to be a fair bit of logic around parsing and trimming slashes, it's not clear to me where the bug is.

Can I move this repository to the Helmfile org?

Can we move this repository to https://github.com/helmfile so that not only me but also all Helmfile maintainers can co-maintain this project?
We can generally say that the more hands there are, the better the project will be maintained, right? 🤔
Also, to be clear, note that I'm one of maintainers of the Helmfile project and Helmfile is the biggest public dependent of this project, as far as I know. I believe those two points would justify the move to the Helmfile org if moving it to somewhere is okay in the first place.

Enhancement - SSM Version Support

I would like to be able to choose the version of an SSM parameter, such that:
ref+awsssm://PATH/TO/PARAM[?region=REGION][?version=VERSION]

Creating a PR for this
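
For reference, SSM itself supports selecting a specific version by appending ":<version>" to the parameter name passed to GetParameter, so a sketch of the underlying lookup (assuming aws-sdk-go v1; not vals' actual code) could look like:

package ssmsketch

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// getParamVersion reads a specific version of a parameter by using the
// "name:version" selector supported by GetParameter.
func getParamVersion(svc *ssm.SSM, name string, version int) (string, error) {
	out, err := svc.GetParameter(&ssm.GetParameterInput{
		Name:           aws.String(fmt.Sprintf("%s:%d", name, version)),
		WithDecryption: aws.Bool(true),
	})
	if err != nil {
		return "", err
	}
	return aws.StringValue(out.Parameter.Value), nil
}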

Multiple ref in same line without whitespace fail to parse

Template 1 in helmfile for logstash:

output {
  elasticsearch {
    hosts => ["ref+awsssm://pre0/pre1/pre2/?region=us-east-1#/host:ref+awsssm://pre0/pre1/pre2/?region=us-east-1#/port"]
  }
}

Error 1:

SSM: successfully retrieved key=/pre0/pre1/pre2/
in /Users/path/to/helmfile/stable/filebeat-logstash.yml: failed processing release logstash: expand awsssm://pre0/pre1/pre2/?region=us-east-1#/host:ref+awsssm://pre0/pre1/pre2/?region=us-east-1#/port"]: no value found for key host:ref+awsssm://pre0/pre1/pre2/?region=us-east-1#/port"]

It doesn't stop at the ":", which is a disallowed character.

Valid characters: Parameter names can consist of the following symbols and letters only: a-zA-Z0-9_.-/

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-parameter-name-constraints.html

I'll also try to see where this parsing happens, but I'm guessing you'll be able to find it faster 🤣 😊
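
One hypothetical way to express the constraint (this is only an illustration, not how vals actually parses refs): limit the path and fragment portions of an awsssm ref to the characters SSM allows, so the match stops at ":" and other disallowed characters instead of swallowing the rest of the line.

package main

import (
	"fmt"
	"regexp"
)

// ssmRef matches an awsssm ref whose path and fragment use only the
// characters allowed in SSM parameter names (a-zA-Z0-9_.-/).
var ssmRef = regexp.MustCompile(`ref\+awsssm://[A-Za-z0-9_./-]+(\?[^#\s"\]]*)?(#/[A-Za-z0-9_./-]+)?`)

func main() {
	s := `hosts => ["ref+awsssm://pre0/pre1/pre2/?region=us-east-1#/host:ref+awsssm://pre0/pre1/pre2/?region=us-east-1#/port"]`
	// Prints both refs as separate matches; the ":" between them ends the first one.
	fmt.Println(ssmRef.FindAllString(s, -1))
}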

base64-encoded SOPS doesn't actually work

Example file:

foo: ref+sops://ew0KCSJkYXRhIjogIkVOQ1tBRVMyNTZfR0NNLGRhdGE6dUhnZyxpdjpaRHVpRk5rN1VtRmpBMzR0Smh2ZUttbWNzdzEyNDlvcElnV2lJTnlVQlc4PSx0YWc6cWVKNElDWnF3OWtpQlFFMFNTTGxQUT09LHR5cGU6c3RyXSIsDQoJInNvcHMiOiB7DQoJCSJrbXMiOiBbDQoJCQl7DQoJCQkJImFybiI6ICJhcm46YXdzOmttczp1cy13ZXN0LTI6NDgyNzgwODM1NzA3OmFsaWFzL3VzLXctMi1rZXkiLA0KCQkJCSJjcmVhdGVkX2F0IjogIjIwMjItMDYtMDdUMjE6NTg6MDRaIiwNCgkJCQkiZW5jIjogIkFRSUNBSGh5aThoUW9HTE9FNDZQVkp5aW5IZU9wY0tNakM2RlpPNkpnNU5EOWJsc2RBRVg3N3hZWUcvSEE2T3pxUGdYRzhJWUFBQUFmakI4QmdrcWhraUc5dzBCQndhZ2J6QnRBZ0VBTUdnR0NTcUdTSWIzRFFFSEFUQWVCZ2xnaGtnQlpRTUVBUzR3RVFRTXEwQVR3K1BEK0FPYnNpT0NBZ0VRZ0R2VHVBOEk1eWZjV283bnY1RFlnT1lNd0lZTnJ0aHNBSThNbFl3ZHlrQlBnU20ydFRySE5KRlp5Rk1NVEp0TXZNS0VxWHlxN09INkZ6bWNBdz09IiwNCgkJCQkiYXdzX3Byb2ZpbGUiOiAiIg0KCQkJfQ0KCQldLA0KCQkiZ2NwX2ttcyI6IG51bGwsDQoJCSJhenVyZV9rdiI6IG51bGwsDQoJCSJoY192YXVsdCI6IG51bGwsDQoJCSJsYXN0bW9kaWZpZWQiOiAiMjAyMi0wNi0wN1QyMTo1ODowNVoiLA0KCQkibWFjIjogIkVOQ1tBRVMyNTZfR0NNLGRhdGE6RDFUN3YyN1JyeXdTNGFoYWZLWUZIZjliMGk4SytnNnV6TFNpSFE4dEpOVFZlOWJYQTV5ZUpjV1JCVjNheERCMkwzUitUMVY1dTN1bk5WcHcvTFd2S0hvSFBuMGRmYy9naVpsc1ZVSzJTOEN4dmxrSitaQll0dTFWd1Y2dmRsWFBrZmJhQmxGdnljQTNkY3ZteXUrRWtsWW83SXBpSFVOMm00RXFqWmZBMWVrPSxpdjpyb2lXZDlYenBiMGRoTUE5a2s3ZE03dnk3TkhhWDRsa0dYbmRhSGpsR1gwPSx0YWc6c3JpNzVPeG53ZVVWS3RZUTNUclRLZz09LHR5cGU6c3RyXSIsDQoJCSJwZ3AiOiBudWxsLA0KCQkidW5lbmNyeXB0ZWRfc3VmZml4IjogIl91bmVuY3J5cHRlZCIsDQoJCSJ2ZXJzaW9uIjogIjMuNi4wIg0KCX0NCn0NCg==?key_type=base64

Result:

$ vals eval -f example.yaml
expand sops://ew0KCSJkYXRhIjogIkVOQ1tBRVMyNTZfR0NNLGRhdGE6dUhnZyxpdjpaRHVpRk5rN1VtRmpBMzR0Smh2ZUttbWNzdzEyNDlvcElnV2lJTnlVQlc4PSx0YWc6cWVKNElDWnF3OWtpQlFFMFNTTGxQUT09LHR5cGU6c3RyXSIsDQoJInNvcHMiOiB7DQoJCSJrbXMiOiBbDQoJCQl7DQoJCQkJImFybiI6ICJhcm46YXdzOmttczp1cy13ZXN0LTI6NDgyNzgwODM1NzA3OmFsaWFzL3VzLXctMi1rZXkiLA0KCQkJCSJjcmVhdGVkX2F0IjogIjIwMjItMDYtMDdUMjE6NTg6MDRaIiwNCgkJCQkiZW5jIjogIkFRSUNBSGh5aThoUW9HTE9FNDZQVkp5aW5IZU9wY0tNakM2RlpPNkpnNU5EOWJsc2RBRVg3N3hZWUcvSEE2T3pxUGdYRzhJWUFBQUFmakI4QmdrcWhraUc5dzBCQndhZ2J6QnRBZ0VBTUdnR0NTcUdTSWIzRFFFSEFUQWVCZ2xnaGtnQlpRTUVBUzR3RVFRTXEwQVR3K1BEK0FPYnNpT0NBZ0VRZ0R2VHVBOEk1eWZjV283bnY1RFlnT1lNd0lZTnJ0aHNBSThNbFl3ZHlrQlBnU20ydFRySE5KRlp5Rk1NVEp0TXZNS0VxWHlxN09INkZ6bWNBdz09IiwNCgkJCQkiYXdzX3Byb2ZpbGUiOiAiIg0KCQkJfQ0KCQldLA0KCQkiZ2NwX2ttcyI6IG51bGwsDQoJCSJhenVyZV9rdiI6IG51bGwsDQoJCSJoY192YXVsdCI6IG51bGwsDQoJCSJsYXN0bW9kaWZpZWQiOiAiMjAyMi0wNi0wN1QyMTo1ODowNVoiLA0KCQkibWFjIjogIkVOQ1tBRVMyNTZfR0NNLGRhdGE6RDFUN3YyN1JyeXdTNGFoYWZLWUZIZjliMGk4SytnNnV6TFNpSFE4dEpOVFZlOWJYQTV5ZUpjV1JCVjNheERCMkwzUitUMVY1dTN1bk5WcHcvTFd2S0hvSFBuMGRmYy9naVpsc1ZVSzJTOEN4dmxrSitaQll0dTFWd1Y2dmRsWFBrZmJhQmxGdnljQTNkY3ZteXUrRWtsWW83SXBpSFVOMm00RXFqWmZBMWVrPSxpdjpyb2lXZDlYenBiMGRoTUE5a2s3ZE03dnk3TkhhWDRsa0dYbmRhSGpsR1gwPSx0YWc6c3JpNzVPeG53ZVVWS3RZUTNUclRLZz09LHR5cGU6c3RyXSIsDQoJCSJwZ3AiOiBudWxsLA0KCQkidW5lbmNyeXB0ZWRfc3VmZml4IjogIl91bmVuY3J5cHRlZCIsDQoJCSJ2ZXJzaW9uIjogIjMuNi4wIg0KCX0NCn0NCg==?key_type=base64: Error unmarshalling input json: invalid character 'e' looking for beginning of value

I think the problem here is that we're passing the base64-encoded string directly to decrypt.Data(), but decrypt.Data() doesn't take base64-encoded data; it takes the same JSON/YAML blob that you'd find on disk in a file passed to decrypt.File().

Honestly, for this kind of usage I'd rather have e.g. ref+awskms://<ciphertext>, but I figured it's worth pointing out that a documented feature doesn't actually work.

I'm not much of a Go user, but this seems simple enough to fix that I might be able to create a PR for it.
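
A minimal sketch of the fix described above (assuming the go.mozilla.org/sops/v3/decrypt package; this is not the actual vals code): decode the base64 payload first, then pass the resulting JSON/YAML bytes to decrypt.Data.

package sopssketch

import (
	"encoding/base64"
	"fmt"

	"go.mozilla.org/sops/v3/decrypt"
)

// decryptBase64SOPS decodes a base64-encoded SOPS document and then decrypts
// it, since decrypt.Data expects the same bytes decrypt.File would read from disk.
func decryptBase64SOPS(encoded, format string) ([]byte, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, fmt.Errorf("decode base64: %w", err)
	}
	return decrypt.Data(raw, format) // format is "json" for the example above
}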

Bug: Triple dashes in YAML/JSON file to be evaluated

Triple dashes in a YAML/JSON file to be evaluated seem to result in only a partial evaluation.

A file foo.yaml with the following content:

foo: "bar"
---
baz: "qux"

only returns foo: "bar" when evaluated with vals eval -f foo.yaml
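
A minimal sketch of handling every document in the stream (using gopkg.in/yaml.v3 as an assumption; this is not the actual vals implementation):

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	input := "foo: \"bar\"\n---\nbaz: \"qux\"\n"
	dec := yaml.NewDecoder(strings.NewReader(input))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			panic(err)
		}
		fmt.Println(doc) // each document would be expanded and re-emitted here
	}
}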

question: vault/ssm/awssecrets testing approach

Hi!
There are TODO sections like this one: prerequisites for unit tests to run successfully (some operations need to be run manually from the console).
I'm wondering what the best way is to fully automate the tests.
Should vault[/awssecrets/ssm] just be mocked?
Or (at least in the case of vault) could we start a test Vault cluster, for example in TestMain(), and populate it with test data?

Feature: Add support for reading Terraform remote state from http(s) backends

Several providers, such as Terraform Cloud, Terraform Enterprise, and GitLab, support Terraform remote state using the http backend. When these backends are used in conjunction with Terragrunt, the location of the local state file that points to the remote state is not deterministic (Terragrunt creates temporary working directories with random subfolders), so it's impossible to point ref+tfstate://... to the correct location. Similar to how support was added for specific cloud bucket backends, I'm proposing adding support for a generic http backend, something like ref+https://...

For reference, GitLab (which is the backend I'm most interested in) exposes remote state at a URL like this:

https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${RESOURCE_PATH}

feat: environment variables provider

Retrieving values from the environment would be a great addition.

It could look like this:

$ echo 'shell: "ref+env://SHELL"' | vals eval -f -
shell: /usr/bin/zsh
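
A minimal sketch of what such a provider might do (hypothetical; the actual vals provider interface may differ): treat the ref path as the variable name, e.g. ref+env://SHELL becomes a lookup of SHELL.

package env

import (
	"fmt"
	"os"
)

// provider resolves a key by reading the environment variable of the same name.
type provider struct{}

func (p provider) GetString(key string) (string, error) {
	v, ok := os.LookupEnv(key)
	if !ok {
		return "", fmt.Errorf("environment variable %q is not set", key)
	}
	return v, nil
}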

feat: allow skipping errors on unreachable key

vals could get a new CLI argument that skips resolution errors when they happen.

Actual behavior

$ cat <<EOF | vals eval -f - > /tmp/values.yaml
foo: bar
secret: "ref+vault://mountpoint/path/to/non-existing/key"
EOF
vault: get string failed: path="mountpoint/path/to/non-existing", key="key"
expand vault://mountpoint/path/to/non-existing/key: vault: get string: key "path/to/non-existing/key" does not exist in "mountpoint"
$ cat /tmp/values.yaml
$

Expected behavior

$ cat <<EOF | vals eval --skip-errors -f - > /tmp/values.yaml
foo: bar
secret: "ref+vault://mountpoint/path/to/non-existing/key"
EOF
vault: get string failed: path="mountpoint/path/to/non-existing", key="key"
expand vault://mountpoint/path/to/non-existing/key: vault: get string: key "path/to/non-existing/key" does not exist in "mountpoint"
$ cat /tmp/values.yaml
foo: bar
$
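
A rough sketch of the proposed behavior (names are hypothetical, not vals' API): when the flag is set, drop the key whose ref failed to resolve and keep going.

package skiperrors

// expandAll resolves every value in the map; with skipErrors enabled, keys
// that fail to resolve are omitted from the result instead of aborting,
// matching the expected output above.
func expandAll(values map[string]string, resolve func(string) (string, error), skipErrors bool) (map[string]string, error) {
	out := map[string]string{}
	for k, v := range values {
		resolved, err := resolve(v)
		if err != nil {
			if skipErrors {
				continue
			}
			return nil, err
		}
		out[k] = resolved
	}
	return out, nil
}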

Retrieving multiple parameters by path from SSM fails

Hi,

I'm trying to retrieve multiple parameters by path from SSM, using the example provided in the README.md:

NAME        VALUE
/foo/bar    {"BAR":"VALUE"}
/foo/bar/a  A
/foo/bar/b  B

What I'm trying to do is assign:

foo: ref+awsssm://foo?recursive=true

And have the following result:

foo:
  bar:
     a: A
     b: B

Right now, when I try it I'm getting an error like:

expand awsssm://foo?recursive=true: get parameter: ParameterNotFound:

Is this possible? If not, I would be happy to send a PR if there's some basic guidance on how to make the changes.

Thanks for the great work!
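
For what it's worth, a minimal sketch of turning flat SSM parameter names (relative to the ref's path prefix) into the nested map shown above (illustrative only, not the vals implementation):

package main

import (
	"fmt"
	"strings"
)

// nest converts flat parameter names like "bar/a" into nested maps.
func nest(flat map[string]string) map[string]interface{} {
	root := map[string]interface{}{}
	for name, value := range flat {
		parts := strings.Split(strings.Trim(name, "/"), "/")
		node := root
		for _, p := range parts[:len(parts)-1] {
			child, ok := node[p].(map[string]interface{})
			if !ok {
				child = map[string]interface{}{}
				node[p] = child
			}
			node = child
		}
		node[parts[len(parts)-1]] = value
	}
	return root
}

func main() {
	fmt.Println(nest(map[string]string{"bar/a": "A", "bar/b": "B"}))
	// map[bar:map[a:A b:B]]
}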

Vault auth_method kubernetes not supported?

Taking a look at the code, it seems that only two auth_methods are supported: token and appRole.

In our case, Vault is deployed as a pod in a Kubernetes cluster, and we want to integrate vals into our CD pipeline.
So I think supporting kubernetes as one of the auth_methods is essential; otherwise we have to execute vals inside the pod.

I'm pretty new to Vault, so any advice on our configuration would be very much appreciated.
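
For context, a minimal sketch of what a kubernetes auth login looks like with the Vault Go client (github.com/hashicorp/vault/api); this is only an illustration, not vals' code:

package vaultauth

import (
	"os"

	vault "github.com/hashicorp/vault/api"
)

// kubernetesLogin reads the pod's service account token, exchanges it for a
// Vault client token via auth/kubernetes/login, and sets it on the client.
func kubernetesLogin(client *vault.Client, role string) error {
	jwt, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		return err
	}
	secret, err := client.Logical().Write("auth/kubernetes/login", map[string]interface{}{
		"role": role,
		"jwt":  string(jwt),
	})
	if err != nil {
		return err
	}
	client.SetToken(secret.Auth.ClientToken)
	return nil
}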

Question: tfstateazurerm not finding tfstate blob in subdirectories

I'm trying to look for the terragrunt.tfstate file, which is located in subdirectories.
Let's assume I want to look for the terragrunt.tfstate file under this key: terraform-state/base/cloud-resources/terragrunt.tfstate

echo "output: ref+tfstateazurerm://rg-test/storage_account/terraform-state/base/cloud-resources/terragrunt.tfstate" | vals eval -f -

vals ignores the folders after base and the exception below appears:

Code: BlobNotFound
GET https://storage_account.blob.core.windows.net/terraform-state/base?timeout=61
Authorization: REDACTED

Missing `version` subcommand

I wanted to double-check that I was using the most recent version of vals. I expected there to be a version subcommand, similar to kubectl version and helm version, but it turns out there's no such command. Another common alternative is a --version flag, but that does not exist either.

I'm in favor of having both the subcommand and flag implemented, but if I had to choose I'd choose the subcommand since that's how helm does it, and vals is marketing itself as being "Helm-like".

Expected behavior

$ vals version
v0.14.0

Actual behavior

$ vals version
vals is a Helm-like configuration "Values" loader with support for various sources and merge strategies

Usage:
  vals [command]

Available Commands:
  eval		Evaluate a JSON/YAML document and replace any template expressions in it and prints the result
  exec		Populates the environment variables and executes the command
  env		Renders environment variables to be consumed by eval or a tool like direnv
  ksdecode	Decode YAML document(s) by converting Secret resources' "data" to "stringData" for use with "vals eval"

Use "vals [command] --help" for more infomation about a command

System info

vals version: v0.14.0
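
A minimal sketch of the subcommand using spf13/cobra (names assumed; not necessarily how vals wires its commands):

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// version would normally be injected at build time, e.g.
// go build -ldflags "-X main.version=v0.14.0".
var version = "dev"

func newVersionCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "version",
		Short: "Print vals version",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println(version)
		},
	}
}

func main() {
	// Registered alongside eval, exec, env and ksdecode on the root command.
	root := &cobra.Command{Use: "vals"}
	root.AddCommand(newVersionCmd())
	_ = root.Execute()
}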

Backend type gcs is not supported referencing remote tfstate

After #34 was improved, I've finally had a chance to test the remote backend when referencing tfstate again, and now I see the real error instead of a panic:

err: expand tfstate:////builds/devops/k8s/shared-services/helmfile-shared-services-staging/.terraform/terraform.tfstate/output.external-ip.value: reading tfstate for /builds/devops/k8s/shared-services/helmfile-shared-services-staging/.terraform/terraform.tfstate/output.external-ip.value: backend type gcs is not supported
changing working directory back to "/builds/devops/k8s/shared-services/helmfile-shared-services-staging"
in ./helmfile.yaml: in .helmfiles[0]: in /builds/devops/k8s/shared-services/helmfile-shared-services-staging/.helmfile/cache/https_my_domain_com_my_project_k8s_shared-services_helmfile-common_git.ref=v0.37/helmfile.yaml: expand tfstate:////builds/devops/k8s/shared-services/helmfile-shared-services-staging/.terraform/terraform.tfstate/output.external-ip.value: reading tfstate for /builds/devops/k8s/shared-services/helmfile-shared-services-staging/.terraform/terraform.tfstate/output.external-ip.value: backend type gcs is not supported

In my helmfile I'm setting this the following way:

...
service:
  loadBalancerIP: {{- if ne .Environment.Name "default" }} ref+tfstate:///{{ env "PATH_TO_TF_STATE" }}/output.external-ip.value {{- end }}
...

and in CI I'm just doing:

...
export PATH_TO_TF_STATE=$(pwd)/.terraform/terraform.tfstate
...

I haven't checked with plain vals though; I'm using it as part of helmfile.

HELMFILE_VERSION=v0.130.0

SSM Parameter Store secret revealed to stderr when parameter version is specified

Expected output happens when no version is specified:

% echo "servicefoo: ref+awsssm://service/servicefoo/paramfoo?region=us-east-1" | vals eval > /dev/null 
SSM: successfully retrieved key=/service/servicefoo/paramfoo

But if a parameter version is specified we get this output:

% echo "servicefoo: ref+awsssm://service/servicefoo/paramfoo?region=us-east-1&version=1" | vals eval > /dev/null 
SSM: successfully retrieved key=supersecretparamvalue

This results in our CI logs being full of secrets.

Feature Request | Allow Processing Multi-Resource Files

Assuming you have only one YAML document in the file, vals works great, e.g. something like:

apiVersion: v1
kind: Secret
metadata:
  name: some-secret
  namespace: default
stringData:
  mysql-password: ref+vault://secret/data/foo#/mykey
  mysql-root-password: vZQmqdGw3z
type: Opaque

However vals is unable to process something like:

apiVersion: v1
kind: Secret
metadata:
  name: secret1
  namespace: default
stringData:
  mysql-password: ref+vault://secret/data/foo#/mykey
  mysql-root-password: vZQmqdGw3z
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: secret2
  namespace: default
stringData:
  key1: ref+vault://secret/data/bar#/mykey
  key2: whatever
type: Opaque

Basically, vals only processes the first YAML document (secret1) correctly and completely ignores the second secret.

New tag

@mumoshu would you please create a new tag with the latest PR merged?

Using AWS profiles for accessing SSM

Hey,

Is there any way to instruct vals to use an AWS profile? IIUC from the documentation, the Go SDK should support that if AWS_SDK_LOAD_CONFIG is set to true, and then the profile can be selected with AWS_PROFILE or AWS_DEFAULT_PROFILE, but I can't get it working. My use case is retrieving SSM parameter values in helmfile templates.

➜  ~/pr/q/eks-addons git:(aws-roles) ✗ aws sts get-caller-identity
{
    "UserId": "AIDADEADBEEF",
    "Account": "123",
    "Arn": "arn:aws:iam::123:user/artem.kajalainen"
}
➜  ~/pr/q/eks-addons git:(aws-roles) ✗ aws --profile example-dev-env sts get-caller-identity
{
    "UserId": "AROADEADBEEF:botocore-session-1584441505",
    "Account": "456",
    "Arn": "arn:aws:sts::456:assumed-role/circle-ci/botocore-session-1584441505"
}
➜  ~/pr/q/eks-addons git:(aws-roles) ✗ export AWS_DEFAULT_PROFILE=example-dev-env
➜  ~/pr/q/eks-addons git:(aws-roles) ✗ export AWS_PROFILE=example-dev-env
➜  ~/pr/q/eks-addons git:(aws-roles) ✗ export AWS_SDK_LOAD_CONFIG=true
➜  ~/pr/q/eks-addons git:(aws-roles) ✗ helmfile -e dev lint
Adding repo stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

Adding repo bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

Updating repo
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Fetching stable/traefik
Fetching bitnami/external-dns
in ./helmfile.yaml: 2 errors:
err 0: failed to render values files "config/traefik1.7/dev-values.yaml": expand awsssm://dev/eu-north-1/eks/example-apps-cluster/public-ingress/lb-dns-name?region=eu-north-1: get parameter: AccessDeniedException: User: arn:aws:iam::123:user/artem.kajalainen is not authorized to perform: ssm:GetParameter on resource: arn:aws:ssm:eu-north-1:123:parameter/dev/eu-north-1/eks/example-apps-cluster/public-ingress/lb-dns-name
        status code: 400, request id: eb691d42-7f66-42fd-ae83-40528185b8be
err 1: failed to render values files "config/external-dns/dev-values.yaml": expand awsssm://dev/eu-north-1/eks/example-apps-cluster/external-dns/role-arn?region=eu-north-1: get parameter: AccessDeniedException: User: arn:aws:iam::123:user/artem.kajalainen is not authorized to perform: ssm:GetParameter on resource: arn:aws:ssm:eu-north-1:123:parameter/dev/eu-north-1/eks/example-apps-cluster/external-dns/role-arn
        status code: 400, request id: 9f680a2f-fd01-4882-a60f-abb830bdc8ab

Is there any way to instruct vals to use a named AWS profile or to assume a role?

Thanks!
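
For reference, a minimal sketch of how the Go SDK picks up a named profile programmatically (assuming aws-sdk-go v1; this is not vals' code, just the equivalent of setting AWS_SDK_LOAD_CONFIG=true and AWS_PROFILE):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// newSSM builds an SSM client that honors ~/.aws/config and a named profile,
// which is what AWS_SDK_LOAD_CONFIG=true plus AWS_PROFILE achieve via env vars.
func newSSM(profile string) (*ssm.SSM, error) {
	sess, err := session.NewSessionWithOptions(session.Options{
		Profile:           profile, // e.g. "example-dev-env"
		SharedConfigState: session.SharedConfigEnable,
	})
	if err != nil {
		return nil, err
	}
	return ssm.New(sess), nil
}

func main() {
	svc, err := newSSM("example-dev-env")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssm client ready:", svc != nil)
}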

Failing SSM with all configurations

Hi @mumoshu, I'm super stoked SSM is in place. However, I'm unable to get it working with any of the configurations defined in the README. I took a quick look and didn't see any obvious bugs. I'll look again later to see if I can find anything, but your help would be greatly appreciated. 🙏

Here is my Helmfile:

repositories:
  - name: "stable"
    url: "https://kubernetes-charts.storage.googleapis.com"

releases:
  - name: "fluent-bit"
    namespace: "kube-system"
    chart: "stable/fluent-bit"
    version: "2.0.5"
    atomic: true
    wait: true
    values:
    - filter:
        mergeJSONLog: false
    - backend:
        type: es
        es:
          host: "ref+ssm://pre0/pre1#/pre2/key0?region=us-east-1" # Err 1
          # host: "ref+ssm://pre0/pre1/pre2/#/key0?region=us-east-1" # Err 1
          # host: "ref+ssm://pre0/pre1/pre2/?region=us-east-1#/key0" # Err 2
          # host: "ref+ssm://pre0/pre1/pre2/?region=us-east-1#key0" # Err 2
          # host: "ref+ssm://pre0/pre1/pre2/key0?region=us-east-1" # Err 0
          # host: "ref+ssm://pre0/pre1/pre2/key0" # Err 2
          # host: "ref+ssm://pre0/pre1/pre2#/key0" # Err 2
          # host: "ref+ssm://pre0/pre1/pre2?region=us-east-1#/key0" # Err 2
          # host: "ref+ssm://pre0/pre1?region=us-east-1#/pre2/key0" # Err 2

Here are the error messages I get with each of these configurations:

Err 0:

in /darwin/path/to/helmfile/archive/fluent-bit.yml: expand ssm://......................................: get parameter: ValidationException: Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_

Err 1:

in /darwin/path/to/helmfile/archive/fluent-bit.yml: expand ssm://......................................: ssm: get parameters by path: MissingRegion: could not find region configuration

Err 2:

in /darwin/path/to/helmfile/archive/fluent-bit.yml: expand ssm://......................................: ssm: get parameters by path: ValidationException: The parameter doesn't meet the parameter name requirements. The parameter name must begin with a forward slash "/". It can't be prefixed with \"aws\" or \"ssm\" (case-insensitive). It must use only letters, numbers, or the following symbols: . (period), - (hyphen), _ (underscore). Special characters are not allowed. All sub-paths, if specified, must use the forward slash symbol "/". Valid example: /get/parameters2-/by1./path0_.

I just can't crack it as it sits. 🤷‍♂️

Vault issue when using pipe (multiline)

Issue

Unable to fetch Vault secrets when using a pipe (multiline block scalar).

values.yaml.gotmpl (WORKING)

certValue: {{ "ref+vault://{PATH}#certificate" }}

values.yaml.gotmpl (NOT WORKING)

certValue: | 
{{ "ref+vault://{PATH}#certificate" | indent 2 }}

---

in ./helmfile.yaml: in .helmfiles[0]: in ./helmfile.yaml: failed processing release client-cert: failed to render values files "certs/client-certificate/values.yaml.gotmpl": expand vault://{PATH}#certificate
: no value found for key certificate

Info

Helm

Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Helmfile

helmfile version v0.139.9

Vault

1.3.1

I tried many variations without success and then ran out of ideas. If this is a known issue and it's expected to behave like that, maybe I will need to change the chart itself.

Any help is really appreciated.
Thank you.
