
terraform-provider-sops's Introduction

terraform-sops

A Terraform plugin for using files encrypted with Mozilla sops.

NOTE: To prevent plaintext secrets from being written to disk, you must set up a secure remote state backend. See the official docs on Sensitive Data in State for more information.
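As a sketch, a remote backend with encryption at rest might look like this (the bucket name, key and region are placeholders, not part of this project):

terraform {
  backend "s3" {
    bucket  = "my-terraform-state"     # placeholder bucket name
    key     = "prod/terraform.tfstate"
    region  = "eu-west-1"
    encrypt = true                     # server-side encryption of the state object
  }
}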

Example

NOTE: All examples assume Terraform 0.13 or newer. For information about usage on older versions, see the legacy usage docs.

Encrypt a file using Sops: sops demo-secret.enc.json

{
  "password": "foo",
  "db": {"password": "bar"}
}

sops_file

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

data "sops_file" "demo-secret" {
  source_file = "demo-secret.enc.json"
}

output "root-value-password" {
  # Access the password variable from the map
  value = data.sops_file.demo-secret.data["password"]
}

output "mapped-nested-value" {
  # Access the nested db password via the flattened Terraform data map
  value = data.sops_file.demo-secret.data["db.password"]
}

output "nested-json-value" {
  # Access the nested db password by decoding the raw JSON into a Terraform object
  value = jsondecode(data.sops_file.demo-secret.raw).db.password
}

Sops also supports encrypting entire files in other formats. Such files can be used by specifying input_type = "raw":

data "sops_file" "some-file" {
  source_file = "secret-data.txt"
  input_type = "raw"
}

output "do-something" {
  value = data.sops_file.some-file.raw
}

sops_external

Use this data source when the encrypted content might not come from a local file, for example when it is produced by another data source.

input_type is required with this data source.

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

# using sops/test-fixtures/basic.yaml as an example
data "local_file" "yaml" {
  filename = "basic.yaml"
}

data "sops_external" "demo-secret" {
  source     = data.local_file.yaml.content
  input_type = "yaml"
}

output "root-value-hello" {
  value = data.sops_external.demo-secret.data.hello
}

output "nested-yaml-value" {
  # Access the password variable that is under db via the terraform object
  value = yamldecode(data.sops_file.demo-secret.raw).db.password
}

Install

For Terraform 0.13 and later, specify the source and version in a required_providers block:

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

CI usage

In CI, the same environment variables or credential context that sops uses locally must be provided at runtime; the provider does not manage or supply the required values itself.
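For example, nothing is configured on the provider block itself; sops reads the same environment it would on a workstation (the variables below are examples drawn from the reports further down, such as AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY for AWS KMS, GOOGLE_APPLICATION_CREDENTIALS for GCP KMS, or SOPS_AGE_KEY/SOPS_AGE_KEY_FILE for age):

# The provider block takes no credential arguments; export the relevant
# variables in the CI environment before running terraform.
provider "sops" {}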

Development

Building and testing are most easily performed with make build and make test, respectively.

The PGP key used for encrypting the test cases is found in test/testing-key.pgp. You can import it with gpg --import test/testing-key.pgp.

Transitioning to Terraform 0.13 required_providers blocks

With Terraform 0.13, providers are downloaded from the Terraform Registry based on a required_providers block.

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

A prerequisite when converting is that the reference to the previous SOPS provider must be replaced in your Terraform state. This can be done via:

terraform state replace-provider registry.terraform.io/-/sops registry.terraform.io/carlpett/sops

Otherwise, you will be greeted with:

- Finding latest version of -/sops...

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider -/sops:
provider registry registry.terraform.io does not have a provider named
registry.terraform.io/-/sops

terraform-provider-sops's People

Contributors

adebasi, carlpett, chroju, dependabot[bot], dfredell, fardarter, jacobfoard, knqyf263, lazouz, multani, nhuray, othmane399, samcday, scjudd, timodempwolf, waddles, yujunz


terraform-provider-sops's Issues

Did you know

Did you know this also works? I just kind of found out accidentally and figured I should share it.

data "external" "secrets" {
  program = ["sops", "-d", "${path.module}/secrets.tfvars.json"]
}

`something = "${data.external.secrets.result["some_key"]}"`

Is there anything else this provider can do that the external data source cannot?

Terraform Cloud Usage

I'm having trouble getting this to work with Terraform Cloud. Has anyone else gotten it to work? I set up the binary in terraform.d/plugins/linux_amd64/ successfully, but I'm having difficulty getting the AWS credentials working. I have it working locally with the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY environment variables, but when I set up those same environment variables in Terraform Cloud it doesn't seem to find them. It just keeps outputting the sops error Error: Error getting data key: 0 successful groups required, got 0

Any tips?

Feature request - checksum attribute

Idea: add an attribute with the MD5 checksum of the secret.
This way I can easily add an annotation on a deployment to trigger an update when the secret changes.
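Until such an attribute exists, a rough sketch of the same idea is possible with Terraform's built-in md5() function over the raw attribute (note that the checksum is computed from the decrypted content, so it also ends up in state):

locals {
  # Hash the decrypted document to detect changes to the secret
  secret_checksum = md5(data.sops_file.demo-secret.raw)
}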

SOPS provider fails to decrypt the file when only certain keys are encrypted.

Hi @carlpett ,
Here is my use case
This is my test.yaml file

global:
    hello: world
local:
    dummy:
    -   alpha: beta

I am using my AWS KMS key to encrypt/decrypt the data

I used --encrypted-regex to encrypt only key alpha. It gets encrypted fine, using command
sops -i --encrypt --encrypted-regex '^(alpha)$' test.yaml

This is what my encrypted file looks like:

global:
    hello: world
local:
    dummy:
    -   alpha: ENC[AES256_GCM,data:Nc/Ngg==,iv:k5H4i9FIgf+XPLCeCKP6pWNYFtaKGriquD701Qqv2ro=,tag:FJyHiVqsFYmU26J36GGPOg==,type:str]
sops:
    kms:
    -   arn: XXXXXXXXX
        created_at: '2020-03-09T07:45:47Z'
        enc: XXXXXXXXXXXXXXXXXXXX
        aws_profile: ""
    gcp_kms: []
    azure_kv: []
    lastmodified: '2020-03-09T07:45:49Z'
    mac: XXXXXXXXXXXXXXXXXX
    pgp: []
    encrypted_regex: ^(alpha)$
    version: 3.5.0

It also gets decrypted without any issue when using this command:
sops -i --decrypt test.yaml

But when used with the Terraform SOPS provider, it fails with the following error:

data "sops_file" "test_secret" {
  source_file = "test.yaml"
  input_type = "yaml"
}

Error:

Error: Error refreshing state: 1 error occurred:
	* data.sops_file.test_secret: 1 error occurred:
	* data.sops_file.test_secret: data.sops_file.test_secret: Error walking tree: Could not decrypt value: Input string world does not match sops' data format

As you can see here, the terraform provider is trying to decrypt the string world, which was never encrypted. Hence the error.

I tried searching all over the internet, but could not find any solution, hence reaching out to you.

GCP service account impersonation permission issues

We run our terraform projects with service account impersonation (https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#impersonating-service-accounts), here is a sample of the configuration:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.89.0"
    }
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
  backend "gcs" {
    prefix = "terraform/state"
  }
}

provider "google" {
  project                     = var.project
  region                      = "europe-west1"
  zone                        = "europe-west1-c"
  impersonate_service_account = var.terraform_service_account
}

Have there been any thoughts around supporting this? Right now we need to grant the required decryption permissions to the service account assuming the TF service account. As an example, when running in Cloud Build we need to grant Cloud KMS CryptoKey Decrypter to the Cloud Build service account.

Getting all element of a list

I have a json file that contain a list like this:

{
	"list": [
		"banana",
		"apple"
	]
}

I can access a single element with data.sops_file.secret.data["list.0"], but I couldn't find a way to get the whole list. Is there any way to get this supported? Something like data.sops_file.secret.data["list"] returning the list would do.
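A possible workaround with the current provider, reusing the raw-plus-jsondecode() pattern from the README examples above:

output "whole-list" {
  # Decode the full decrypted document and index the list directly
  value = jsondecode(data.sops_file.secret.raw).list
}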

Issue with aws_profile and profiles stored in ~/.aws/config

SOPS 3.6 added the ability to use AWS profiles defined in your AWS Config file (not just static credentials defined in ~/.aws/credentials).

I have upgraded to SOPS 3.6 and have this working in SOPS natively (e.g. aws_profile in secrets.yaml points to a profile defined in ~/.aws/config, and sops -d secrets.yaml works):

sops -d secrets.yaml
my-password: REDACTED
[...]
my-other-password: REDACTED

However it does not work in terraform-provider-sops:

Error: Error getting data key: 0 successful groups required, got 0

~/.aws/config:

[profile my-profile]
region = ap-southeast-2
credential_process = my-helper-script

secrets.yaml:

sops:
    kms:
    -   arn: REDACTED
        created_at: '2019-05-13T02:53:09Z'
        enc: REDACTED
        aws_profile: my-profile

provider.tf:

provider "sops" {
  version = "0.5.0"
}

data "sops_file" "secrets" {
  source_file = "secrets.yaml"
}

I have tried setting the environment variable AWS_SDK_LOAD_CONFIG=1 but that doesn't help either.

sops provider on alpine with 0.3.2 seems to not be working

Hey,
I just ran into the issue that the provided linux_amd64 binary of version 0.3.2 does not seem to work on Alpine.
The provided binary for 0.3.1 works perfectly fine.

Reproduce this issue
Run a docker image with alpine:latest

$ wget https://github.com/carlpett/terraform-provider-sops/releases/download/v0.3.2/terraform-provider-sops_v0.3.2_linux_amd64.zip
$ unzip terraform-provider-sops_v0.3.2_linux_amd64.zip
$ ./terraform-provider-sops_v0.3.2

Output:
./terraform-provider-sops_v0.3.2: not found

$  wget https://github.com/carlpett/terraform-provider-sops/releases/download/v0.3.1/terraform-provider-sops_v0.3.1-linux-amd64
$ chmod +x terraform-provider-sops_v0.3.1-linux-amd64
./terraform-provider-sops_v0.3.1-linux-amd64

Output:

"This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically"

So if I am not mistaken, something seems to be missing in the 0.3.2 binary on Alpine; Ubuntu works perfectly fine.

Maybe this can be helpful for someone who runs into the same issue.

Question: What is the difference between sops_external and sops_file

I am confused about the use cases of sops_external and sops_file. This may be due to my inexperience with terraform.

The documentation says that sops_external is used to read files that might not be local, but gives a local_file provider as an example (which is counter-intuitive). Also sops_file and sops_external examples are almost equivalent.

When would you use one or the other?

issue with ~/.aws/config using credential_process

Hello,

I've an issue with using terraform-sops provider without credentials

~/.aws/config:

[profile dev]
credential_process = aws-vault exec dev-sso --json --prompt=osascript

By, trying plan over some resources on terraform, I got this error:

Error: Error getting data key: 0 successful groups required, got 0

  on datasources.tf line 103, in data "sops_file" "dbcreds":
 103: data "sops_file" "dbcreds" {
  source_file = "../../ns/${var.env}/secrets/postgresql-password.yml"
}

I've bumped to version 0.5.2 to support SOPS 3.6.0, but without success. How can I debug this, please?

I'm able to decrypt & re-encrypt using the sops Go binary 3.6.0.

does not have a provider named registry.terraform.io/hashicorp/sops

This provider does not seem to work with Terraform 0.13 under Linux (Ubuntu):

wget https://github.com/carlpett/terraform-provider-sops/releases/download/v0.5.3/terraform-provider-sops_0.5.3_linux_amd64.zip
unzip terraform-provider-sops_0.5.3_linux_amd64.zip
mv terraform-provider-sops_v0.5.3 ~/.terraform.d/plugins/

✗  terraform --version
Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/google v3.43.0
+ provider registry.terraform.io/hashicorp/google-beta v3.43.0
+ provider registry.terraform.io/hashicorp/local v2.0.0

✗  head -n1 sops.tf 
provider "sops" {}

✗  terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/google v3.43.0
- Using previously-installed hashicorp/local v2.0.0
- Using previously-installed hashicorp/google-beta v3.43.0
- Finding latest version of hashicorp/sops...

Error: Failed to install provider

Error while installing hashicorp/sops: provider registry registry.terraform.io
does not have a provider named registry.terraform.io/hashicorp/sops

Add support for .tfvars files

Hello, I was wondering if there is any plan (assuming there is a way to implement it, which I'm not sure) to add support for encrypted .tfvars files in addition to yaml and json.

I have used encrypted .tfvars files with SOPS and terragrunt in the past, relying on terragrunt pre/post hooks to decrypt and encrypt .tfvars files before I fed them to terraform using -var-file flags.

I think it would be nice if we could have support for .tfvars files here, in order to have all the consistency checks provided by terraform on variables (type checking, checking whether they have been declared, etc.).

To be more specific, this is what I have in mind:

# secrets.enc.tfvars
password = "superSecret"
# main.tf
terraform {
  required_providers {
    sops = {
      source  = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

data "sops_file" "secret" {
  # The content is actually loaded as terraform variables
  source_file = "secrets.enc.tfvars"
  input_type  = "tfvars"
}

variable "password" {
  type    = string
}

output "password" {
  value = var.password
}

Do you think it's actually possible to implement this? If yes, would you be interested in this feature?

Using KMS key in other account in Terraform Cloud

Hi,
Is there a way to provide AWS credentials to the SOPS provider so it uses those instead of the environment ones?

I have a sops yaml file encoded using a KMS key. The KMS key is defined in an AWS account, call it account A.

Account A -> KMS -> sops file.

Now, I have a workspace in Terraform Cloud and I feed the workspace with the env vars for account A:
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXX123
AWS_SECRET_ACCESS_KEY=MYSECRETACCESSKEY

When using the sops provider in Terraform Cloud, it looks like the sops_file data object will only decode the file if the AWS keys above grant access to the KMS key. However, this won't work for more complex scenarios where I run in multiple accounts and the key resides in an account different from the one those env var keys belong to.

So what I need is something like:

env vars:
AWS_ACCESS_KEY_ID=AKIAZZZZZZZZZZZZZ456
AWS_SECRET_ACCESS_KEY=ANOTHERACCESSKEY

provider "sops" {
  aws_access_key_id     = "AKIAXXXXXXXXXXXXX123"
  aws_secret_access_key = "MYSECRETACCESSKEY"
}

Is this possible?

can encrypt with a configured or specific key?

hello - this is a general question about this provider. It's unclear how / if this can be done:

I'd like to use terraform to provision a new key and a new key ring. Then I'd like to sops-encrypt a local file with that key.
Ideally I can then use the sops provider output for other recipes.

I guess I'm asking if I can specify a key, and then it would be something like

depends_on = [google_kms_crypto_key.myrobot.id]

Or maybe it could be done? Is there a clear path for a contribution? :-)

Invalid index error using terraform import

While my current sops setup works for the terraform plan operation, it doesn't seem to work with the terraform import operation.
I'm setting up a provider and one resource with the following setup.

data "sops_file" "secrets" {
  source_file = var.secret_file_path
}

provider "slack" {
  token = data.sops_file.secrets.data["slack.token"]
}
[some secrets]
slack:
    token: abcdefg

While terraform init, terraform plan and terraform apply all work flawlessly, I get the following error when trying to terraform import an existing resource into the terraform-managed one:

╷
│ Error: Invalid index
│ 
│   on /home/bp/src/kineo/common-resources-infrastructure/aws/slack.tf line 29, in provider "slack":
│   29:   token = data.sops_file.secrets.data["slack.token"]
│     ├────────────────
│     │ data.sops_file.secrets.data has a sensitive value
│ 
│ The given key does not identify an element in this collection value.
╵

If I provide the token "directly" without sops, the import works.
Is that a bug or am I missing something?

Failing decrypting from sops age key as environment variable.

Hi Calle,

First and foremost thanks for your great work with terraform sops provider.

To reduce the risk of having files containing secret keys around the filesystem, I have
applied the patch suggested in getsops/sops#946. This patch
allows exposing the SOPS secret key as an environment variable rather than a key
file, and moreover enhances the automation experience with GH Actions or any other CI
tool. Because sops PR 946 actually solves a big problem, I took the matter into my
own hands and compiled sops from PR 946, which includes the possibility to expose
the private key as the SOPS_AGE_KEY environment variable.

I have generated a new age key pair and exposed as:

export SOPS_AGE_RECIPIENTS=age1foobarfoobarfoobarfoobar

export SOPS_AGE_KEY=AGE-SECRET-KEY-XXXXXXXXXYYYYYYYYYYYYZZZZZZZZZZ

Created a secret.yaml sops file and added some custom test secrets
with sops secret.yaml. This actually proves that SOPS_AGE_KEY works
as expected. However, when I try to read those secrets from terraform
configured with the sops provider, I get this back:

│ Error: Error getting data key: 0 successful groups required, got 0

│ with data.sops_file.secrets,
│ on locals.tf line 2, in data "sops_file" "secrets":
│ 2: data "sops_file" "secrets" {

To me it looks like the provider is not passing the SOPS_AGE_KEY variable.
But if I unset SOPS_AGE_KEY and use export SOPS_AGE_KEY_FILE="${PWD}/key.txt",
everything works seamlessly and I'm able to read my secrets and create the
resources I want.

Any idea would be really appreciated.

Best regards,

Phillip

Terraform 12 Support

Trying to decrypt a simple sops file with terraform 0.12.x and this provider, I've run into an issue. It worked fine in terraform 0.11.14, but now I'm getting an API version error that I'm having trouble finding any real documentation on.

Has terraform 0.12 compatibility been confirmed already?

❯ tf plan

Error: Failed to instantiate provider "sops" to obtain schema: Incompatible API version with plugin. Plugin version: 4, Client versions: [5]

I am calling this from within a module. So my module initializes the provider, decrypts the requested file, and returns it in the requested HCL type. Shouldn't break anything, but... ya never know.

Nesting stopped working with Terraform v0.12

The nested json/yaml example stopped working with Terraform v0.12. Root values continue to work as expected.

secret.json

{
  "password": "foo",
  "db": {"password": "bar"}
}

main.tf

...
output "test" {
  value = data.sops_file.test.data.db.password
}

Output:

Error: Unsupported attribute

  on main.tf line 29, in output "test":
  29:   value = data.sops_file.test.data.db.password

This value does not have any attributes.

Version info:
Provider v0.3.2
Sops v3.2.0
Terraform v0.11.14 (works)
Terraform v0.12.2 (does NOT work)
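Two access patterns from the README appear to still work on 0.12: the flattened dotted-key map, or decoding the raw attribute; a sketch:

output "test-flattened" {
  # Nested values are exposed as dotted keys in the flattened map
  value = data.sops_file.test.data["db.password"]
}

output "test-decoded" {
  # Alternatively, decode the raw document into a Terraform object
  value = jsondecode(data.sops_file.test.raw).db.password
}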

Cannot use yaml dictionaries to populate terraform map() type

I have a terraform 0.12 module that accepts a map(string) as an input variable. (They are tags eventually passed to AWS.)

Example sops encrypted yaml file:

tags:
  bar: 'one'
  jax: 'two'

In my terraform module I'd have something like:

locals {
  tags =  data.sops_file.secrets.data["tags"]
}

But this errors with:

Error: Invalid index
  on vars.tf line 12, in locals:
  12:   tags       = data.sops_file.secrets.data["tags"]
    |----------------
    | data.sops_file.secrets.data is map of string with 20 elements
The given key does not identify an element in this collection value.
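A sketch of a possible workaround, decoding the raw document instead of indexing the flattened string map (assuming the file is YAML, as in the example above):

locals {
  # yamldecode() preserves the dictionary structure, yielding a real map
  tags = yamldecode(data.sops_file.secrets.raw).tags
}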

Issue with Terraform SOPS provider, PGP key and Terraform Cloud running on custom terraform cloud agents

The SOPS Terraform provider doesn't seem to run correctly on the Terraform Cloud Agents, fails during refresh phase, before the actual plan.
Versions of tools used:

• terraform 0.13.4 
• terraform-provider-sops v0.6.0
• gpg (GnuPG) 2.2.19
• libgcrypt 1.8.5

I have custom Docker images running as TFC Agents, so I can install/customise tools to debug this further. During the Docker build of those images, I import a secret PGP key used for encrypting the secrets into the keyring using gpg --import

Whenever I run the Terraform Cloud workspace, it first tries to do a plan (and as a part of that, because it's terraform 0.13.4, it tries to do a refresh of the data sources). That step fails with the following obscure error that doesn't offer enough information about what's specifically failing:

Terraform v0.13.4
Initializing plugins and modules...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.sops_file.gitlab_secrets_sops: Refreshing state... [id=-]
data.sops_file.cloudflare_secrets_sops: Refreshing state... [id=-]
data.sops_file.yandex_secrets_sops: Refreshing state... [id=-]
data.sops_file.datadog_secrets_sops: Refreshing state... [id=-]
data.sops_file.fluxcd_secrets_sops: Refreshing state... [id=-]
module.rc_db.random_password.pwd: Refreshing state... [id=none]

Error: Error getting data key: 0 successful groups required, got 0

  on data.tf line 8, in data "sops_file" "fluxcd_secrets_sops":
   8: data "sops_file" "fluxcd_secrets_sops" {

Error: Error getting data key: 0 successful groups required, got 0

  on data.tf line 12, in data "sops_file" "gitlab_secrets_sops":
  12: data "sops_file" "gitlab_secrets_sops" {

Dropping in a session on the agent, and installing SOPS, I can decrypt the secret files manually using sops -d just fine, so the right PGP key seems to be imported correctly.

I wonder why the plan/refresh fails while running sops -d manually on the agent works. 🤔

I'm running out of ideas, is there any way to enable the sops --verbose logging option for SOPS using the Terraform Sops Provider, or are there any other things I could try to debug this, considering the logs are not very detailed/helpful with regards to what's failing?

I also tried setting TF_LOG to DEBUG, but this didn't help for the sops provider.

Thanks for the help.

darwin_arm64 support

Hi @carlpett ,
thank you very much for this project!

Could you please add support for darwin_arm64?
Here's the error that I see when trying to perform terraform init for my project

╷
│ Error: Incompatible provider version
│ 
│ Provider registry.terraform.io/carlpett/sops v0.6.2 does not have a package available for your current platform, darwin_arm64.

Thank you very much!

Regards, Eugene

Option to mark data as NOT Sensitive.

I am struggling with adding the
username and password of an AWS RDS resource to a sops encrypted file.

I am using the https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest RDS module for creating my Postgres DBs,
but because of the way the modules are constructed, they attempt to output the username.
And because the username is marked as sensitive by the decryption module, terraform plan crashes.

❯ terraform plan
╷
│ Error: Output refers to sensitive values
│
│   on .terraform/modules/master/modules/db_instance/outputs.tf line 76:
│   76: output "this_db_instance_username" {
│
│ Expressions used in outputs can only refer to sensitive values if the sensitive attribute is true.

I know this error is not related to your module at first glance.
But the error occurs because you tag each value as sensitive.

If we get a way to have control over that, I would be super grateful.

In my case, my submodules control what is sensitive or not,
so I would prefer to leave that control to them.

/Peter
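On Terraform 0.15+, one possible (deliberately sensitivity-defeating) workaround is Terraform's built-in nonsensitive() function when passing the decrypted value into the module; the key name below is hypothetical:

module "master" {
  source = "terraform-aws-modules/rds/aws"
  # ...
  # nonsensitive() strips the sensitive marking from the decrypted value,
  # so downstream outputs no longer have to be declared sensitive.
  username = nonsensitive(data.sops_file.secrets.data["db.username"])
}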

Use proper terraform plugin versioning

It is currently not possible to specify any version of the sops provider as it does not contain a version tag. When I run a plan with TF_LOG=DEBUG it shows me the version numbers of all plugins but the sops version always shows 0.0.0, for example:

2019/06/03 16:29:42 [DEBUG] found valid plugin: "sops", "0.0.0", "/Users/waddles/.terraform.d/plugins/terraform-provider-sops"

As a consequence, deleting the ~/.terraform.d/ directory and reinstalling will build the latest commit, which has breaking changes to support Terraform 0.12. Also, you only provide binaries for Linux, which means anyone not using Linux but still on Terraform 0.11 has to manually build the binary from an earlier commit.

Please provide binaries for other platforms and also specify the version of the plugin. No doubt this will also be required for #11

SOPS with Multi account AWS account

Team,

We are trying to integrate the SOPS solution with our AWS Secrets Manager using terraform. We have multiple AWS accounts, and in one of the accounts we manage all the secrets (for all of the accounts).

I am trying the following method to push the encrypted secret to the AWS Secrets Manager of the different accounts.

  1. Connected to Account A (using AWS credentials).
  2. Encrypted the secret file using the KMS key (used STS from Account B):
   > aws sts assume-role \
     --role-arn <arn> \
     --role-session-name <temp session>
   > sops -k $aws_kms_dev_arn -e test_secrets_dev.yml > test_secrets_enc_dev.yml
  3. The file is encrypted successfully and can be decrypted using the KMS ARN (with STS).
  4. Unset the temporary AWS STS credentials and created the following resources to add the encrypted secrets into AWS Secrets Manager.

provider.tf

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

data file:

data "sops_file" "secrets" {
  source_file = " test_secrets_enc_dev.yml"
}

which is throwing the following error.

Error: Failed to get the data key required to decrypt the SOPS file.

Group 0: FAILED
  <kms arn>: FAILED
    - | Error decrypting key: AccessDeniedException: The ciphertext
      | refers to a customer master key that does not exist, does
      | not exist in this region, or you are not allowed to access.
      | 	status code: 400, request id:
      | 0fc4254a-adc2-4975-ba3d-2de560b82a63

Recovery failed because no master key was able to decrypt the file. In
order for SOPS to recover the file, at least one key has to be successful,
but none were.

  on secrets.tf line 79, in data "sops_file" "secrets":
  79: data "sops_file" "secrets" {


Releasing state lock. This may take a few moments...

I've tried to add the kms_key_id / key_id to it, but there is no option to do so.

data "sops_file" "secrets" {
  source_file = "test_secrets_enc_dev.yml"
  kms_key_id = data.aws_kms_alias.cross_account_kms_alias.id
}

error:


Error: Unsupported argument

  on secrets.tf line 82, in data "sops_file" "secrets":
  82:   kms_key_id = data.aws_kms_alias.cross_account_kms_alias.id

An argument named "kms_key_id" is not expected here.

Is there a way to pass the kms_key_id to decrypt the secret file using data resources?

"Error getting data key: 0 successful groups required, got 0" - GitHub Actions

Hi All,

I seem to be running into this issue when running through GitHub Actions.

Locally this sops provider works great: gcloud auth application-default impersonates a service account with 'owner' permissions, and we get no errors on terraform apply.

In GA, we are using Workload Identity Federation, which also impersonates the same service account, yet we get this error on the sops step decrypting our env file. If I comment out the sops module then it builds fine, so Workload Identity Federation is working.

I can provide code if required, but from a high level, is there anything I'm missing? Does this provider not work with WLIF?

Missing license

Please add a license.

It would also be useful to have something that we can run in an unimportant Amazon account to test that this works.

Similarly, some kind of indication of whether you actually want (or recommend) anyone else to use this would also be useful.

cannot use non-json raw file

Terraform: 0.14.5
SOPS plugin: 0.5.3

I'm having trouble with using a raw file that is not JSON:

Template:

provider "sops" {}

data "sops_file" "dbauth" {
    source_file = "dbauth.env"
    input_type = "raw"
}

output "dbauth" {
    value = data.sops_file.dbauth.raw
}

Decrypted content of dbauth.env:

#
# Don't use quotes around values!
#
db-password=myPassword

When running terraform plan, I get:

10:23 $ terraform plan

Error: Error unmarshalling input json: invalid character '#' looking for beginning of value

  on sops.tf line 3, in data "sops_file" "dbauth":
   3: data "sops_file" "dbauth" {

Am I missing out on anything? Is this supposed to work with arbitrary raw data?

Provider doesn't seem to recognize `yaml` files

I tried to use this provider to decrypt a yaml file encrypted by sops, but received this error:

Error: Error unmarshalling input json: invalid character 'g' looking for beginning of value.

  on terraform/modules/k8-cluster-apps/cluster-apps.tf line 24, in data "sops_file" "prometheus_operator_secrets":
  24: data "sops_file" "prometheus_operator_secrets" {

It looks like the provider is trying to interpret the file as JSON, even though it's YAML. g is the first letter in the file, by the way.

Here's the terraform section referenced by that error message:

data "sops_file" "prometheus_operator_secrets" {
  source_file = "${local.helm_values_dir}/azure/prometheus-operator/secrets.yaml"
  input_type  = "raw"
}
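If the intent is to read the YAML keys rather than the raw document, input_type = "yaml" (as used in other reports above) may be what is needed; a sketch:

data "sops_file" "prometheus_operator_secrets" {
  source_file = "${local.helm_values_dir}/azure/prometheus-operator/secrets.yaml"
  # Parse the decrypted document as YAML instead of passing it through raw
  input_type  = "yaml"
}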

Unable (difficult) to use with GCP's KMS on Terraform Cloud

Hello,

I was interested to use the sops provider on Terraform Cloud to decrypt secrets encrypted via GCP's KMS, but it's not possible to have it work correctly at the moment:

  • sops uses the Google Cloud SDK, which reads its configuration from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
  • The GOOGLE_APPLICATION_CREDENTIALS variable must contain the path to a service account private key file
  • In Terraform Cloud, we can setup environment variables, but we can't write files directly in the workspace.
  • So the sops provider (sops actually) is never able to find the credentials file and to authenticate against GCP :(

I understand this is more a sops / Google Cloud SDK issue, rather than the sops provider, but since this mostly affects restricted environment where we'd like to run Terraform, I wanted to raise this here.

There are a few possible alternatives at the moment:

  • Commit the credentials file directly into the repository used by Terraform Cloud and export GOOGLE_APPLICATION_CREDENTIALS to point to this local file. This is, for obvious reason, a huge no-no :)

  • I haven't tried it, but it might be possible to work around the limitation with local_file, the credentials in a Terraform Cloud variable and a dependency between the local_file and the data source. Something like this:

    # Considering the following Terrafom Cloud variables:
    #
    # * var.google_credentials: the content of the credentials file, as a Terraform
    #   Cloud HCL secret
    #
    # * `GOOGLE_APPLICATION_CREDENTIALS`: a Terraform Cloud environment variable
    #   containing a path name, like `gcp-creds.json`
    
    resource "local_file" "google_credentials" {
      filename          = "gcp-creds.json" # Must be the same as $GOOGLE_APPLICATION_CREDENTIALS
      content           = var.google_credentials
      sensitive_content = true
    }
    
    provider "sops" {}
    
    data "sops_file" "secrets" {
      source_file = "secrets.yaml"
      depends_on  = [local_file.google_credentials]
    }

    I haven't tried it: it's a bit ugly, but that may work if sops initialization
    is lazy :)

  • Support initializing sops with an alternative variable, like the Terraform provider itself is doing using GOOGLE_CREDENTIALS.

    Without some help from sops, I'm not sure if that's doable though.

Any other idea on how to use the sops provider with Terraform Cloud and GCP?

Using AWS KMS in Terraform Cloud

Hello, I’m facing an issue currently where I get the following error

Error: Error getting data key: 0 successful groups required, got 0
with data.sops_file.infra-secrets
on main.tf line 5, in data "sops_file" "infra-secrets":
data "sops_file" "infra-secrets" {

This appears to be tied to a lack of AWS permissions to use the key. I’m not sure what I’m missing or need to do to make this function. I’ve tried the following:

  1. Declare an AWS Provider with the appropriate configuration to assume directly into the role:
provider "aws" {
  assume_role {
    role_arn     = var.role_arn
    session_name = "Terraform"
    external_id  = var.external_id
  }
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.aws_region
}

variable "access_key" {}
variable "secret_key" {}
variable "aws_region" {}
variable "role_arn" {}
variable "external_id" {}

With the variables assigned via the terraform cloud UI, this pattern works for our other use cases. In addition, because I was concerned about a provider initialization order issue, I put this AWS provider above sops, and put in the following to ensure that AWS was fully functional before Sops attempted to decrypt:

data "aws_caller_identity" "current" {}

data "sops_file" "infra-secrets" {
  source_file = "infra-secrets.yaml"
  depends_on = [
    data.aws_caller_identity.current,
  ]
}
  2. Alternatively, use the Terraform Cloud variable manager to set environment variables:
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=...

The user here has the permission for the role specified in the associated sops file:

sops:
    kms:
    -   arn: arn:aws:kms:us-west-2:{account-id}:key/{key-id}
        created_at: '...'
        enc: ...
        aws_profile: "arn:aws:iam::{account-id}:role/{role-id}"

These two methods did not resolve the issue, so I feel like I'm either missing something foundational, or there is unexpected behaviour here.

My theory is that the provider (via sops, via the AWS SDK) is somehow misbehaving, but I admit that my attempts to follow the code and debug stalled out a bit in the middle of the sops decrypt package.

Register as an official provider/plugin

Hi,

We're finding this incredibly useful, thank you for writing it!
Are there any plans to have this submitted as an "official" provider/plugin?

It would be preferable if there wasn't a separate install step required, and if Terraform could install it alongside all the other providers.

Create SOPS file with TF generated secrets

Is there any way to produce a SOPS encrypted file from Terraform?

I can certainly do this by using the local_file module/resource and calling SOPS just after Terraform has dumped the plaintext. It would be much more convenient to do it directly, given we already have this SOPS provider.

Migration Steps from 0.11 provider blocks to 0.12.20+/0.13 required blocks

Just went through this, so it's not really an issue, more of a migration guide. If people think it should be added to a troubleshooting README, I can create a pull request.

Originally, we had been using the sops provider via wget to a local ~/.terraform.d/plugins/,
then calling

provider "sops" {}

With terraform 0.12.20 and in 0.13 you can call it via a required_providers block (Yay!)

    sops = {
      source = "carlpett/sops"
      version = "0.5.1"
    }

What happens when you change to this is that it creates a new provider distinct from the one that was used previously.

- Finding carlpett/sops versions matching "0.5.1"...
- Installing carlpett/sops v0.5.1...
- Installed carlpett/sops v0.5.1 (self-signed, key ID 1468AC14E6819667)

But if you have used the sops provider previously, after the upgrade you will get an error:

- Finding latest version of -/sops...

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider -/sops:
provider registry registry.terraform.io does not have a provider named
registry.terraform.io/-/sops

That is because the original provider still exists in the state file:

terraform providers
Providers required by configuration:
.

└── provider[github.com/carlpett/sops] 0.5.1


Providers required by state:

    provider[registry.terraform.io/-/sops]

To update this, you must remove the data source from your terraform.state file (or use terraform state replace-provider, as shown above).

Once this has been done and init has been re-run, you are using the new provider and can proceed.

Thanks!

Version Pinning

Just a suggestion to get a version scheme going to prevent breaking changes in the future:

provider "sops" {
  version = "~> 1.0"
}

Question: Where does sops_file get credentials for azure_keyvault

Where does sops_file get Azure credentials when accessing Azure Key Vault?

My files are encrypted using this .sops.yaml. Will it use the same credentials for all 3 Key Vaults?

creation_rules:
  - path_regex: \.(json|yaml)$
    key_groups:
      - azure_keyvault:

          - vaultUrl: 'https://vault-one.vault.azure.net'
            key: 'sops-key'
            version: 'xxxx'

          - vaultUrl: 'https://vault-two.vault.azure.net'
            key: 'sops-key'
            version: 'xxxx'

          - vaultUrl: 'https://vault-three.vault.azure.net'
            key: 'sops-key'
            version: 'xxxx'

Yaml lists

Does this provider support lists in yaml? I can't seem to get it to work; terraform claims that the data resource "does not have attribute" for the yaml list item.

example:

list_of_things:
- one
- two

data "sops_file" "file" {
  source_file = "file.enc.yaml"
}

locals {
  list_of_things = "${data.sops_file.file.data.list_of_things}"
}
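A sketch of a workaround along the lines of the README's raw examples, decoding the whole YAML document so the list survives intact:

locals {
  # yamldecode() returns the document as a Terraform value, lists included
  list_of_things = yamldecode(data.sops_file.file.raw).list_of_things
}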

Obscure error when using KMS alias with AWS

Hi all,

I tried to use a SOPS file with a KMS alias with your provider, but I received the following obscure error.

Error: Error getting data key: 0 successful groups required, got 0

I tried the same sops file with the original key and with the alias.
The CLI works correctly in both cases, but the terraform provider fails when the alias is declared.

How to reproduce?

data "sops_file" "secret_file" {
  source_file = "sops/key-dev.json"
}

and a sops file with key/alias declared.

Versions

$ terraform version
Terraform v0.12.20
+ provider.aws v2.49.0
+ provider.sops (unversioned) <-- v0.3.2

On MacOS Catalina 10.15.3

Terraform Cloud usage with PGP keys always shows it will change the values, but it doesn't which is confusing.

Hi, I'm using this provider in Terraform Cloud. I added the binary under the terraform.d/plugins/linux_amd64/terraform-provider-sops_v0.5.1 path.

Terraform version: v0.12.29
SOPS Provider version: v0.5.1

The plan always shows that the values will be changed, but after an apply I can see that the values are not actually changed, which is what I expected initially.
This makes things very confusing: you don't know what will be changed beforehand (all the places where the SOPS data sources are used show up as candidates for change during plan), but they won't actually be changed if the values haven't (which you can only see after the apply).
This happens because of this:

In particular, the depends_on meta-argument is also available within data blocks, with the same meaning and syntax as in resource blocks.

However, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge.

Due to this behaviour, we do not recommend using depends_on with data resources.

Is there another (better) way to import the private PGP key, or any other suggestions on how this could be fixed?

In this case, the cloud Provider is Yandex, so we're using PGP for the encryption of secrets using SOPS.

To be able to to import the private PGP key in Terraform Cloud, I defined a resource like this:

resource "local_file" "gpg_import" {
  sensitive_content = base64decode(var.my_sops_private_key_b64_secret)
  filename = "${path.root}/my_sops_pgp_private_key.asc"
  provisioner "local-exec" {
    command = <<EOH
gpg --import ${path.root}/my_sops_pgp_private_key.asc
EOH
  }
}

And I read the secret encrypted files like this:


data "sops_file" "yandex_secrets_sops" {
  source_file = "secrets/yandex_secrets.enc.yaml"
  depends_on = [local_file.gpg_import]
}

and I use one of those values in the secret file, to define a password for the database like this:


module "rc_db" {
  source  = "app.terraform.io/MY_POSTGRESQL_YANDEX_MODULE"
  version = "0.1.0"
  db_cluster_name     = "some-val"
  db_vpc_id           = "some-val"
  db_subnet_id        = "some-val"
  db_name             = "some-val"
  db_owner            = "db_owner"
  db_password         = data.sops_file.yandex_secrets_sops.data["yandex_db_password"]
  db_assign_public_ip = true
}

The problem is that anytime I do a plan:
• it shows me that it will create the new resource (local_file.gpg_import, which is expected), as each TF cloud run is on a different worker
• it shows that it will update the data source data.sops_file.yandex_secrets_sops
• it also shows that it will change the resource for the database in place, but the values haven't changed, so that's not correct.

  # data.sops_file.yandex_secrets_sops will be read during apply
  # (config refers to values not yet known)
 <= data "sops_file" "yandex_secrets_sops"  {
      + data        = (known after apply)
      + id          = (known after apply)
      + raw         = (known after apply)
      + source_file = "secrets/yandex_secrets.enc.yaml"
    }

  # local_file.gpg_import will be created
  + resource "local_file" "gpg_import" {
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./my_sops_pgp_private_key.asc"
      + id                   = (known after apply)
      + sensitive_content    = (sensitive value)
    }


------------------ Omitted ------------------ 
          - user {
          - grants   = [] -> null
          - login    = true -> null
          - name     = "db_owner" -> null
          - password = (sensitive value)

          - permission {
              - database_name = "some-val" -> null
            }
        }
      + user {
          + grants   = []
          + login    = true
          + name     = "db_owner"
          + password = (sensitive value)

          + permission {
              + database_name = "some-val"
            }
        }

But when I do an apply, the only resource that gets changed is local_file.gpg_import.
This is very confusing, as you don't know what the changes will be before the actual run.

Output screwed with passphrase input dialog

When GPG prompts for the passphrase, terraform keeps printing state-refresh output to the screen, which interferes with the dialog.

Is it possible to pause on the dialog?

                                                                                      ┌────────────────────────────────────────────────────────────────┐
                                                                                      │ Please enter the passphrase to unlock the OpenPGP secret key:  │
                                                                                      │ "Yujun Zhang <*>"                              │
                                                                                      │ 4096-bit RSA key, ID *1E54********C60C,                         │
                                                                                      │ created 2018-08-16 (main key ID 208C********D205).             │
                                                                                      │                                                                │
                                                                                      │                                                                │
                                                                                      │ Passphrase: aws_s3_bucket.data: Refreshing state... (ID: datal)
                                                                                      │                                                                │                                   aws_db_parameter_group.this: Refreshing state... (ID: rds)                                                            │         <OK>                                    <Cancel>       │
                          aws_iam_role.monitoring: Refreshing state... (ID: monitoring.rds)─────────────────────────────────────────┘
                                                                                                              aws_db_option_group.this: Refreshing state... (ID: rds)
                                                                                                                                                                                        data.aws_security_group.bastion: Refreshing state...
                                                                                                                                                                                                                                            da

Use sops for state encryption

Given sops' wonderful support for sharing secrets across teams, would it be possible to use this plugin to encrypt terraform's state "locally", in a way that could be shared more trivially via git, etc.?

If this requires work in terraform, let me know, I'll file a feature request there too.

Thanks for this provider!

"Error getting data key: 0 successful groups required, got 0" during Atlantis plan

Hi!

During terraform plan I'm getting the following error:

Error: Error getting data key: 0 successful groups required, got 0

  on atlantis.tf line 1, in data "sops_file" "secrets":
   1: data "sops_file" "secrets" {

I tried different terraform versions, with no results. Executing this locally works properly; the plan fails only in the container.

Environment:

  • Provider: GCP
  • Host: GKE
  • Atlantis: v0.17.0 + Google SDK + Sops installed on container
  • Terraform versions: 0.13.7 and 1.0.5 both tested, same error
  • sops provider: 0.6.3

atlantis.tf

data "sops_file" "secrets" {
  source_file = "atlantis_secrets.sops.yaml"
}


resource "helm_release" "atlantis" {
  name             = "atlantis"
  repository       = "https://runatlantis.github.io/helm-charts"
  chart            = "atlantis"
  namespace        = "atlantis"
  create_namespace = true
  version          = "3.14.0"
  wait             = false

  values = [
    file("atlantis_values.yaml")
  ]

[shrunk, github tokens are here]
}

sopsed file, data/tokens redacted, in the file I have GH related secrets

user: ENC[AES256_GCM,data:...,tag:...,type:str]
token: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
secret: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
    kms: []
    gcp_kms:
        - resource_id: projects/.../locations/global/keyRings/.../cryptoKeys/...-key
          created_at: "2021-08-05T15:18:30Z"
          enc: ...

custom values.yaml

environmentRaw: 
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/atlantis-gcp-secret/credentials.json
  - name: KUBECONFIG
    value: /home/atlantis/.kube/config
  - name: ATLANTIS_REPO_CONFIG_JSON
    value: |
      {"repos":[{"id":"/.*/","pre_workflow_hooks":[{"run":"gcloud auth activate-service-account [email protected] \\\n  --key-file=/var/secrets/atlantis-gcp-secret/credentials.json"}]}]}

SA activation was needed in order to have a working helm provider on GKE.

Atlantis has Editor permissions in GCP, so it has R/W access to all resources.

Dockerfile:

FROM runatlantis/atlantis:v0.17.0 

RUN apk add python3 wget vim 
# Downloading and installing gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz \
  && mkdir -p /usr/local/gcloud \
  && tar -C /usr/local/gcloud -xf /tmp/google-cloud-sdk.tar.gz \
  && /usr/local/gcloud/google-cloud-sdk/install.sh \
  && rm -rf /tmp/google-cloud-sdk.tar.gz \ 
  && curl -L https://github.com/mozilla/sops/releases/download/v3.7.1/sops-v3.7.1.linux > /tmp/sops \
  && mv /tmp/sops /usr/local/bin/sops && chmod +x /usr/local/bin/sops

# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin

COPY atlantis/global/kubeconfig /home/atlantis/.kube/config

Do you have any idea if it's related to sops or the provider itself, or how I can debug this further?

Logs with DEBUG level didn't show anything really helpful.

Cheers

Docs: Specify that sops will consume credentials as per binary and not from provider

I've ended up having to do this to make the decrypt work in Azure DevOps with an ARM Service Connection. Am I missing something? Shouldn't the provider be supplying the relevant details?

Thanks in advance for any attention.

   - job: terraform_apply
      displayName: 'Terraform apply'
      steps:
      - checkout: self
        clean: true
      - task: AzureCLI@2
        inputs:
          azureSubscription: '<name>'
          scriptType: 'bash'
          scriptLocation: 'inlineScript'
          addSpnToEnvironment: true
          failOnStandardError: true
          inlineScript: |
            echo "##vso[task.setvariable variable=ARM_CLIENT_ID;issecret=true]$servicePrincipalId"
            echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET;issecret=true]$servicePrincipalKey"
            echo "##vso[task.setvariable variable=ARM_TENANT_ID;issecret=true]$tenantId"
      - task: TerraformInstaller@0
        displayName: 'Install terraform (version: ${{ variables.terraformVersion }})'
        inputs:
          terraformVersion: ${{ variables.terraformVersion }}
      - task: TerraformCLI@0
        displayName: Terraform init
        name: terraform_init
        inputs:
          command: 'init'
          backendType: 'azurerm'
          backendServiceArm: '<name>'
          workingDirectory: '$(System.DefaultWorkingDirectory)/$(terraformEnvBasePath)'
          allowTelemetryCollection: false
      - task: TerraformCLI@0
        displayName: 'Terraform apply'
        name: terraform_apply
        env: 
          AZURE_CLIENT_ID: $(ARM_CLIENT_ID)
          AZURE_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
          AZURE_TENANT_ID: $(ARM_TENANT_ID)
          AZURE_AUTH_METHOD: "clientcredentials"
        inputs:
          command: 'apply'
          workingDirectory: '$(System.DefaultWorkingDirectory)/$(terraformEnvBasePath)'
          environmentServiceName: '<name>'
          allowTelemetryCollection: false
      - task: Bash@3
        condition: always()
        displayName: 'Clear environment variables'
        inputs:
          targetType: 'inline'
          script: |
            clear="clear"
            echo "##vso[task.setvariable variable=ARM_CLIENT_ID;issecret=true]$clear"
            echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET;issecret=true]$clear"
            echo "##vso[task.setvariable variable=ARM_TENANT_ID;issecret=true]$clear"

Add the ability to support raw data

Sops supports encrypting raw string data (i.e. not yaml- or json-formatted), so it would be useful to be able to use this provider to decrypt that.

Do we still need a blank sops provider?

provider "sops" {
  # left blank on purpose
}

We are in the process of upgrading from Terraform 0.12 to 0.15. Do we still need this blank provider block? It doesn't seem like it.
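Based on the README examples, the required_providers entry alone appears to be sufficient on 0.13+; a sketch:

terraform {
  required_providers {
    sops = {
      source  = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

# No provider "sops" {} block is needed; the provider has no configuration arguments.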

Decrypting binaries

Not getting desired results when decrypting binary files.

Creating a binary file and encrypting it:

dd if=/dev/urandom of=randomfile bs=1024 count=2
sops -e --azure-kv $SOPS_KEY randomfile > encfile
sha512sum randomfile
db61e16c228c9d7d4a18832310cef488201ab0675fa641e78f2fa4be2f3ca13798ef2810fa79b691db0e78ee04f18007f6965baab9a4c5068f03102fd5c184c0  randomfile

In terraform:

data "sops_file" "test" {
  source_file = "encfile"
  input_type  = "raw"
}

resource "local_file" "randomfile" {
  content  = data.sops_file.test.raw
  filename = "decodedfile"
}

The file decoded and written back out:

72a5d72f08e5deda71c8e137b24b9df8394e6396af55de69f1bb768eb5220c00ac721d3f3e62c0d9e22e50e0745562dd2cbc15c744672c800129145517097de6  decodedfile

Credentials are supplied using the environment vars AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET;
SOPS_KEY is the full url to the secret used to encrypt the file.

Using sops manually results in the correct decryption

sops -d --extract '["data"]' --azure-kv $SOPS_KEY encfile > dencfile
db61e16c228c9d7d4a18832310cef488201ab0675fa641e78f2fa4be2f3ca13798ef2810fa79b691db0e78ee04f18007f6965baab9a4c5068f03102fd5c184c0  dencfile

Plaintext secrets in Terraform state

If you take a look at the Terraform state produced by the example in the README, you can see the plaintext secrets:

{
  "version": 4,
  "terraform_version": "0.12.10",
  "serial": 1,
  "lineage": "4d2fc9e5-b6d1-9117-2a6a-419a2696783a",
  "outputs": {
    "do-something": {
      "value": "foo",
      "type": "string"
    },
    "do-something2": {
      "value": "bar",
      "type": "string"
    }
  },
  "resources": [
    {
      "mode": "data",
      "type": "sops_file",
      "name": "demo-secret",
      "provider": "provider.sops",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "data": {
              "db.password": "bar",
              "password": "foo"
            },
            "id": "-",
            "input_type": null,
            "raw": null,
            "source_file": "demo-secret.enc.json"
          }
        }
      ]
    }
  ]
}

This, in and of itself, is not abnormal for a Terraform provider that handles secrets; however, it caught a coworker of mine off-guard.

If it's not possible to store ciphertext/hashes/whatever in the state, can we at least put a big disclaimer somewhere that this is the case and heavily recommend using an encrypted backend state store?
