
terraform-k8s's Introduction

📣 Terraform Cloud Operator v2 📣

We are excited to announce the release of the latest version of the Terraform Cloud Operator for Kubernetes, now available at hashicorp/terraform-cloud-operator. This update introduces a more streamlined set of CRDs, along with new features such as Agent Pools, the capability to deploy and scale agents, Run Tasks, and Workspace Notifications. To learn more about these enhancements and to get started, please check out our announcement blog.


Terraform Cloud Operator for Kubernetes (v1)

The Terraform Cloud Operator for Kubernetes provides first-class integration between Kubernetes and Terraform Cloud by extending the Kubernetes control plane to enable lifecycle management of cloud and on-prem infrastructure through Kubernetes manifests. Manifests can be deployed and managed using kubectl, Terraform, GitOps tools, or any other tool that allows you to manage Kubernetes custom resources.

This operator provides a unified way to manage a Kubernetes application and its infrastructure dependencies through a single Kubernetes CustomResourceDefinition (CRD). After the infrastructure dependencies are created, pertinent information such as endpoints and credentials are returned from Terraform Cloud to Kubernetes.

Use Case

  • Manage the lifecycle of cloud and on-prem infrastructure through a single Kubernetes custom resource
    • Install the operator from the corresponding Helm Chart to enable the management of infrastructure services from any Kubernetes cluster.
    • Provision and manage infrastructure from any provider, such as AWS, Azure, GCP, and any of the hundreds of other Terraform providers, to use them with your existing application configurations, through Terraform Cloud or Terraform Enterprise.
    • Deploy and manage your Kubernetes and infrastructure resources in a single Git repository, separate Git repositories, or directly from a module in the Terraform Registry, to match your existing operating model.
    • Provide governance for your infrastructure resources using policy-as-code with OPA Gatekeeper and HashiCorp Sentinel.

You can read more about this project and its potential use cases on our blog.

Terraform also enables you to create and publish custom infrastructure providers through the Terraform SDK. Once you create a new Terraform provider and publish it to the Terraform Registry, you can consume it with the operator.

Join us in the #terraform-providers channel on the Kubernetes Slack to discuss this and other Terraform and Kubernetes projects (sign up here).

Note: This project is versioned separately from Terraform. Supported Terraform versions are 0.12 and above. By versioning this project separately, we can iterate on Kubernetes integrations more quickly and release new versions without forcing Terraform users to do a full Terraform upgrade.

We take Terraform's security and our users' trust very seriously. If you believe you have found a security issue in the Terraform Cloud Operator for Kubernetes, please responsibly disclose by contacting us at [email protected].

Installation and Configuration

Namespace

Create the namespace where you will deploy the operator, Secrets, and Workspace resources.

$ kubectl create ns $NAMESPACE

Authentication

The operator must authenticate to Terraform Cloud. Because the operator runs within the cluster, Kubernetes authentication is already handled.

  1. Generate a Terraform Cloud Team API token at https://app.terraform.io/app/$ORGANIZATION/settings/teams, where $ORGANIZATION is your organization name.

  2. Create a file for storing the API token and open it in a text editor.

  3. Insert the generated token ($TERRAFORM_CLOUD_API_TOKEN) into the text file formatted for Terraform credentials.

    credentials app.terraform.io {
      token = "$TERRAFORM_CLOUD_API_TOKEN"
    }
  4. Create a Kubernetes secret named terraformrc in the namespace. Reference the credentials file ($FILENAME) created in the previous step.

    $ kubectl create -n $NAMESPACE secret generic terraformrc --from-file=credentials=$FILENAME

    Ensure terraformrc is the name of the secret, as it is the default secret name defined under the Helm value syncWorkspace.terraformRC.secretName in the values.yaml file.

If you have the free tier of Terraform Cloud, you will only be able to generate a token for the one team associated with your account. If you have a paid tier of Terraform Cloud, create a separate team for the operator with "Manage Workspaces" access.

Note that a Terraform Cloud Team API token is a broad-spectrum token. It allows the token holder to create workspaces and execute Terraform runs. You cannot limit the access it provides to a single workspace or role within a team. In order to support a first-class Kubernetes experience, security and access control to this token must be enforced by Kubernetes Role-Based Access Control (RBAC) policies.
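The credential steps above can be sketched end-to-end in shell. The token value is a placeholder, and the final kubectl command is shown commented out since it requires a cluster and a $NAMESPACE:

```shell
# Sketch: write the Terraform credentials file, then load it into the
# cluster as the secret the operator expects. Placeholder values only.
TERRAFORM_CLOUD_API_TOKEN="<your-team-api-token>"   # placeholder
FILENAME="credentials"

# Format the token the way the Terraform CLI expects.
cat > "$FILENAME" <<EOF
credentials app.terraform.io {
  token = "$TERRAFORM_CLOUD_API_TOKEN"
}
EOF

echo "wrote $FILENAME"

# With a cluster available, create the secret from the file:
# kubectl create -n "$NAMESPACE" secret generic terraformrc \
#   --from-file=credentials="$FILENAME"
```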

Workspace Sensitive Variables

Sensitive variables in Terraform Cloud workspaces often take the form of credentials for cloud providers or API endpoints. They enable Terraform Cloud to authenticate against a provider and apply changes to infrastructure.

Create the secret for the namespace that contains all of the sensitive variables required for the workspace.

$ kubectl create -n $NAMESPACE secret generic workspacesecrets --from-literal=SECRET_KEY=$SECRET_KEY --from-literal=SECRET_KEY_2=$SECRET_KEY_2 ...

Ensure workspacesecrets is the name of the secret, as it is the default secret name defined under the Helm value syncWorkspace.sensitiveVariables.secretName in the values.yaml file.

In order to support a first-class Kubernetes experience, security and access control to these secrets must be enforced by Kubernetes Role-Based Access Control (RBAC) policies.

Terraform Version

By default, the operator will create a Terraform Cloud workspace with a pinned version of Terraform.

Override the Terraform version that will be set for the workspace by changing the Helm value syncWorkspace.terraformVersion to the Terraform version of choice.
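As a sketch, a minimal Helm values override (the version shown is illustrative, not a recommendation) might look like:

```yaml
# values.yaml override (sketch)
syncWorkspace:
  terraformVersion: "0.13.5"
```

Pass the override file to Helm with the -f flag, for example: helm install -f values.yaml ...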

Deploy the Operator

Use the Helm chart repository to deploy the Terraform Cloud Operator to the namespace you previously created.

$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm search repo hashicorp/terraform
$ helm install --namespace ${NAMESPACE} hashicorp/terraform --generate-name

Create a Workspace

The Workspace CustomResource defines a Terraform Cloud workspace, including variables, Terraform module, and outputs.

Here are examples of the Workspace CustomResource.

The Workspace Spec includes the following parameters:

  1. organization: The Terraform Cloud organization you would like to use.

  2. secretsMountPath: The file path defined on the operator deployment that contains the workspace's secrets.

Additional parameters are outlined below.
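Putting the required parameters together with a module (described below), a minimal Workspace manifest might look like this sketch, where the organization name is a placeholder:

```yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: greetings
spec:
  organization: my-org             # your Terraform Cloud organization
  secretsMountPath: "/tmp/secrets" # where workspace secrets are mounted
  module:
    source: "hashicorp/hello/random"
    version: "3.1.0"
```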

Modules

The Workspace will only execute Terraform configuration in a module. It will not execute *.tf files.

Information passed to the Workspace CustomResource is rendered into a template Terraform configuration that uses the module block. Specify a module with a remote source. Publicly available VCS repositories, the Terraform Registry, and private module registries are supported. In addition to source, specify a module version.

module:
  source: "hashicorp/hello/random"
  version: "3.1.0"

The above Kubernetes definition renders to the following Terraform configuration.

module "operator" {
  source = "hashicorp/hello/random"
  version = "3.1.0"
}

Variables

Variables for the workspace must match the module's input variables. You can define Terraform variables in two ways:

  1. Inline

    variables:
      - key: hello
        value: world
        sensitive: false
        environmentVariable: false
  2. With a Kubernetes ConfigMap reference

    variables:
      - key: second_hello
        valueFrom:
          configMapKeyRef:
            name: say-hello
            key: to
        sensitive: false
        environmentVariable: false

The above Kubernetes definition renders to the following Terraform configuration.

variable "hello" {}

variable "second_hello" {}

module "operator" {
  source = "hashicorp/hello/random"
  version = "3.1.0"
  hello = var.hello
  second_hello = var.second_hello
}

The operator pushes the values of the variables to the Terraform Cloud workspace. For secrets, set sensitive to true; the workspace sets them as write-only. Denote workspace environment variables by setting environmentVariable to true.

Sensitive variables should already be initialized as described in Workspace Sensitive Variables. Define them by setting sensitive: true. Do not define a value or use a ConfigMap reference, as the value read from the mounted file will override any value you set.

variables:
  - key: AWS_SECRET_ACCESS_KEY
    sensitive: true
    environmentVariable: true
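The configMapKeyRef example earlier assumes a ConfigMap named say-hello exists in the same namespace. A minimal sketch of that ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: say-hello
data:
  to: world   # consumed as the second_hello variable
```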

Apply an SSH key to the Workspace (optional)

SSH keys can be used to clone private modules. To apply an SSH key to the workspace, specify sshKeyID in the Workspace Custom Resource. The SSH key ID can be retrieved from the Terraform Cloud API.

apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: $WORKSPACE
spec:
  sshKeyID: $SSHKEYID

Outputs

In order to retrieve Terraform outputs, specify the outputs section of the Workspace CustomResource. The key represents the output key you expect from terraform output and moduleOutputName denotes the module's output key name.

outputs:
  - key: my_pet
    moduleOutputName: pet

The above Kubernetes definition renders to the following Terraform configuration.

output "my_pet" {
  value = module.operator.pet
}

The values of the outputs can be consumed from two places:

  1. Kubernetes status of the workspace.
    $ kubectl describe -n $NAMESPACE workspace $WORKSPACE_NAME
  2. ConfigMap labeled $WORKSPACE_NAME-outputs. Kubernetes deployments can consume these output values.
    $ kubectl describe -n $NAMESPACE configmap $WORKSPACE_NAME-outputs
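As a sketch, a Deployment in the same namespace could consume an output through a configMapKeyRef; the workspace name greetings and the environment variable name are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: output-consumer
spec:
  selector:
    matchLabels:
      app: output-consumer
  template:
    metadata:
      labels:
        app: output-consumer
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "echo $MY_PET && sleep 3600"]
          env:
            - name: MY_PET
              valueFrom:
                configMapKeyRef:
                  name: greetings-outputs   # $WORKSPACE_NAME-outputs
                  key: my_pet
```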

Deploy

Deploy the workspace after configuring its module, variables, and outputs.

$ kubectl apply -n $NAMESPACE -f workspace.yml

Update a Workspace

The following changes update the workspace and trigger new runs in Terraform Cloud:

  1. organization
  2. module source or version
  3. outputs
  4. Non-sensitive variables defined inline

Updates to sensitive variables will not trigger a new execution because sensitive variables are write-only for security purposes. The operator is unable to reconcile the upstream value of the secret with the value stored locally. Similarly, ConfigMap references do not trigger updates as the operator does not read the value for comparison.

After updating the configuration, re-deploy the workspace.

$ kubectl apply -n $NAMESPACE -f workspace.yml

Delete a Workspace

When deleting the Workspace CustomResource, the command will wait for a few moments.

$ kubectl delete -n $NAMESPACE workspace.app.terraform.io/$WORKSPACE_NAME

This is because the operator runs a finalizer. Before the workspace is officially deleted, the finalizer will:

  1. Stop all runs in the workspace, including pending ones
  2. Run terraform destroy -auto-approve on resources in the workspace
  3. Delete the workspace

Once the finalizer completes, Kubernetes deletes the Workspace CustomResource.

Debugging

Check the status and outputs of the workspace by examining its Kubernetes status. This provides the run ID and workspace ID to debug in the Terraform Cloud UI.

$ kubectl describe -n $NAMESPACE workspace $WORKSPACE_NAME

When workspace creation, update, or deletion fails, check errors by examining the logs of the operator.

$ kubectl logs -n $NAMESPACE $(kubectl get pods -n $NAMESPACE --selector "component=sync-workspace" -o jsonpath="{.items[0].metadata.name}")

If Terraform Cloud returns an error that the Terraform configuration is incorrect, examine the Terraform configuration at its ConfigMap.

$ kubectl describe -n $NAMESPACE configmap $WORKSPACE_NAME

Internals

Why create a namespace and secrets?

The Helm chart does not include secrets management or injection. Instead, it expects to find secrets mounted as volumes to the operator's deployment. This supports secrets management approaches in Kubernetes that use a volume mount for secrets.

In order to support a first-class Kubernetes experience, security and access control to these secrets must be enforced by Kubernetes Role-Based Access Control (RBAC) policies.

For the Terraform Cloud Team API token, the entire credentials file containing the token is mounted at the filepath specified by TF_CLI_CONFIG_FILE. In an equivalent Kubernetes configuration, the following example creates a Kubernetes secret and mounts it to the operator at the filepath specified by TF_CLI_CONFIG_FILE.

---
# not secure secrets management
apiVersion: v1
kind: Secret
metadata:
  name: terraformrc
type: Opaque
stringData:
  credentials: |-
    credentials app.terraform.io {
      token = "$TERRAFORM_CLOUD_API_TOKEN"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: terraform-k8s
spec:
  # some sections omitted for clarity
  template:
    metadata:
      labels:
        name: terraform-k8s
    spec:
      serviceAccountName: terraform-k8s
      containers:
        - name: terraform-k8s
          env:
            - name: TF_CLI_CONFIG_FILE
              value: "/etc/terraform/.terraformrc"
          volumeMounts:
          - name: terraformrc
            mountPath: "/etc/terraform"
            readOnly: true
      volumes:
        - name: terraformrc
          secret:
            secretName: terraformrc
            items:
            - key: credentials
              path: ".terraformrc"

Similar to the Terraform Cloud API token, the Helm chart mounts workspace sensitive variables to the operator's deployment for use. It does not mount them to the Workspace Custom Resource. This ensures that only the operator has access to read and create sensitive variables as part of the Terraform Cloud workspace.

Examine the deployment in templates/sync-workspace-deployment.yaml. The deployment mounts a volume containing the sensitive variables. Each file name is a secret's key, and each file's contents are that secret's value. This supports secrets management approaches in Kubernetes that use a volume mount for secrets.

---
# not secure secrets management
apiVersion: v1
kind: Secret
metadata:
  name: workspacesecrets
type: Opaque
stringData:
  AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
  GOOGLE_APPLICATION_CREDENTIALS: ${GOOGLE_APPLICATION_CREDENTIALS}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: terraform-k8s
spec:
  # some sections omitted for clarity
  template:
    metadata:
      labels:
        name: terraform-k8s
    spec:
      serviceAccountName: terraform-k8s
      containers:
        - name: terraform-k8s
          volumeMounts:
          - name: workspacesecrets
            mountPath: "/tmp/secrets"
            readOnly: true
      volumes:
        - name: workspacesecrets
          secret:
            secretName: workspacesecrets

Helm Chart

The Helm chart consists of several components. The Kubernetes configurations associated with the Helm chart are located under crds/ and templates/.

Custom Resource Definition

Helm starts by deploying the Custom Resource Definition for the Workspace. Custom Resource Definitions extend the Kubernetes API. Helm looks for definitions in the crds/ directory of the chart.

The Custom Resource Definition under crds/app.terraform.io_workspaces_crd.yaml defines the Workspace Custom Resource schema.

Role-Based Access Control

In order to scope the operator to a namespace, Helm assigns a role and service account to the namespace. The role has access to Pods, Secrets, Services, and ConfigMaps. This configuration is located in templates/.
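A namespace-scoped Role along these lines would grant that access; this is a sketch based on the resources listed above, not the chart's exact manifest:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: terraform-k8s
rules:
  - apiGroups: [""]
    resources: ["pods", "secrets", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```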

Namespace Scope

To ensure the operator does not have access to secrets or resources beyond the namespace, the Helm chart scopes the operator's deployment to a namespace.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: terraform-k8s
spec:
  # some sections omitted for clarity
  template:
    metadata:
      labels:
        name: terraform-k8s
    spec:
      serviceAccountName: terraform-k8s
      containers:
        - name: terraform-k8s
          command:
          - /bin/terraform-k8s
          - "--k8s-watch-namespace=$(POD_NAMESPACE)"
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace

When deploying, ensure that the namespace is passed into the --k8s-watch-namespace option. Otherwise, the operator will attempt to access across all namespaces (cluster scope).

terraform-k8s's People

Contributors

aareet, alexsomesan, arybolovlev, bbbmau, dak1n1, goller, hashicorp-copywrite[bot], hashicorp-tsccr[bot], ibrandyjackson, joatmon08, jrhouston, jtyr, kgann, koikonom, kunalvalia, mcfearsome, pdecat, redeux, rojopolis, sheneska, tsunamishaun, vravind1


terraform-k8s's Issues

Looking for community feedback on the Terraform Cloud Operator for Kubernetes

Hi!

I'm Phil, the product manager for the team working on the Terraform Cloud Operator for Kubernetes provider. We're trying to determine the future of the operator, and I could use your help. If you're actively using the operator, or if you'd like to use the operator but can't, I'd like to speak with you for a few minutes to learn more about your use cases.

If that's something you're willing to do, please book time directly on my calendar.

Thanks in advance.

Non-string output crashes operator

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Adding a non-string output to a Workspace resource will cause the operator to crash.

terraform-k8s & Kubernetes Version

terraform-k8s: 0.1.0-alpha
kubernetes: v1.17.0

Affected Resource(s)

Workspace

terraform-k8s Configuration Files

apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: dns-zone-whatever-com
spec:
  organization: our-org
  secretsMountPath: "/tmp/secrets"
  module:
    source: "git::https://github.com/ryanewk/tfe-controller-terraform-modules.git//google-dns-zone"
  outputs:
  - key: zone_id
    moduleOutputName: id
  - key: name_servers
    moduleOutputName: name_servers
  variables:
  - key: project
    valueFrom:
      configMapKeyRef:
        name: dns-zone-whatever-com
        key: project
    sensitive: false
    environmentVariable: false
  - key: name
    value: whatever
    sensitive: false
    environmentVariable: false
  - key: dns_name
    value: whatever.com.
    sensitive: false
    environmentVariable: false
  - key: GOOGLE_CREDENTIALS
    sensitive: true
    environmentVariable: true
  - key: CONFIRM_DESTROY
    value: "1"
    sensitive: false
    environmentVariable: true

Debug Output

https://gist.github.com/ryanewk/3421c385de0c2a3aa8441c85d9dbc4f9

Expected Behavior

If non-string outputs are not supported, the operator should catch and log the error but it should not crash.

Actual Behavior

Operator crashes and cannot restart:

$ kubectl get pod
NAME                                                   READY   STATUS             RESTARTS   AGE
tfoperator-terraform-sync-workspace-6444c44755-dgtwc   0/1     CrashLoopBackOff   36         3h8m

Steps to Reproduce

See above Workspace resource. The presence of the name_servers output causes the operator to crash.

Important Factoids

This is a bug report but also a feature request. It would be very useful to us if the operator supported non-string outputs.

Clear up a Roadmap for first GA

Hey everyone !

This is a very promising project that I believe will help a lot of teams/companies get past the current limitation of creating/reconciling infrastructure from K8s. We saw AWS/GCP/Azure creating Service Brokers in OSBA style to put behind things such as Service Catalog (https://svc-cat.io/), but that is not going that well (complex and low maintenance). I looked into the Rancher terraform-controller but that doesn't seem to be moving (https://github.com/rancher/terraform-controller). I thought of starting something myself, but then I found this repo.

What is the current roadmap for the first Beta/Stable/GA releases?

I'm trying to see what the major gaps are and find a way to contribute to make this move faster :)

Thanks again for the great things put in place here !

Add Terraform Agent Execution mode as option in workspace CRD.


Description

Hi, first I'd like to say that this operator is quite cool.
Today it seems only possible to leverage workspace remote execution when syncing a workspace to Terraform Cloud. We need to interact with backend systems that are hosted on-premises and not exposable to the internet. This means using the Terraform Cloud agent and configuring the workspace CRD accordingly. So our ask is to add this option to the CRD.

Nic

Errors on no output

Getting an error when not setting an output on a workspace resource. Outputs look optional according to the CRD, but the operator isn't handling their absence.

validation failure list:\nspec.outputs in body must be of type array: \"null\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/rosemarywang/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/Users/rosemarywang/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/rosemarywang/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/Users/rosemarywang/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/rosemarywang/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/rosemarywang/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/rosemarywang/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}

Sensitive Variables do not update


terraform-k8s & Kubernetes Version

operator: 0.1.1-alpha
kubernetes: Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.12-eks-eb1860", GitCommit:"eb1860579253bb5bf83a5a03eb0330307ae26d18", GitTreeState:"clean", BuildDate:"2019-12-23T08:58:45Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Affected Resource(s)

It appears as if changes to sensitive variables are not applied when a workspace run is triggered by an edit (version bump). I have verified that the expected values are in the pod at the time of the run but a blank (previous) value is still being used in the subsequent run. Only after deleting the workspace and letting it be recreated, with no other changes (same pod running, no restarts), are the new values for these variables being applied.

Terraform Configuration Files

not relevant

Debug Output

not relevant

Expected Behavior

I expect the variable values to be updated when the secret is changed

Actual Behavior

The values are not updated

Steps to Reproduce

  1. Create a workspace using sensitive variables
  2. Let workspace plan/apply
  3. Change the value in the workspace secret being used
  4. Restart the deployment so the changes are picked up by the pod (this is a separate but related issue)
  5. Update workspace so a run is triggered
  6. Observe the value not changing (I used a null resource to echo the value for testing)

Important Factoids

References

Support workspace notifications


Description

Terraform Cloud can send notifications on state change events; however, these are configured per workspace. Since the Operator "owns" the workspace, it must be responsible for creating Notification Configurations.

Potential Workspace Configuration

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
---
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: greetings
spec:
  organization: hashicorp-team-demo
  secretsMountPath: "/tmp/secrets"
  notifications:
    - name: "Notify Users"
      enabled: true
      destination-type: "slack"
      token: "1234abcd"
      triggers:
        - "run:errored"
        - "run:needs_attention"
      url: "https://webhook.url"
      users:
        - [email protected]
  module:
    source: "terraform-aws-modules/sqs/aws"
    version: "2.0.0"
  outputs:
    - key: url
      moduleOutputName: this_sqs_queue_id
    - key: arn
      moduleOutputName: this_sqs_queue_arn
  variables:
    - key: name
      value: greetings.fifo
      sensitive: false
      environmentVariable: false
    - key: fifo_queue
      value: "true"
      sensitive: false
      environmentVariable: false
    - key: AWS_DEFAULT_REGION
      valueFrom:
        configMapKeyRef:
          name: aws-configuration
          key: region
      sensitive: false
      environmentVariable: true
    - key: AWS_ACCESS_KEY_ID
      sensitive: true
      environmentVariable: true
    - key: AWS_SECRET_ACCESS_KEY
      sensitive: true
      environmentVariable: true
    - key: CONFIRM_DESTROY
      value: "1"
      sensitive: false
      environmentVariable: true

References

Support SSH Key assignment in created workspace


Description

Terraform Cloud will accept a reference to an SSH key for access to private module repos; however, the operator doesn't currently expose this as a configuration option.

Potential Terraform Configuration

---
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: bucket
spec:
  sshKeyId: "abc-123"
  organization: parkside_securities
  secretsMountPath: "/tmp/secrets"
  module:
    source: "github.com/rojopolis/terraform-aws-s3-bucket.git"
  outputs:
    - key: bucket
      moduleOutputName: bucket
  variables:
    - key: bucket
      value: entertainment-720-3399
      sensitive: false
      environmentVariable: false

References

https://github.com/hashicorp/go-tfe/blob/7a20a8c2dbd17a5731c35b452b1ac756ad124157/workspace.go#L589

List this project on operatorhub


Description

This project would reach more people if it was listed on operatorhub.io

References

https://operatorhub.io/

Update TFC HCL Flag for Variables


terraform-k8s & Kubernetes Version

terraform-k8s: v0.1.2-alpha
kubernetes: 1.16.6

Affected Resource(s)

variables, specifically hcl: true or hcl: false.

Terraform Configuration Files

# Copy-paste your Terraform configuration from the operator here.
# To retrieve the configuration, use `kubectl -n $NAMESPACE describe configmap $WORKSPACE_NAME`
---
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: salutations
spec:
  organization: hashicorp-team-demo
  secretsMountPath: "/tmp/secrets"
  module:
    source: "git::https://github.com/joatmon08/queues.git"
  outputs:
    - key: queue_urls
      moduleOutputName: queue_urls
    - key: json_string
      moduleOutputName: json_string
  variables:
    - key: application
      value: goodbye
      sensitive: false
      environmentVariable: false
    - key: environment
      value: preprod
      sensitive: false
      environmentVariable: false
    - key: some_list
      value: '["hello","queues"]'
      sensitive: false
      environmentVariable: false
    - key: some_object
      value: '{hello={name= "test"}}'
      sensitive: false
      environmentVariable: false
    - key: AWS_DEFAULT_REGION
      valueFrom:
        configMapKeyRef:
          name: aws-configuration
          key: region
      sensitive: false
      environmentVariable: true
    - key: AWS_ACCESS_KEY_ID
      sensitive: true
      environmentVariable: true
    - key: AWS_SECRET_ACCESS_KEY
      sensitive: true
      environmentVariable: true
    - key: CONFIRM_DESTROY
      value: "1"
      sensitive: false
      environmentVariable: true

Debug Output

N/A

Expected Behavior

The configuration above errors. Adding hcl: true to the affected variables should update the HCL flag in TFC.

Actual Behavior

The HCL flag is not set on the variables in TFC when they are updated.

Steps to Reproduce

  1. Apply workspace.
  2. Update the configuration to properly reflect HCL types for some_object and some_list.
  3. Reapply workspace.
  4. Workspace fails because the HCL flag is not enabled on the variable.

Important Factoids

References

  • #0000

Output values are double quoted

terraform-k8s & Kubernetes Version

hashicorp/terraform-k8s:0.1.2-alpha

Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.12-eks-eb1860", GitCommit:"eb1860579253bb5bf83a5a03eb0330307ae26d18", GitTreeState:"clean", BuildDate:"2019-12-23T08:58:45Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Affected Resource(s)

OutputStatus

Terraform Configuration Files

  outputs:
  - key: aws_access_key_id
    moduleOutputName: aws_access_key_id
  - key: aws_secret_access_key
    moduleOutputName: aws_secret_access_key

Debug Output

data:
  aws_access_key_id: '"REDACTED"'
  aws_secret_access_key: '"REDACTED"'

Expected Behavior

The output values should be stored without the extra embedded double quotes.

Actual Behavior

It looks like convertValueToString adds literal double quotes (") around the value, which then probably gets quoted again when the data is serialized as YAML.

Steps to Reproduce

Run a workspace with a string output
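The suspected double-quoting is easy to reproduce outside the operator. Below is a minimal Go sketch; convertValueToString here is a stand-in for the operator's function (its real implementation is not shown in this issue), assuming the value is JSON/strconv-quoted once and then single-quoted again by the YAML serializer:

```go
package main

import (
	"fmt"
	"strconv"
)

// Stand-in for the operator's convertValueToString: JSON-style quoting
// adds literal double quotes around the string value.
func convertValueToString(v string) string {
	return strconv.Quote(v) // "REDACTED" -> "\"REDACTED\""
}

func main() {
	stored := convertValueToString("REDACTED")
	// When the Secret data is later rendered as YAML, the serializer
	// wraps the already-quoted value in single quotes, matching the
	// debug output above:
	fmt.Printf("aws_access_key_id: '%s'\n", stored)
	// prints: aws_access_key_id: '"REDACTED"'
}
```

Dropping the quoting step for plain string values (or trimming the quotes before storing) would produce the expected output.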

Not enough permissions in Org token

terraform-k8s & Kubernetes Version

v1.0.0

Affected Resource(s)

Terraform Configuration Files

# Copy-paste your Terraform configuration from the operator here.
# To retrieve the configuration, use `kubectl -n $NAMESPACE describe configmap $WORKSPACE_NAME`

Debug Output

2021-03-19T17:53:11.033Z	ERROR	controller	Reconciler error	{"controller": "workspace-controller", "name": "tf-data", "namespace": "yarblar", "error": "Error while assigning ssh key to workspace: resource not found"}

Expected Behavior

An organization token should provide enough permission for the operator to run. There doesn't appear to be much permission granularity for resources like SSH keys and agent pools, but if there were, this routine would only need list permission. This might also be an issue for upstream Terraform Cloud; I wanted to get some advice on this.

Actual Behavior

Org tokens (at least in my findings) lack the permission to list SSH Keys.

Steps to Reproduce

Run the operator with an Org token

Important Factoids

References

https://github.com/hashicorp/terraform-k8s/blob/master/workspacehelper/tfc_org.go#L205

Terraform Cloud Operator doesn't support the Azure DevOps VCS provider

terraform-k8s & Kubernetes Version

terraform-k8s: 1.0.0
kubernetes-version: 18.9.9

Affected Resource(s)

Terraform Cloud Workspace Resource:

apiVersion: app.terraform.io/v1alpha1
kind: Workspace

Terraform Cloud Workspace Configuration File

apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: kubeopsskills-dev-dns
  namespace: "terraform-cloud"
spec:
  organization: "kubeopsskills"
  secretsMountPath: "/tmp/secrets"
  module:
    source: "[email protected]:v3/kubeopsskills/IAC/huaweicloud-dns-record-set-module"
  vcs:
   token_id: "ot-wU7dcYP2JioAeMxc"
   repo_identifier: "kubeopsskills/huaweicloud-dns-record-set-module"
   ingress_submodules: false
  variables:
    - key: region
      value: "ap-southeast-2"
      sensitive: false
      environmentVariable: false
    - key: zone_id
      value: "8aace3ba76c2ccea018a193ad9325639"
      sensitive: false
      environmentVariable: false
    - key: name
      value: "test.kubeops.guru."
      sensitive: false
      environmentVariable: false
    - key: type
      value: "A"
      sensitive: false
      environmentVariable: false
    - key: records
      value: '["10.0.0.1"]'
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: ttl
      value: "300"
      sensitive: false
      environmentVariable: false
    - key: description
      value: "KubeOps Dev Domain"
      sensitive: false
      environmentVariable: false
    - key: HW_REGION_NAME
      sensitive: true
      environmentVariable: true
    - key: HW_ACCESS_KEY
      sensitive: true
      environmentVariable: true
    - key: HW_SECRET_KEY
      sensitive: true
      environmentVariable: true

Debug Output

Expected Behavior

The Terraform Cloud Operator reads the Terraform Cloud Workspace resource and pulls the Terraform module from Azure DevOps.

Actual Behavior

The Terraform Cloud Operator reads the Terraform Cloud Workspace resource but fails to pull the Terraform module from Azure DevOps, returning "Internal Error".

Steps to Reproduce

  1. Apply the Terraform Cloud Workspace resource as above
  2. In the Terraform Cloud console, no workspace appears.
  3. The Terraform Cloud Operator logs show "Internal Error".

Important Factoids

We used Helm to install the Terraform Cloud Operator, following the steps at this link

References

Azure DevOps VCS Provider

Configure Operator to Deploy CRD

Currently, the CRD is manually copied to the terraform-helm chart to be deployed by Helm. This makes it difficult to maintain the correct CRD when the schema changes. There is a possibility that we can deploy the CRD with the operator.

Use ConfigMap for Workspace Variables

In addition to hard-coding the value directly, support the valueFrom directive so a workspace variable can reference its value from a ConfigMap.

  variables:
    - key: hello
      valueFrom:
        configMapKeyRef:
          name: workspace
          key: hello
      sensitive: false
      environmentVariable: false

Terraform appears to be out of date

terraform-k8s & Kubernetes Version

terraform-k8s: 0.1.5-alpha
kubernetes: 1.15

Affected Resource(s)

Operator

Debug Output

"could not read state file, Error, state snapshot was created by Terraform v0.13.2, which is newer than current v0.12.25; upgrade to Terraform v0.13.2 or greater to work with this state"

Expected Behavior

The operator's bundled Terraform should be kept up to date with the version that wrote its state.

Actual Behavior

Most of the time a new workspace is created but no run actually starts, so I queue runs manually. Runs do start sometimes, but not consistently. Attempting to update a workspace didn't appear to work at all, though I did not check the variables in the TFC UI before deleting and starting over.

Steps to Reproduce

Try to create a workspace.

Important Factoids

Is this project alive? I wish I had the time to spend on it, it might be more encouraging if there was active development happening.

ConfigMap for Terraform Template

Currently, the Terraform template is generated in memory. Writing the template to a ConfigMap would allow debugging and re-applying the module outside of the operator.

Support "sensitive" in OutputSpec

Description

Add the ability to mark an output as sensitive to prevent output in logs

Potential Terraform Configuration

  outputs:
  - key: aws_access_key_id
    moduleOutputName: aws_access_key_id
  - key: aws_secret_access_key
    moduleOutputName: aws_secret_access_key
    sensitive: true

References

Support for Terraform Enterprise

Description

Add support for on-premises Terraform Enterprise installations, including the ability to configure the address, path, and HTTP client options.

Outputs need to support sensitive values in OutputSpec

Description

Currently, there does not appear to be a way to mark an output value as sensitive. When creating resources that generate their own sensitive values, the base Terraform module may mark the output as sensitive, but in order to retrieve that value and store it in the workspace-outputs secret, it must be explicitly requested as part of the workspace definition's OutputSpec. The OutputSpec does not support a sensitive parameter, so when the operator renders the HCL it cannot mark the output value as sensitive, and later Terraform versions (>0.14?) reject such a plan.

This issue is different from #39: while the request there seems identical, it is not solved by the fix in #80. Yes, all outputs are now stored in a Kubernetes secret, but we still can't access sensitive outputs.

Use case:

I want to create a simple AWS RDS instance, and have the password randomly generated and output so that my pods running in kubernetes can access the password for authentication. At present, I can output everything else that I need (endpoint, username etc.) except the password.

Recreating the issue

Given a simple RDS module that outputs a sensitive password value:

output "db_password" {
    value = aws_db_instance.main.password
    sensitive = true
}
resource "random_password" "password" {
  length           = 16
  special          = false
}
resource "aws_db_instance" "main" {
  allocated_storage    = "10"
  max_allocated_storage = "20"
  engine               = "postgres"
  engine_version       = "13"
  instance_class       = "db.t3.micro"
  name                 = "example"
  username             = "postgres"
  password             = random_password.password.result
  backup_retention_period = 5
  skip_final_snapshot  = true
  allow_major_version_upgrade = true
  auto_minor_version_upgrade  = true
  apply_immediately = true
}

This is then used within a workspace CRD in the following example

---
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: myOrg-psql
spec:
  organization: myOrg
  secretsMountPath: "/tmp/secrets"
  module:
    source: "[email protected]:myOrg/terraform-postgres-db.git"
  outputs:
    - key: db_password
      moduleOutputName: db_password
  variables:
    - key: AWS_DEFAULT_REGION
      value: eu-west-2
      sensitive: false
      environmentVariable: true
    - key: AWS_ACCESS_KEY_ID
      sensitive: true
      environmentVariable: true
    - key: AWS_SECRET_ACCESS_KEY
      sensitive: true
      environmentVariable: true

This is then rendered in to HCL again by the operator and submitted to the Terraform Cloud API

terraform {
    backend "remote" {
        organization = "myOrg"
        workspaces {
            name = "namespace-myOrg-psql"
        }
    }
}
output "db_password" {
    value = module.operator.db_password
}
module "operator" {
    source = "[email protected]:myOrg/terraform-postgres-db.git"
}

When this is submitted to Terraform Cloud as part of the plan however, the following error is encountered when using any later terraform version (>0.14?) due to the sensitivity of the module.operator.db_password output.

The error is as follows:

Error: Output refers to sensitive values
on main.tf line 16:
	output "db_password" {
To reduce the risk of accidentally exporting sensitive data that was intended to be only internal, Terraform requires that any root module output containing sensitive data be explicitly marked as sensitive, to confirm your intent. If you do intend to export this data, annotate the output value as sensitive by adding the following argument: sensitive = true

Running this terraform manually without the terraform operator produces the same error. The error is not with the way that the HCL is delivered to the Terraform Cloud workspace, but in the way that the HCL is rendered in the first place.

We can determine that the cause of the problem is the operator's inability to render sensitive outputs correctly, because we can take this rendered Terraform, update it, and submit it manually.

terraform {
    backend "remote" {
        organization = "myOrg"
        workspaces {
            name = "namespace-myOrg-psql"
        }
    }
}
output "db_password" {
    value = module.operator.db_password
    sensitive = true
}
module "operator" {
    source = "[email protected]:myOrg/terraform-postgres-db.git"
}

Adding the supported sensitive = true parameter directly to the terraform allows this to have a successful plan and apply operation(s) locally.

Solution

Ultimately I believe we need to support a sensitive parameter within the CRD spec (https://github.com/hashicorp/terraform-helm/blob/master/crds/app.terraform.io_workspaces_crd.yaml#L58) and add the supporting code within the operator to render the HCL correctly.

References

I believe that these are the initial attempts at solving this problem
#39
#80

Workspace CRD
https://github.com/hashicorp/terraform-helm/blob/master/crds/app.terraform.io_workspaces_crd.yaml#L58

Import existing resources

Description

We have created our own operators (e.g for AWS RDS) and have used them in production. Now we are investigating this operator and wondering how to "import" existing AWS resources with this operator (similar to the import command in Terraform CLI). The reason for it would be that we do not want to have two different tools/operators managing the same kind of resource (RDS databases in this case).

Potential Terraform Configuration

N/A

References

N/A

Workspace Name Prefixed With Namespace

Description

By default the operator prefixes workspace names with the namespace of the CR. This can't be overridden, and it makes referencing workspace state harder (you need to know the namespace). I am proposing either making this a flag in the CR, or simply turning it off by default and using only Name, letting a chart or whatever generates the CR be responsible for naming the resource.

workspace := fmt.Sprintf("%s-%s", instance.Namespace, instance.Name)

I wanted to leave the decision to current users, mostly because mistakenly changing the name of a workspace could be catastrophic. The flag approach is probably the path of least resistance; based on the outcome here, I can create a PR along with another for the chart.
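The proposed flag can be sketched as a small change to the naming logic quoted above (the omitPrefix flag is hypothetical; today the operator always prefixes):

```go
package main

import "fmt"

// workspaceName mirrors the operator's current behavior,
// fmt.Sprintf("%s-%s", instance.Namespace, instance.Name),
// with a hypothetical opt-out flag from the CR spec.
func workspaceName(namespace, name string, omitPrefix bool) string {
	if omitPrefix {
		return name
	}
	return fmt.Sprintf("%s-%s", namespace, name)
}

func main() {
	fmt.Println(workspaceName("ci", "greetings", false)) // ci-greetings
	fmt.Println(workspaceName("ci", "greetings", true))  // greetings
}
```

Defaulting omitPrefix to false would preserve existing workspace names, avoiding the catastrophic-rename concern.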

Introduce better logging

Following the guide here, Terraform workspaces are created but no run configuration is set up.

Exception:

2021-05-29T00:38:51.072Z	INFO	terraform-k8s	Checking outputs	{"Organization": "nslhub", "WorkspaceID": "ws-fkYap2hAd8ogTkxp", "RunID": ""}
2021-05-29T00:38:51.094Z	INFO	terraform-k8s	Updated outputs	{"Organization": "nslhub", "WorkspaceID": "ws-fkYap2hAd8ogTkxp"}
2021-05-29T00:38:51.094Z	INFO	terraform-k8s	Updating secrets	{"name": "greetings-outputs"}
2021-05-29T00:38:51.094Z	DEBUG	controller-runtime.manager.events	Normal	{"object": {"kind":"Workspace","namespace":"ci","name":"greetings","uid":"220cacfe-7f6c-4717-9258-ad02aec80e87","apiVersion":"app.terraform.io/v1alpha1","resourceVersion":"16909471"}, "reason": "WorkspaceEvent", "message": "Updated outputs for run "}
2021-05-29T00:38:51.564Z	INFO	terraform-k8s	Starting module backed run	{"Organization": "nslhub", "Name": "greetings", "Namespace": "ci"}
2021-05-29T00:38:51.775Z	ERROR	controller	Reconciler error	{"controller": "workspace-controller", "name": "greetings", "namespace": "ci", "error": "resource not found"}
github.com/go-logr/zapr.(*zapLogger).Error
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:246
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90

This makes it very tough to understand the issue. The following status was also observed:

Status:
  Config Version ID:  
  Run ID:             
  Run Status:         
  Workspace ID:       ws-fkYap2hAd8ogTkxp

Support arm64 architecture

Description

I would like to propose supporting multi-arch images using docker buildx to be able to support users who are leveraging arm64 and other architecture device types.

Support VCS integration with TFC

Description

Add an option to support VCS integration with TFC, so that TFC can track changes in the git repo and automatically plan/apply them per the user's configuration.

Consider how to return the output to the Kubernetes configmap when the output changes from a run not initiated by the Operator.

Potential Terraform Configuration

spec:
  vcs-repo:
    identifier: "redeux/my-terraform-repo"
    oauth-token-id: "ot-hmAyP66qk2AMVdbJ"
    branch: ""
    default-branch: true

References

https://www.terraform.io/docs/cloud/run/ui.html#automatically-starting-runs
https://www.terraform.io/docs/cloud/api/workspaces.html#update-a-workspace

Unable to delete workspace when Apply fails

terraform-k8s & Kubernetes Version

AWS EKS 1.14
Terraform 0.12.24
Operator: hashicorp/terraform-k8s:0.1.1-alpha

Affected Resource(s)

Workspace

Terraform Configuration Files

# Copy-paste your Terraform configuration from the operator here.
# To retrieve the configuration, use `kubectl -n $NAMESPACE describe configmap $WORKSPACE_NAME`
terraform {
           backend "remote" {
             organization = "parkside_securities"
         
             workspaces {
               name = "default-buckets"
             }
           }
         }
         variable "bucket" {}
         output "buckets" {
           value = module.operator.buckets
         }
         module "operator" {
           source = "[email protected]:rojopolis/terraform-aws-s3-bucket.git"
           bucket = var.bucket
         }

Debug Output

Expected Behavior

kubectl delete workspace/buckets

Actual Behavior

kubectl hangs (forever?)

sync-workspace logs:

{"level":"error","ts":1586469753.5856676,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"workspace-controller","request":"default/buckets","error":"destroy had error: <nil>","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}

Steps to Reproduce

  1. Create a Workspace in K8s that can't be applied successfully.
  2. Try to delete the Workspace resource.

Important Factoids

My config was invalid because the module in use requires a list as an input variable (#11). It seems like the transition from failed to deleted should still be valid.

References

#11

  • #0000

Client code-gen

Description

We are building an integration that converts between the Open Service Broker API and CRDs. As it's written in golang, making use of native types for resource creation and for clients would be quite helpful.

The current Workspace type does not have the +genclient annotation, which would allow integrations that use this CRD to be built on native golang bindings.

Video demo: Status Outputs Attribute and Key fields are incorrect

The output from the video demo shows

Outputs:
  Attribute: pet
  Key: my_pet
  Value: 

I believe the Attribute and Key fields are not correct as they are currently output as:

Outputs:
  Attribute: my_pet
  Key: 
  Value: adjusted-platypus

I suspect it should be

Outputs:
  Attribute: pet
  Key: my_pet
  Value: adjusted-platypus

Perhaps that's in Rosemary's module code rather than the Operator.


Override output name / namespace

Description

It would be great if it were possible to change the generated outputs:

The motivation for this is that there may be cases where we want to reference Terraform outputs from a different namespace (for example, a service residing in namespace A may use the Terraform operator running in namespace B to create a database, and then pass the database host name to the pod via a ConfigMap/Secret reference). Since the existing functionality of this operator only creates outputs in the same namespace, it is not possible to accomplish this:

You can write a Pod spec that refers to a ConfigMap and configures the container(s) in that Pod based on the data in the ConfigMap. The Pod and the ConfigMap must be in the same namespace.

Secret resources reside in a namespace. Secrets can only be referenced by Pods in that same namespace.

Potential Terraform Configuration

This functionality could be used like this:

apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: test-workspace
  namespace: terraform
spec:
  module:
    source: app.terraform.io/some_org/some_module/some_provider
    version: 1.0.0
  organization: some_org
  outputNamespace: my_namespace
  outputName: my_secret
  outputs:
  - key: some_output
    moduleOutputName: some_output
  secretsMountPath: /tmp/secrets
  variables:
  - key: some_key
    value: some_value
    sensitive: false
    environmentVariable: false

The above would create secret my_secret in the my_namespace namespace.

References

Upgrade Operator SDK to v1.x

Description

Upgrade Operator SDK to v1.x

References

https://sdk.operatorframework.io/

SSHKey fails to be set on new workspaces.

terraform-k8s & Kubernetes Version

Affected Resource(s)

Terraform Configuration Files

# Copy-paste your Terraform configuration from the operator here.
# To retrieve the configuration, use `kubectl -n $NAMESPACE describe configmap $WORKSPACE_NAME`

Debug Output

Expected Behavior

The specified SSHKey should be applied to newly created workspaces.

Actual Behavior

The specified SSHKey is only applied after the second control loop, which may cause plan/apply to fail when accessing a private git repo.

Steps to Reproduce

Important Factoids

References

  • #0000

Doesn't support AWS IRSA

As per the documentation, the Terraform AWS provider supports IRSA and IAM task roles: https://registry.terraform.io/providers/hashicorp/aws/latest/docs.

However, when using the operator with the greetings example, it doesn't seem to work.

Although the pod already has the required environment variables, when we enforce this by setting the variables in the workspace YAML, it gives a file-not-found error, even though the file exists in the pod.

Error: WebIdentityErr: failed fetching WebIdentity token: 
โ”‚ caused by: WebIdentityErr: unable to read file at /var/run/secrets/eks.amazonaws.com/serviceaccount/token
โ”‚ caused by: open /var/run/secrets/eks.amazonaws.com/serviceaccount/token: no such file or directory

Automatically manage creation / installation of agents for an organization in the cluster

Description

Now that the private agents are available for Terraform Cloud, it would be great if the controller could automatically deploy / link agents to your TFC organization.

I'm not sure if this is possible in the API yet.

Add example documentation

Description

It would be useful to have more documentation or an example to walk users through creating a workspace using the operator.

Support Custom or Community Providers

It doesn't appear to be possible to use providers that aren't available in the HashiCorp registry. When using TF Cloud, provider binaries can be included in the SCM repo under .terraform; however, the operator dynamically creates a root module, and I can't think of a way to insert a binary into the filesystem so that it is available to Terraform Cloud at init time.

Potential Terraform Configuration

# Example using the confluent-cloud provider: https://github.com/Mongey/terraform-provider-confluentcloud
locals {
  environment_id = var.create_environment ? confluentcloud_environment.this.0.id : var.environment_id
}

resource "confluentcloud_environment" "this" {
  count = var.create_environment ? 1 : 0
  name  = var.environment
}

resource "confluentcloud_kafka_cluster" "this" {
  name             = var.cluster_name
  service_provider = var.service_provider
  region           = var.region
  availability     = var.availability
  environment_id   = local.environment_id
}

resource "confluentcloud_api_key" "this" {
  cluster_id     = confluentcloud_kafka_cluster.this.id
  environment_id = local.environment_id
}

References

Module version should be optional

Currently, the module field on the CRD requires a version, but this fails when pointing to a GitHub path. We could either make this an optional field or expect users to publish modules on the module registry.
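
For context, the Terraform module block only accepts a version argument for registry sources; Git sources are pinned with a ref query parameter instead. A sketch, using a hypothetical repository:

```hcl
# Registry source: the version argument is supported.
module "from_registry" {
  source  = "app.terraform.io/example-org/example/aws"
  version = "1.0.0"
}

# Git source: version is NOT supported; pin with ?ref= instead.
module "from_github" {
  source = "git::https://github.com/example-org/terraform-example.git?ref=v1.0.0"
}
```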

Support SSH key name in CR spec

Description

Issue #25 added support for SSH key ID in the Workspace Custom Resource. This feature request builds on that work to add support for referencing SSH keys by name as well. The reason is that the SSH key ID is not visible in the TFC web console, so users must query the API to fetch it. The SSH key name is visible in the web console and is therefore a more intuitive attribute to use when adding SSH keys to the CR.
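
A sketch of what the requested spec might look like; sshKeyName is a hypothetical field, and the organization and key name are placeholders:

```yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: example
spec:
  organization: example-org
  # Hypothetical: look up the SSH key by its name as shown in the
  # TFC web console, instead of the API-only key ID.
  sshKeyName: my-vcs-ssh-key
```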

References

#29

Add integration tests

Description

Add an initial set of integration tests to cover happy path scenarios for the operator. Set them up to run on our CI.

Potential Terraform Configuration

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

References

Support for state management and execution via Terraform OSS

Description

Syncing Kubernetes workspaces to Terraform Cloud provides a first-class Kubernetes interface for updating infrastructure managed by Terraform Cloud by re-executing updates to infrastructure configuration and Terraform Cloud non-sensitive variables. The functionality depends on Terraform Cloud to ensure consistent approaches to state locking, state storage, and execution.

If you would like to see the functionality of the operator include the execution and state management capabilities of open source Terraform, please document your use case below.

References

Documentation

Non-string variables

Description

Most (if not all) of our modules have variables with a data type other than string (maps, lists, etc.). Right now, it appears the Terraform Operator only supports string-type variables. It would be great if the operator supported the other Terraform variable data types.

Potential Terraform Configuration

variables:
- key: tags
  value: <need to somehow supply a map here or via a configMapKeyRef>
  sensitive: false
  environmentVariable: false
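
Until non-string types are supported, one possible workaround is to pass the value as a JSON-encoded string and decode it inside the module or a wrapper configuration; the variable names below are illustrative:

```hcl
variable "tags_json" {
  type        = string
  description = "Tags as a JSON-encoded string, e.g. {\"team\":\"platform\"}"
}

locals {
  # Decode the string back into a map for use in resources.
  tags = jsondecode(var.tags_json)
}
```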

Invalid Attribute error

terraform-k8s & Kubernetes Version

operator: 0.1.1-alpha
k8s: Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.12-eks-eb1860", GitCommit:"eb1860579253bb5bf83a5a03eb0330307ae26d18", GitTreeState:"clean", BuildDate:"2019-12-23T08:58:45Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Affected Resource(s)

workspace

Terraform Configuration Files

terraform {
  backend "remote" {
    organization = "workarea-commerce"
    workspaces {
      name = "egdod-staging-tf-egdod-staging-helios"
    }
  }
}
variable "cluster_name" {}
variable "recaptcha_api_key" {}
variable "recaptcha_secret" {}

module "operator" {
  source = "app.terraform.io/workarea-commerce/helios/aws"
  version = "1.0.6"
  cluster_name = var.cluster_name
  recaptcha_api_key = var.recaptcha_api_key
  recaptcha_secret = var.recaptcha_secret
}

Debug Output

https://gist.github.com/mcfearsome/21990e1580f3413f8eec954ad61c4f16

Expected Behavior

I expect the workspace to get planned/applied

Actual Behavior

Throwing Invalid Attribute error.

This run was only a version bump, with no changes to the inputs or their values.

Steps to Reproduce

This is happening intermittently; I do not have a concrete set of steps. Sometimes it throws this error and I still see the plan/apply in TF Cloud; other times it just stops and never tries again.

At the very least, it would be helpful if the error output included which attribute it is having trouble with. It also seems the workspace status does not get properly updated when this happens, as it currently shows applied in my cluster.

Important Factoids

I have customized the helm chart to allow for workspace resources to be in any namespace

References

Panic in outputs from state

terraform-k8s & Kubernetes Version

hashicorp/terraform-k8s:0.1.1-alpha

Affected Resource(s)

Terraform Configuration Files

terraform {
  backend "remote" {
    organization = "****"

    workspaces {
      name = "clusters-potassium"
    }
  }
}

variable "region" {}

output "cluster_endpoint" {
  value = module.operator.cluster_endpoint
}

output "security_group_id" {
  value = module.operator.cluster_security_group_id
}

output "endpoint" {
  value = module.operator.cluster_endpoint
}

output "config_map_aws_auth" {
  value = module.operator.config_map_aws_auth
}

output "kubectl_config" {
  value = module.operator.kubectl_config
}

output "region" {
  value = module.operator.region
}

module "operator" {
  source  = "terraform-aws-modules/eks/aws//examples/launch_templates"
  version = "11.0.0"
  region  = var.region
}

Debug Output

Logs: https://gist.github.com/goller/0e80675fd1f5b57d3e53c1fa49ce38e0

With this (edited) output state:

        "config_map_aws_auth": {
            "value": [
                {
                    "binary_data": null,
                    "data": {
                        "mapAccounts": "[]\n",
                        "mapRoles": "- \"groups\":\n  - \"system:bootstrappers\"\n  - \"system:nodes\"\n  \"rolearn\": \"arn:aws:iam::****:role/****\"\n  \"username\": \"system:node:{{EC2PrivateDNSName}}\"\n",
                        "mapUsers": "[]\n"
                    },

I get a panic here:

for it := val.ElementIterator(); it.Next(); {

panic(0x19030e0, 0x1f84160)
	/opt/hostedtoolcache/go/1.13.8/x64/src/runtime/panic.go:679 +0x1b2
github.com/zclconf/go-cty/cty.Value.ElementIterator(0x1ff4d80, 0xc00083ca20, 0x0, 0x0, 0x0, 0x0)
	/home/runner/go/pkg/mod/github.com/zclconf/[email protected]/cty/value_ops.go:1038 +0x101
github.com/hashicorp/terraform-k8s/operator/pkg/controller/workspace.convertValueToString(0x1ff4d80, 0xc00083ca20, 0x0, 0x0, 0xc0007ac953, 0xb)
	/home/runner/work/terraform-k8s/terraform-k8s/operator/pkg/controller/workspace/tfc_output.go:75 +0x8d8

I added this

diff --git a/operator/pkg/controller/workspace/tfc_output.go b/operator/pkg/controller/workspace/tfc_output.go
index 28da8ca..4eb33cc 100644
--- a/operator/pkg/controller/workspace/tfc_output.go
+++ b/operator/pkg/controller/workspace/tfc_output.go
@@ -23,6 +23,9 @@ func (t *TerraformCloudClient) GetStateVersionDownloadURL(workspaceID string) (s
 }

 func convertValueToString(val cty.Value) string {
+       if val.IsNull() {
+               return ""
+       }
        ty := val.Type()
        switch {
        case ty.IsPrimitiveType():

However, another panic will follow here, as a result of "mapAccounts": "[]\n" not being a string.

I tried this diff

diff --git a/operator/pkg/controller/workspace/tfc_output.go b/operator/pkg/controller/workspace/tfc_output.go
index 28da8ca..4eb33cc 100644
--- a/operator/pkg/controller/workspace/tfc_output.go
+++ b/operator/pkg/controller/workspace/tfc_output.go
@@ -36,9 +39,10 @@ func convertValueToString(val cty.Value) string {
                                // and just allow it to be printed out directly
                                if err == nil && !ty.IsPrimitiveType() && strings.TrimSpace(val.AsString()) != "null" {
                                        jv, err := ctyjson.Unmarshal(src, ty)
-                                       if err == nil {
-                                               return jv.AsString()
+                                       if err != nil {
+                                               return ""
                                        }
+                                       return convertValueToString(jv)
                                }
                        }
                        return `"` + val.AsString() + `"`

This solution is likely not correct, but it stops the panic.

Expected Behavior

The operator needs to handle strings that contain empty JSON arrays.

I'm honestly not sure how the operator should represent the output of these embedded strings.

Actual Behavior

The operator panics and is unusable.

Steps to Reproduce

kubectl apply this:

---
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: myname
spec:
  organization: ****
  secretsMountPath: "/tmp/secrets"
  module:
    source: "terraform-aws-modules/eks/aws//examples/launch_templates"
    version: "11.0.0"
  outputs:
    - key: cluster_endpoint
      moduleOutputName: cluster_endpoint
    - key: security_group_id
      moduleOutputName: cluster_security_group_id
    - key: endpoint
      moduleOutputName: cluster_endpoint
    - key: config_map_aws_auth
      moduleOutputName: config_map_aws_auth
    - key: kubectl_config
      moduleOutputName: kubectl_config
    - key: region
      moduleOutputName: region
  variables:
    - key: region
      value: us-east-1
      sensitive: false
      environmentVariable: false
    - key: AWS_DEFAULT_REGION
      valueFrom:
        configMapKeyRef:
          name: aws-configuration
          key: region
      sensitive: false
      environmentVariable: true
    - key: AWS_ACCESS_KEY_ID
      sensitive: true
      environmentVariable: true
    - key: AWS_SECRET_ACCESS_KEY
      sensitive: true
      environmentVariable: true
    - key: CONFIRM_DESTROY
      value: "1"
      sensitive: false
      environmentVariable: true

Important Factoids

References

Similar problems for other users of cty:

hashicorp/terraform#23492
#11
#12



Add operator-sdk code generation to Makefile

Description

Currently our contributors have to remember to edit the CRDs manually when submitting a change, or run the operator-sdk generate crds command from the ./operator directory (which is only supported in newer versions of operator-sdk).

Ideally, I would like to add the code generation into the Makefile so it's automatically run when a contributor is doing local testing. That way the CRD change is automatically included in the PR.

Otherwise we rely on the contributor remembering to manually edit the CRD, and the reviewer testing code generation manually to make sure no breaking changes were introduced in the current (or prior) PR.

References

These are examples of times when code generation would have been useful to have.

Pin Terraform Version in Workspace

terraform-k8s & Kubernetes Version

v0.1.3-alpha

Terraform Configuration Files

N/A

Description

Allow pinning the Terraform version for a workspace. By default, the API creates the workspace with the latest Terraform version, which causes the operator to fail if it isn't compiled with that version.
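
One possible way to pin the version, assuming the Helm chart's syncWorkspace.terraformVersion value:

```yaml
# values.yaml sketch: pin workspaces created by the operator to a
# specific Terraform version instead of the latest.
syncWorkspace:
  terraformVersion: "0.12.24"
```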

Configurable Workspace AutoApply

Description

Currently, the Workspace is hardcoded to be set to AutoApply: true (see here and here).

I would like to configure this boolean via a root level field on the spec, autoApply:

apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: hello
spec:
  autoApply: false
  ...

It appears this change may be trivial, as instance is passed to CheckWorkspace here and so instance.Spec.AutoApply could be passed to CreateWorkspace here.

However, I am unsure whether the controller already handles patching an existing workspace. It looks like some extra checks are done here, where the TF Cloud workspace resource is further modified based on the spec, which suggests this feature is not as trivial as it first appears. To support a configurable AutoApply, the controller will need to account for spec.autoApply changing and keep it in sync with the TF Cloud resource. Should this functionality be added to the CheckWorkspace function?

Is there a design reason why this feature is not currently supported, perhaps related to the reconciliation loop? If a workspace sets autoApply to false, will the controller continue to work and simply remain in a pending status for longer?

Terraform Operator can't connect to a TFE instance, signed by a private CA

terraform-k8s, Helm, & Kubernetes Version

terraform-k8s: hashicorp/terraform-k8s:0.1.5-alpha

helm version:

version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

Kubernetes Version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-06-03T04:00:21Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Affected Resource(s)

ALL

Helm Values

# Available parameters and their default values for the Terraform chart.

# global holds values that affect multiple components of the chart.
global:
  # enabled is the master enabled/disabled setting.
  # If true, all Terraform Kubernetes integrations will be enabled.
  # If false, no components will be installed by default and per-component
  # opt-in is required, such as by setting `syncWorkspace.enabled` to true.
  enabled: true

  # imageK8S is the name (and tag) of the terraform-k8s Docker image that
  # is used for functionality such as workspace sync. This can be overridden
  # per component.
  imageK8S: "hashicorp/terraform-k8s:0.1.5-alpha"

# syncWorkspace will run the workspace sync process to sync K8S Workspaces with
# Terraform Cloud workspaces.
#
# This process will deploy a Workspace Custom Resource Definition and
# Terraform Cloud Workspace Operator to the Kubernetes cluster. It requires
# a Kubernetes secret for Terraform Cloud access and sensitive variables.
syncWorkspace:
  # True if you want to enable the workspace sync. Set to "-" to inherit from
  # global.enabled.
  enabled: "-"
  image: null

  # k8WatchNamespace is the Kubernetes namespace to watch for workspace
  # changes and sync to Terraform Cloud. If this is not set then it will default
  # to the release namespace
  k8WatchNamespace: null

  # terraformVersion describes the version of Terraform to use for each workspace.
  # If this is not set then it will default to the latest version of Terraform
  # compiled with the operator.
  terraformVersion: latest
  # tfeAddress denotes the address in the form of https://tfe.local for
  # a Terraform Enterprise instance. If this is not set then it will default
  # to Terraform Cloud (https://app.terraform.io)
  tfeAddress: https://mytfeinstance.com

  # Log verbosity level. One of "trace", "debug", "info", "warn", or "error".
  # Defaults to "info".
  logLevel: null

  # Name and key of Kubernetes secret containing the Terraform CLI Configuration
  # Must have Terraform Cloud Team API Token 
  terraformRC:
    secretName: terraformrc
    secretKey: credentials

  # Name of Kubernetes secret containing keys and values of sensitive variables 
  sensitiveVariables:
    secretName: workspacesecrets

# Control whether a test Pod manifest is generated when running helm template.
# When using helm install, the test Pod is not submitted to the cluster so this
# is only useful when running helm template.
tests:
  enabled: true
  organization: tf-operator
  # moduleSource defaults to this repository.
  # moduleSource: git::https://github.com/hashicorp/terraform-helm.git//test/module

Debug Output

https://gist.github.com/CodyKurtz/d33d0f5db20df82bc209c90c893378bf

Expected Behavior

There should be a way to mount a private CA certificate so that the Terraform Operator can accept a TLS connection to a Terraform Enterprise instance whose certificate is signed by a private CA.

Actual Behavior

TF Operator cannot connect to TFE.

Steps to Reproduce

Deploy the TF Operator against a TFE instance that has a certificate signed by a private CA.
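
In the absence of first-class chart support, one generic workaround is to mount the CA bundle into the operator container over the system trust store. This is only a sketch; the ConfigMap and key names are placeholders, and whether overlaying /etc/ssl/certs works depends on the base image:

```yaml
# Deployment patch sketch: trust a private CA by mounting a
# ConfigMap (assumed to hold the PEM bundle under the key ca.crt).
spec:
  template:
    spec:
      containers:
        - name: terraform-k8s
          volumeMounts:
            - name: private-ca
              mountPath: /etc/ssl/certs/private-ca.pem
              subPath: ca.crt
              readOnly: true
      volumes:
        - name: private-ca
          configMap:
            name: private-ca
```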

Important Factoids

None

References
