terragrunt-infrastructure-live-example's Introduction

Maintained by Gruntwork.io

Example infrastructure-live for Terragrunt

This repo, along with the terragrunt-infrastructure-modules-example repo, shows an example file/folder structure you can use with Terragrunt to keep your Terraform and OpenTofu code DRY. For background information, check out the Keep your code DRY section of the Terragrunt documentation.

This repo shows an example of how to use the modules from the terragrunt-infrastructure-modules-example repo to deploy an Auto Scaling Group (ASG) and a MySQL DB across three environments (qa, stage, prod) and two AWS accounts (non-prod, prod), all with minimal duplication of code. That's because there is just a single copy of the code, defined in the terragrunt-infrastructure-modules-example repo, and in this repo, we solely define terragrunt.hcl files that reference that code (at a specific version, too!) and fill in variables specific to each environment.

Be sure to read through the Terragrunt documentation on DRY Architectures to understand the features of Terragrunt used in this folder organization.

Note: This code is solely for demonstration purposes. This is not production-ready code, so use at your own risk. If you are interested in battle-tested, production-ready Terraform code, check out Gruntwork.

How do you deploy the infrastructure in this repo?

Pre-requisites

  1. Install OpenTofu version 1.6.0 or newer and Terragrunt version v0.52.0 or newer.
  2. Update the bucket parameter in the root terragrunt.hcl. We use S3 as a Terraform backend to store your state, and S3 bucket names must be globally unique. The name currently in the file is already taken, so you'll have to specify your own. Alternatively, you can set the environment variable TG_BUCKET_PREFIX to set a custom prefix (see the sketch after this list).
  3. Update the account_name and aws_account_id parameters in non-prod/account.hcl and prod/account.hcl with the names and IDs of the accounts you want to use for non-production and production workloads, respectively.
  4. Configure your AWS credentials using one of the supported authentication mechanisms.
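
For reference, here is a minimal sketch of the remote_state block in the root terragrunt.hcl that these steps refer to. The bucket name below is a made-up placeholder, not the value actually used in the repo:

remote_state {
  backend = "s3"
  config = {
    encrypt        = true
    # TG_BUCKET_PREFIX (if set) is prepended to the bucket name
    bucket         = "${get_env("TG_BUCKET_PREFIX", "")}my-unique-prefix-${local.account_name}-${local.aws_region}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = local.aws_region
    dynamodb_table = "terraform-locks"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}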

Deploying a single module

  1. cd into the module's folder (e.g. cd non-prod/us-east-1/qa/mysql).
  2. Note: if you're deploying the MySQL DB, you'll need to configure your DB password as an environment variable: export TF_VAR_master_password=(...).
  3. Run terragrunt plan to see the changes you're about to apply.
  4. If the plan looks good, run terragrunt apply.

Deploying all modules in a region

  1. cd into the region folder (e.g. cd non-prod/us-east-1).
  2. Configure the password for the MySQL DB as an environment variable: export TF_VAR_master_password=(...).
  3. Run terragrunt run-all plan to see all the changes you're about to apply.
  4. If the plan looks good, run terragrunt run-all apply.

Testing the infrastructure after it's deployed

After each module is finished deploying, it will write a bunch of outputs to the screen. For example, the ASG will output something like the following:

Outputs:

asg_name = tf-asg-00343cdb2415e9d5f20cda6620
asg_security_group_id = sg-d27df1a3
elb_dns_name = webserver-example-prod-1234567890.us-east-1.elb.amazonaws.com
elb_security_group_id = sg-fe62ee8f
url = http://webserver-example-prod-1234567890.us-east-1.elb.amazonaws.com:80

A minute or two after the deployment finishes, once the servers in the ASG have passed their health checks, you should be able to test the url output in your browser or with curl:

curl http://webserver-example-prod-1234567890.us-east-1.elb.amazonaws.com:80

Hello, World

Similarly, the MySQL module produces outputs that will look something like this:

Outputs:

arn = arn:aws:rds:us-east-1:1234567890:db:tofu-00d7a11c1e02cf617f80bbe301
db_name = mysql_prod
endpoint = tofu-1234567890.abcdefghijklmonp.us-east-1.rds.amazonaws.com:3306

You can use the endpoint and db_name outputs with any MySQL client:

mysql --host=tofu-1234567890.abcdefghijklmonp.us-east-1.rds.amazonaws.com --port=3306 --user=admin --password mysql_prod

How is the code in this repo organized?

The code in this repo uses the following folder hierarchy:

account
 └ _global
 └ region
    └ _global
    └ environment
       └ resource

Where:

  • Account: At the top level are each of your AWS accounts, such as stage-account, prod-account, mgmt-account, etc. If you have everything deployed in a single AWS account, there will just be a single folder at the root (e.g. main-account).

  • Region: Within each account, there will be one or more AWS regions, such as us-east-1, eu-west-1, and ap-southeast-2, where you've deployed resources. There may also be a _global folder that defines resources that are available across all the AWS regions in this account, such as IAM users, Route 53 hosted zones, and CloudTrail.

  • Environment: Within each region, there will be one or more "environments", such as qa, stage, etc. Typically, an environment will correspond to a single AWS Virtual Private Cloud (VPC), which isolates that environment from everything else in that AWS account. There may also be a _global folder that defines resources that are available across all the environments in this AWS region, such as Route 53 A records, SNS topics, and ECR repos.

  • Resource: Within each environment, you deploy all the resources for that environment, such as EC2 Instances, Auto Scaling Groups, ECS Clusters, Databases, Load Balancers, and so on. Note that the code for most of these resources lives in the terragrunt-infrastructure-modules-example repo.

Creating and using root (account) level variables

In the situation where you have multiple AWS accounts or regions, you often have to pass common variables down to each of your modules. Rather than copy/pasting the same variables into each terragrunt.hcl file, in every region, and in every environment, you can inherit them from the inputs defined in the root terragrunt.hcl file.
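
As a rough sketch (assuming the account.hcl / region.hcl / env.hcl layout used throughout this repo), the root terragrunt.hcl reads those files once and merges their locals into inputs, and each child configuration simply includes the root:

# Root terragrunt.hcl (sketch)
locals {
  account_vars     = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars      = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

# Merge the account, region, and environment locals into inputs that every
# child module inherits automatically.
inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.environment_vars.locals,
)

# Child terragrunt.hcl (sketch): inherit everything from the root
include "root" {
  path = find_in_parent_folders()
}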

terragrunt-infrastructure-live-example's Issues

How to keep variable declarations DRY

First of all, I would like to thank you for this tool.

Problem

I am having a problem trying to keep the variables.tf file DRY. Here's my directory structure:

.
├── common.shared.yaml
├── iam
│   ├── role-01
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── terragrunt.hcl
│   │   └── variables.tf
│   └── role-02
│       ├── main.tf
│       ├── outputs.tf
│       ├── terragrunt.hcl
│       └── variables.tf
└── terragrunt.hcl

In my root terragrunt.hcl:

...

locals {
  common_vars_yaml = "common.shared.yaml"
}

inputs = merge(
  yamldecode(
    file(
      find_in_parent_folders(local.common_vars_yaml)
    )
  )
)

...

Even though I import these inputs in every terragrunt.hcl for each service, I still need to repeat all variable declarations.

Example

# common.shared.yaml
obj:
  str: value
# variables.tf
variable "obj" {
  type = object({
    str = string
  })
}

The last part above (variables.tf) is copied and pasted for every service that needs variables present in common.shared.yaml.

How could I solve it? Are there any patterns or built-in functions?
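
One possible pattern (not from the original thread, just a hedged sketch) is to have the root terragrunt.hcl emit the shared variable declarations with a generate block, so each service no longer needs its own copy of variables.tf for the common values:

# Root terragrunt.hcl (sketch): write a shared_variables.tf into every module
# so the declarations matching common.shared.yaml live in one place.
generate "shared_variables" {
  path      = "shared_variables.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
variable "obj" {
  type = object({
    str = string
  })
}
EOF
}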

How does CI/CD fit into all of this?

I think this is still an open problem today.

Given that modules are stored in another repo, integrating all of this with Jenkins would look like:

  1. new branch on modules repo
  2. commit and submit PR
  3. Jenkins triggers and comments the output of terraform plan on the PR
  4. PR gets merged

We can suppose that in step 3, terragrunt plan is run under /non-prod/us-east-1/stage because that's where we deploy stuff under development. So we know the environment, but not the module: is it mysql or webserver-cluster?

Furthermore, if we want to deploy stuff in /non-prod/us-east-1/qa, how are we going to do that via Jenkins? Should I have a separate job just to run the plan/apply command? The same issue arises: how does Jenkins know which module to plan/apply?

VPC for multi-regions?

I understand keeping the examples simple, but I'm curious how to handle VPCs for each region. Is there an example of how I would be able to deploy VPCs for two regions within the same account?

--terragrunt-source-map seems to be ignored

Describe the bug
I don't know how to use the --terragrunt-source-map param, it seems like it's completely ignored for me.

To Reproduce
We are using private repositories hosted in azure devops. The code to include another repo looks like this:

module "core_api_specification" {
  source = "git::[email protected]:v3/<organization>/<project>/<repo>//<subpath>?ref=feature/random-feature"
  ... 
}

So far so good. I'm able to get the code with terragrunt, and can plan and apply. But when I try to overwrite this source path with --terragrunt-source-map git::git@ssh.dev.azure.com:v3/<organization>/<project>/<repo>=/abs/path/to/local/repo//<subpath>, it is completely ignored. I tried basically all permutations of how you might change the URL, but the output from --terragrunt-log-level debug never contains anything from the second part (the path that should be used instead of the URL).

Expected behavior
I expect it to actually use the local path. Could you help me understand what I'm doing wrong?

info
terragrunt version v0.38.4
Terraform v1.2.4 on darwin_arm64

Deploy single module tries to destroy other modules

I am seeing strange behavior while migrating this example to the Google Cloud provider. I have the following structure:
infra.zip

Now in modules there is a folder cloud-run/save-image-fnc. When I run terragrunt plan and apply there to deploy only that module, it tries to delete the storage-bucket module as well... any idea?

Can't use aws profile without setting AWS_PROFILE environment variable

Hi,
Since I use the aws_profile variable in account.hcl, I would like that profile to be used when running terragrunt without having to set an AWS_PROFILE environment variable. I added it to the S3 remote_state backend config in the root terragrunt.hcl file, but it still doesn't work without setting the AWS_PROFILE environment variable.
Since the aws_profile variable is specified, I am guessing this should be a working feature in the project. I appreciate the project and any help with this issue, thank you.

This is the error I get:

Initializing modules...

Initializing the backend...

Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.

Please see https://www.terraform.io/docs/backends/types/s3.html
for more information about providing credentials.

Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors

[terragrunt] 2020/10/23 23:49:20 Hit multiple errors:
exit status 1

Attached my code:

locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  account_name = local.account_vars.locals.account_name
  account_id   = local.account_vars.locals.aws_account_id
  aws_profile  = local.account_vars.locals.aws_profile
  aws_region   = local.region_vars.locals.aws_region
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  region = "${local.aws_region}"
  profile = "${local.aws_profile}"
  allowed_account_ids = ["${local.account_id}"]
}
EOF
}

remote_state {
  backend = "s3"
  config = {
    profile        = local.aws_profile
    encrypt        = true
    bucket         = "${get_env("TG_BUCKET_PREFIX", "")}blablabla777-${local.account_name}-${local.aws_region}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = local.aws_region
    dynamodb_table = "terraform-locks"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.environment_vars.locals,
)

Fix or remove CircleCI build

Currently we've got a Github check running the project in CircleCI, but we don't actually have it configured. As part of this issue, we should:

  • make a decision on whether we want a CircleCI pipeline to run - e.g. some terraform validate steps, or some other relevant steps
    • if yes, then add a configuration file & make sure it passes
    • if no, then disable this check for the project from CircleCI here

Why?

Currently, some PRs will appear with a broken build because of the missing CircleCI configuration. This is not a big problem, so this task is more a housekeeping type of job.

[QUESTION] Multi region deployment

I'm using Terragrunt and it is amazing. This repo was very useful indeed when setting up my own.

That being said, I got confused by one particular design choice. Considering the tree below:

prod/
├── terragrunt.hcl
└── us-east-1/
    └── prod/
        ├── mysql/
        │   └── terragrunt.hcl
        └── webserver-cluster/
            └── terragrunt.hcl

And given the fact that the region is defined one folder level above us-east-1:

inputs = {
  aws_region = "us-east-1"
}

I started wondering what would happen if two regions were to be used and how to go about duplicating the structure while maintaining the DRY aspect of Terragrunt.

The first thing I wondered was if I could add a 3-line terragrunt.hcl under prod/us-east-1 that would only define inputs.aws_region, but given the fact find_in_parent_folders() gets the very first file it finds, that would never work...

From my point of view, the current way to duplicate a region would just be to duplicate the first-level prod folder all the way down to every module, but nevertheless I do think having multiple aws_region variables under different prod/some-region folders doesn't solve the whole problem either...

I guess the question per se is how to go about a multi-region deployment. Since I fail to see a good enough solution, how does Terragrunt solve it? :)
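
For what it's worth, the layout this repo uses today handles this by giving each region folder its own small region.hcl, which the root terragrunt.hcl reads with read_terragrunt_config; a sketch:

# prod/us-east-1/region.hcl (sketch) - one file per region folder
locals {
  aws_region = "us-east-1"
}

# prod/us-west-2/region.hcl would set aws_region = "us-west-2" instead.
# Root terragrunt.hcl (sketch): pick up whichever region.hcl is closest.
locals {
  region_vars = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  aws_region  = local.region_vars.locals.aws_region
}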

Must repeat local blocks in terragrunt.hcl

Thank you for Terragrunt. It is a fine tool.

It seems to me that local variable definitions are not so DRY. For example, in the sample presented here, the reading of env.hcl and the extraction of variable env is repeated in prod/us-east-1/prod/mysql/terragrunt.hcl and non-prod/us-east-1/qa/webserver-cluster/terragrunt.hcl:

locals {
  # Automatically load environment-level variables
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))

  # Extract out common variables for reuse
  env = local.environment_vars.locals.environment
}

Is there no way to put this in the root terragrunt.hcl so that all configurations can reference a single definition of local.env?
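
As a hedged aside (not part of the original issue): newer Terragrunt releases let a child expose the included root config, so the root's locals can be referenced directly instead of re-reading env.hcl in every child. A sketch, assuming a Terragrunt version that supports expose:

# Child terragrunt.hcl (sketch)
include "root" {
  path   = find_in_parent_folders()
  expose = true
}

inputs = {
  # Reach into the root's locals instead of calling read_terragrunt_config again
  env = include.root.locals.environment_vars.locals.environment
}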

Static lock table

I noticed that lock_table is hard coded. Why not have it based on the environment (prd, dev, qa) so that you can work on these in a parallel fashion?

Dependencies among common modules

Describe the solution you'd like
I would like to have the possibility to reference two modules in _envcommon folder through dependenc[y|ies] block.

Describe alternatives you've considered
I have considered having such dependencies at the Terraform module level; however, this takes away the possibility of using standardized modules.

Additional context
When I try to add a dependency to _envcommon like:
_envcommon/dependencies/vpc/terragrunt.hcl
And in my _envcommon/alb.hcl I state a dependency like:

dependency "vpc" {
  config_path = "${dirname(find_in_parent_folders())}/dependencies/vpc/"
}

It resolves to: /work/_envcommon/dependencies/vpc/terragrunt.hcl but fails to load env.hcl:
Call to function "find_in_parent_folders" failed: ParentFileNotFound: Could not find a env.hcl in any of the parent folders of /work/_envcommon/dependencies/vpc/terragrunt.hcl

Terraform Vars Cascade Processing

Does it make sense to process terraform.tfvars files in all parent directories? Consider the following example:

prod
  us-east-1
    prod
      mysql
        terraform.tfvars # 1 (particular module level variables)
    terraform.tfvars # 2 (region level variable)
  terraform.tfvars # 3 (account level variables)

Currently, there is only find_in_parent_folders(), which picks the first found terraform.tfvars file in the parent directories. In the example above we can merge only 1 and 2. It would be nice to adjust the function or create a new one (to not break backward compatibility), which would merge 1, 2, and 3.

People achieve the described behavior with hooks, but I think that should be in the "core".

Need to add one more directory layer but get error: Only one level of includes is allowed

So I extended your structure a bit more to be able to construct more complex scenarios:

terragrunt.hcl
non-prod
   |---ap-northeast-2
      |---account.hcl
      region
         |---region.hcl
         env
            |---env.hcl
            |--- aws_dms
                |---terragrunt.hcl
                |---aws_kms
                    |---terragrunt.hcl
                |---aws_iam_role
                    |---terragrunt.hcl

For basic testing, every hcl file includes only the following block:

include "root" {
  path = find_in_parent_folders()
}

but when I run terragrunt run-all plan from the aws_dms folder I get this error:

[09:18:00] zangetsu@zeus  $       /data/proj/kidsloop/terragrunt-dms-demo/non-prod/ap-northeast-2/dev/dms   main  tg run-all plan
ERRO[0000] Error processing module at '/data/proj/kidsloop/terragrunt-dms-demo/non-prod/ap-northeast-2/dev/dms/kms/terragrunt.hcl'. How this module was found: Terragrunt config file found in a subdirectory of /data/proj/kidsloop/terragrunt-dms-demo/non-prod/ap-northeast-2/dev/dms. Underlying error: /data/proj/kidsloop/terragrunt-dms-demo/non-prod/ap-northeast-2/dev/dms/kms/terragrunt.hcl includes /data/proj/kidsloop/terragrunt-dms-demo/non-prod/ap-northeast-2/dev/dms/terragrunt.hcl, which itself includes /data/proj/kidsloop/terragrunt-dms-demo/non-prod/ap-northeast-2/dev/dms/terragrunt.hcl. Only one level of includes is allowed. 
ERRO[0000] Unable to determine underlying exit code, so Terragrunt will exit with error code 1 

Error: Terraform initialized in an empty directory

I am trying to run an example that is mostly a copy of this repo. My repo is local so I am using the argument on terragrunt plan as --terragrunt-source /Users/me/Documents/other/projectx/projectx-terraform-modules. I keep getting issues as below. I have shown the directory structure and I have also gone through the temporary output.

Terraform initialized in an empty directory!

The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
[terragrunt] 2018/01/21 21:53:00 Running command: terraform plan

Error: No configuration files found!

My live repo

└── non-prod
    ├── account.tfvars
    ├── terraform.tfvars
    └── us-east-1
        └── stage
            └── postgresql
                └── terraform.tfvars

My modules repo

├── postgresql
│   ├── main.tf
│   ├── outputs.tf
│   └── vars.tf
└── vpc
    ├── main.tf
    ├── outputs.tf
    └── vars.tf

Temp directory created by terragrunt

└── iASxrJUmUlXYPnHVRN4NFMJU39g
    ├── account.tfvars
    ├── postgresql
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── vars.tf
    ├── terraform.tfvars
    ├── us-east-1
    │   └── stage
    │       ├── postgresql
    │       │   └── terraform.tfvars
    │       └── postgresql.tfvars
    └── vpc
        ├── main.tf
        ├── outputs.tf
        └── vars.tf

Add structure for additional cloud providers.

Hi

I'd like to see what the recommended folder structure would look like if additional cloud providers were to be used. Some environment parameters could be consistent across clouds.

I think some scenarios may be difficult to represent. For example, maybe I have an EKS cluster in prod on AWS and a GKE cluster in prod on GCP. But perhaps I wish to use Terraform to deploy the same Helm chart or K8s resources to both clusters. How could I best represent that, where the Helm values could be shared across clouds but also overridden with cloud-specific values?

Thank you

Using Output From Module A as Input into Module B

I have tried this pattern but seem to encounter an issue trying to use an output from a module as an input for another module in terraform.tfvars.

For example in the db terraform.tfvars, using this:

db_subnet_group_name		  = "${module.vpc.db_subnet_group_name}"

throws the error:
Underlying error: Invalid interpolation syntax. Expected syntax of the form '${function_name()}', but got '${module.vpc.db_subnet_group_name}'
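
As a hedged note: with current Terragrunt (terragrunt.hcl rather than terraform.tfvars), this kind of wiring is usually done with a dependency block instead of module interpolation, which only works inside a Terraform module. A sketch, using made-up paths:

# db/terragrunt.hcl (sketch)
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  # Pass the VPC module's output into the DB module as an input
  db_subnet_group_name = dependency.vpc.outputs.db_subnet_group_name
}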

Terragrunt Validate-All fails

Thank you for all the effort put into this tool and for making it open source.

We are new to terraform/terragrunt and find that terragrunt validate-all fails.

$ terragrunt --version
terragrunt version v0.25.1

Is this expected?
If so is there a workaround?

$ git clone https://github.com/gruntwork-io/terragrunt-infrastructure-live-example.git

<elided>
$ cd terragrunt-infrastructure-live-example
$ terragrunt validate-all

Error: Error in function call

  on /home/<redacted>/src/terragrunt-infrastructure-live-example/terragrunt.hcl line 15, in locals:
  15:   environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))

Call to function "find_in_parent_folders" failed: ParentFileNotFound: Could not find a env.hcl in any of the parent folders of /home/<redacted>/src/terragrunt-infrastructure-live-example/terragrunt.hcl. Cause: Traversed all the way to the root..

[terragrunt] [/home/<redacted>/src/terragrunt-infrastructure-live-example] 2020/10/12 20:16:59 Encountered error while evaluating locals.
[terragrunt] 2020/10/12 20:16:59 Error processing module at '/home/<redacted>/src/terragrunt-infrastructure-live-example/terragrunt.hcl'. How this module was found: Terragrunt config file found in a subdirectory of /home/<redacted>/src/terragrunt-infrastructure-live-example. Underlying error: /home/<redacted>/src/terragrunt-infrastructure-live-example/terragrunt.hcl:15,45-68: Error in function call; Call to function "find_in_parent_folders" failed: ParentFileNotFound: Could not find a env.hcl in any of the parent folders of /home/<redacted>/src/terragrunt-infrastructure-live-example/terragrunt.hcl. Cause: Traversed all the way to the root..

etc.

Need to update readme and bucket pre-requisites

Hello TerraGrunts,

Looks like README.md needs to be updated to reflect the latest commit, which merged and moved terragrunt.hcl to the root directory.
README.md
Lines 31-34

Update the bucket parameter in non-prod/terragrunt.hcl and prod/terragrunt.hcl to unique names. We use S3 as a Terraform backend to store your Terraform state, and S3 bucket names must be globally unique. The names currently in the file are already taken, so you'll have to specify your own. Alternatively, you can set the environment variable TG_BUCKET_PREFIX to set a custom prefix.

Thank you!
darpham

Why need prod under us-east-1?

├── prod
│   ├── account.hcl
│   └── us-east-1
│       ├── prod
│       │   ├── env.hcl
│       │   ├── mysql
│       │   │   └── terragrunt.hcl
│       │   └── webserver-cluster
│       │       └── terragrunt.hcl
│       └── region.hcl

Why do we need prod under us-east-1? (us-east-1/prod)

What is the use case for this?

I can imagine having services in prod in multiple regions, but why separate what is in prod or not under each region?

[QUESTION] Dependency in _global folder

Hi,

I'm working on implementing this repository structure for my own project.
The documentation states that there can be a _global folder for resources like IAM.

My question is: how can I implement a dependency from a resource in an environment folder on a resource in the _global folder?

My terragrunt.hcl in the environment folder has the following config

.
.
.
include {
  path = find_in_parent_folders()
}

# Set Module dependencies
dependencies {
  paths =  [
    "../../../_global/tf-iam"
    ]
}

When running the plan-all command in the environment folder, I get the following error message.

Module /home/ec2-user/aws-account-name/eu-central-1/dev/tf-ec2 specifies ../../../_global/tf-iam as a dependency, but that dependency was not one of the ones found while scanning subfolders:

Any help would be appreciated.
Thanks!

Not able to retrieve a local value from parent terragrunt.hcl file

Describe the bug
Hello, I'm using a custom module for S3 where I can create multiple buckets, each with different features, from a single terragrunt file. Our setup is as follows:

--- terragrunt.hcl (root terragrunt file, where we have defined the providers, remote states)
--- common.hcl (common variables)
----- environments/
---------- aws01/
------------- account.hcl (where we define the account name variables in locals)
------------- s3/
-----------------terragrunt.hcl (child terragrunt.hcl file)
----------------- buckets.hcl

And the problem I have is that I cannot retrieve, in buckets.hcl, a local variable that is defined in the child terragrunt.hcl file.

child terragrunt.hcl file contains:

# ------------------------------------------------------------------------------------------
# Set source terraform module
# ------------------------------------------------------------------------------------------
terraform {
  source = "XXXXXXXXXXXXXXXXXXXX"
}
# ------------------------------------------------------------------------------------------
# Run Default terragrunt (create providers and define remote state)
# ------------------------------------------------------------------------------------------
include {
  path = find_in_parent_folders()
}
# ------------------------------------------------------------------------------------------
# Load common values
# ------------------------------------------------------------------------------------------
locals {

  account_details = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  common_vars     = read_terragrunt_config(find_in_parent_folders("common.hcl"))

  account_name = local.account_details.locals.account_name
  account_id   = local.common_vars.locals.accounts[local.account_name]

  s3_buckets_raw = read_terragrunt_config("./buckets.hcl")
  s3_buckets     = local.s3_buckets_raw.locals.s3_buckets
}
# ------------------------------------------------------------------------------------------
# Specify module input variables
# ------------------------------------------------------------------------------------------
inputs = {
  s3_buckets = local.s3_buckets
}

And buckets.hcl file:

locals {
  common_vars     = read_terragrunt_config("./terragrunt.hcl")
  account_id   = local.common_vars.locals.account_id

  s3_buckets = {
    XXXXXXXXXXXX = {
      name                   = "XXXXXXXXX"
      force_destroy          = true
      tags = {}
      versioning_status      = "Disabled"
      public_access_block    = {
        block_public_acls       = true
        block_public_policy     = true
        ignore_public_acls      = true
        restrict_public_buckets = true
      }
      ownership_controls     = {
        object_ownership = "BucketOwnerPreferred"
      }
      lifecycle_rules        = []
      lambda_details         = {}
      bucket_policy = {
        statements = [{
          sid       = "AllowInspector"
          effect    = "Allow"
          actions   = [
            "s3:PutObject", 
            "s3:PutObjectAcl", 
            "s3:AbortMultipartUpload"
          ]
          resources = [
            "arn:aws:s3:::XXXXXXXXXXXXX/*"
          ]
          principals = [
            {
              type        = "Service"
              identifiers = ["inspector2.amazonaws.com"]
            }
          ]
          condition = [
            {
              test    = "StringEquals"
              variable = "aws:SourceAccount"
              values = [
                (local.account_id)
              ]
            },
          ]
        }]
      }
    },
  }
}

I'm trying to retrieve local.account_id, but when I run terragrunt plan it crashes.

Running with following OS:

$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
SUPPORT_END="2025-06-30"

Expected behavior
I would expect terragrunt to be able to retrieve the value of my local account_id in the buckets.hcl file, but I get this error:

runtime: sp=0xc10dc00f48 stack=[0xc10dc00000, 0xc12dc00000]
fatal error: stack overflow

runtime stack:
runtime.throw({0x16065de?, 0x23091c0?})
	/usr/local/go/src/runtime/panic.go:1047 +0x5d fp=0xc000691e18 sp=0xc000691de8 pc=0x435f3d
runtime.newstack()
	/usr/local/go/src/runtime/stack.go:1105 +0x5bd fp=0xc000691fc8 sp=0xc000691e18 pc=0x44fdbd
runtime.morestack()
	/usr/local/go/src/runtime/asm_amd64.s:574 +0x8b fp=0xc000691fd0 sp=0xc000691fc8 pc=0x46700b

goroutine 1 [running]:
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseExpressionTerm(0xc10dc2adf8)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:944 +0x2429 fp=0xc10dc00f58 sp=0xc10dc00f50 pc=0x694889
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseExpressionWithTraversals(0x0?)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:619 +0x25 fp=0xc10dc00fd8 sp=0xc10dc00f58 pc=0x68d125
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0x0?, {0xc000122e08?, 0x0?, 0x0?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:549 +0x139 fp=0xc10dc01508 sp=0xc10dc00fd8 pc=0x68c5d9
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122e08?, 0x1?, 0x1?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc01a38 sp=0xc10dc01508 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122e00?, 0x0?, 0x0?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc01f68 sp=0xc10dc01a38 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122df8?, 0x60?, 0x2e?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc02498 sp=0xc10dc01f68 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122df0?, 0x40b30b?, 0x15440e0?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc029c8 sp=0xc10dc02498 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122de8?, 0xc18f4c2a20?, 0x60?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc02ef8 sp=0xc10dc029c8 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122de0?, 0x5?, 0x14c?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc03428 sp=0xc10dc02ef8 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseTernaryConditional(0xc10dc2adf8)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:495 +0xfe fp=0xc10dc039c8 sp=0xc10dc03428 pc=0x68b89e
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).ParseExpression(...)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:479
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseExpressionTerm(0xc10dc2adf8)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:953 +0x233 fp=0xc10dc04920 sp=0xc10dc039c8 pc=0x692693
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseExpressionWithTraversals(0xc000680000?)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:619 +0x25 fp=0xc10dc049a0 sp=0xc10dc04920 pc=0x68d125
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0x13f5c80?, {0xc000122e08?, 0x5d?, 0x20c?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:549 +0x139 fp=0xc10dc04ed0 sp=0xc10dc049a0 pc=0x68c5d9
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122e08?, 0x5d?, 0x20c?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc05400 sp=0xc10dc04ed0 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122e00?, 0x5d?, 0xc10dc05a00?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc05930 sp=0xc10dc05400 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122df8?, 0x5d?, 0x0?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc05e60 sp=0xc10dc05930 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122df0?, 0x5d?, 0x20c?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc06390 sp=0xc10dc05e60 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122de8?, 0x5d?, 0x20c?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc068c0 sp=0xc10dc06390 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122de0?, 0x1?, 0x14d?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc06df0 sp=0xc10dc068c0 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseTernaryConditional(0xc10dc2adf8)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:495 +0xfe fp=0xc10dc07390 sp=0xc10dc06df0 pc=0x68b89e
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).ParseExpression(...)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:479
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseTupleCons(0xc10dc2adf8)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:1285 +0x514 fp=0xc10dc07a30 sp=0xc10dc07390 pc=0x696474
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseExpressionTerm(0xc10dc2adf8)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:1089 +0x1d33 fp=0xc10dc08988 sp=0xc10dc07a30 pc=0x694193
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseExpressionWithTraversals(0x29?)
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:619 +0x25 fp=0xc10dc08a08 sp=0xc10dc08988 pc=0x68d125
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0x13f5c80?, {0xc000122e08?, 0x3d?, 0x161?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:549 +0x139 fp=0xc10dc08f38 sp=0xc10dc08a08 pc=0x68c5d9
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122e08?, 0x3d?, 0x161?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc09468 sp=0xc10dc08f38 pc=0x68c525
github.com/hashicorp/hcl/v2/hclsyntax.(*parser).parseBinaryOps(0xc10dc2adf8, {0xc000122e00?, 0x3d?, 0x161?})
	/home/circleci/go/pkg/mod/github.com/hashicorp/hcl/[email protected]/hclsyntax/parser.go:563 +0x85 fp=0xc10dc09998 sp=0xc10dc09468 pc=0x68c525

State file for multi-region deployment

Let's say I have the following directory structure. The topmost terragrunt.hcl under account-2345 has an S3 bucket parameter associated with the remote state file; let's say I set it to us-east-1. Now suppose us-east-1 is down, my environment in us-west-2 is still running, and I shift my services over there. I need to update my MySQL cluster and therefore need to modify my TF code. How would I do this if I can't get to my state file?

Would this require me to replicate the bucket to us-west-2 and modify my terragrunt.hcl file to point to that bucket?

account-2345/
├── terragrunt.hcl
├── us-east-1/
    └── prod/
        ├── mysql/
        │   └── terragrunt.hcl
        └── webserver-cluster/
            └── terragrunt.hcl
└── us-west-2/
    └── prod/
        ├── mysql/
        │   └── terragrunt.hcl
        └── webserver-cluster/
            └── terragrunt.hcl

Sample code to extract variable value from hcl

Hi,
Can we have sample code (a .tf file) to understand how these variable values can be passed through to the Terraform code? I see in this repo all the hcl files and their structure along the child folders. However, how these variables can easily be consumed in code is not shown.

Please add a couple of code examples.

Thanks
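
As a hedged sketch of how the wiring works: every key set in a terragrunt.hcl inputs block is passed to Terraform as a variable, so the module only needs to declare a variable block with the same name and use it like any other Terraform variable. The names below are made up for illustration:

# terragrunt.hcl (sketch)
inputs = {
  instance_type = "t3.micro"
}

# variables.tf in the module (sketch)
variable "instance_type" {
  type = string
}

# main.tf in the module (sketch)
resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder AMI, for illustration only
  instance_type = var.instance_type
}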

Module dependencies example

Based on this example, I created two modules that have a dependency on each other: DB to APP.

The RDS module depends on my App module for vpc_id and subnets.

Based on the dependencies section in the docs, I created the following setup:
But when I run plan-all in stage I get asked to enter vpc_id and subnets, which should not be the case.

myProject: 
  modules:
    app:
      - main.tf
      - terragrunt.hcl # empty file
    rds:
      - main.tf
      - terragrunt.hcl 
  prod:
  stage:
    eu-west-1:
      db:
        - terragrunt.hcl

The content of myProject/modules/rds/terragrunt.hcl is

dependency "vpc" {
  config_path = "../app"
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_subnets
}

This is the content of my myProject/stage/eu-west-1/db/terragrunt.hcl

locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  region_vars = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  env        = local.environment_vars.locals.environment
  aws_region = local.region_vars.locals.aws_region
}

terraform {
  source = "${get_parent_terragrunt_dir()}/modules/rds/"
}
include {
  path = find_in_parent_folders()
}

inputs = {
  name = "sample-rds"
}

Maybe you could extend your example project and add module dependencies.

Errors running the examples out-of-the-box: Classic Load Balancers do not support Availability Zone 'us-east-1f'

Hi, can anybody suggest what's going wrong here? The IAM role associated with the key has full administrator access, btw... Thanks

Error: Error applying plan:
2 error(s) occurred:

  • aws_elb.webserver_example: 1 error(s) occurred:

  • aws_elb.webserver_example: InvalidConfigurationRequest: Classic Load Balancers do not support Availability Zone 'us-east-1f'.
    status code: 409, request id: 8784f5f0-d2c8-11e8-a52c-93b5e336c90a

  • aws_security_group_rule.elb_allow_all_outbound: 1 error(s) occurred:

  • aws_security_group_rule.elb_allow_all_outbound: Error authorizing security group rule type egress: InvalidParameterValue: Only Amazon VPC security groups may be used with this operation.
    status code: 400, request id: 5bb3fc81-26f5-4be5-af82-405861757905

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

[terragrunt] 2018/10/18 12:25:35 Hit multiple errors:
exit status 1

How to inherit parent terragrunt.hcl inputs block

In prod/terragrunt.hcl there's an inputs block like so:

# Configure root level variables that all resources can inherit. This is especially helpful with multi-account configs
# where terraform_remote_state data sources are placed directly into the modules.
inputs = merge(
  # Configure Terragrunt to use common vars encoded as yaml to help you keep often-repeated variables (e.g., account ID)
  # DRY. We use yamldecode to merge the maps into the inputs, as opposed to using varfiles due to a restriction in
  # Terraform >=0.12 that all vars must be defined as variable blocks in modules. Terragrunt inputs are not affected by
  # this restriction.
  yamldecode(
    file("${get_terragrunt_dir()}/${find_in_parent_folders("region.yaml", local.default_yaml_path)}"),
  ),
  yamldecode(
    file("${get_terragrunt_dir()}/${find_in_parent_folders("env.yaml", local.default_yaml_path)}"),
  ),
  {
    aws_profile                  = "prod"
  },
)

How can child terragrunt.hcl files reference variables such as env and region?

[QUESTION] Why do we need a duplicate read_terragrunt_config block in children?

In the example, the root terragrunt.hcl has this block:

locals {
  # Automatically load account-level variables
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))

  # Automatically load region-level variables
  region_vars = read_terragrunt_config(find_in_parent_folders("region.hcl"))

  # Automatically load environment-level variables
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))

  # Extract the variables we need for easy access
  account_name = local.account_vars.locals.account_name
  account_id   = local.account_vars.locals.aws_account_id
  aws_region   = local.region_vars.locals.aws_region
}
....
inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.environment_vars.locals,
)

and in the child non-prod/us-east-1/qa/mysql/terragrunt.hcl we also have

locals {
  # Automatically load environment-level variables
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))

  # Extract out common variables for reuse
  env = local.environment_vars.locals.environment
}

Why do we need to duplicate the read_terragrunt_config block for env.hcl? I thought that inputs = merge configures root-level variables that all resources can inherit?

[Question] How do I set different iam_role for prod and non-prod folders ?

How do I set different iam_role for prod and non-prod folders?

Considering this PR from 2018 => https://github.com/gruntwork-io/terragrunt/pull/599/files#diff-04c6e90faac2675aa89e2176d2eec7d8, it seems that I can configure a specific iam_role to be used by terragrunt (and terraform).

My goal is to have a CI/CD (Atlantis) to assume roles when executing the terragrunt command.

How can I set up one role for non-prod folder and a different one for the prod folder?

Getting error for "AWS Account ID not allowed"

Hi, I have an issue trying to run this out of the box - I get the error "AWS Account ID not allowed".
After some digging around, I understood that I need to change account.hcl.

I was wondering about two things -

  1. Should this be added to the README as it is not mentioned anywhere except the comment in the file itself?
  2. Is there a way to inject this var using an environment variable or something like that, so it wouldn't be necessary to change the files?

Thanks

looks like this works only with Terraform 0.12.29

Trying with 0.12.24 fails.
Trying with 0.13.0 fails as well, giving an error that says you should use 0.12.29.

so I suggest:

  1. support other terraform versions
    or
  2. fix the documentation readme

[QUESTION] Including a _global folder

I am using this repository as an example to create my infrastructure and I really appreciate it!
In the documentation, it is mentioned that you could include a _global folder within the account folder:

account
_global
us-east-1

When I run terragrunt plan from within the _global folder based on this set-up, terragrunt mentions ParentFileNotFound, could not find a region.hcl in any of the parent folders.

This makes sense, as the root terragrunt.hcl tries to open region.hcl and its entries.

To keep the main code intact, I have now created an extra folder, with the region as us-east-1, as a workaround, like this:

account
_global
region.hcl
_global

This works, but it doesn't seem like a clean way. What do you think is the right approach to mitigate this?

Represent map in the yaml file

Please share how we can specify a map variable like tags in the xxx.yaml file.

I tried the following, but it failed with the error below:

xxx.yaml

tags : 
  Terraform   : "true"
  Project     : "ics-dlt"
  Environment : "common"

error


  on .terraform/modules/remote_state/dynamo.tf line 21, in resource "aws_dynamodb_table" "lock":
  21:   tags = var.tags
    |----------------
    | var.tags is "{\"Environment\":\"common\",\"Project\":\"ics-dlt\",\"Terraform\":\"true\"}"

Inappropriate value for attribute "tags": map of string required.

How to access global variables from application/service layer

In the main terragrunt.hcl file, the following are merged:

# Configure root level variables that all resources can inherit. This is especially helpful with multi-account configs
# where terraform_remote_state data sources are placed directly into the modules.
inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.environment_vars.locals,
)

But in the example they are never accessed; how do I access them from subdirectories?

Unused bucket and region variables in account.tfvars

account.tfvars

# Root level variables that all modules can inherit. This is especially helpful with multi-account configs
# where terraform_remote_state data sources are placed directly into the modules.

tfstate_global_bucket        = "terragrunt-state-global-non-prod"
tfstate_global_bucket_region = "us-east-1"
aws_profile                  = "non-prod"

Were tfstate_global_bucket and tfstate_global_bucket_region meant to be used in terraform.tfvars?

Input dependency value with if statement

I checked the documentation but cannot find an answer.

Is it possible to pass conditional dependency?

dependency "eks" {
  config_path = "${dirname(find_in_parent_folders("env.hcl"))}/eks"

  mock_outputs = {
    cluster_arn  = "temporary-cluster-arn"
    cluster_id   = "temporary-cluster-id"
    cluster_name = "temporary-cluster-name"
  }
}

inputs = {
  cluster_name   = "${dependency.eks.outputs.cluster_name}" == "temporary-cluster-name" : "${dependency.eks.outputs.cluster_id}" ? "${dependency.eks.outputs.cluster_name}"
}

Getting Error:

Missing attribute separator; Expected a newline or comma to mark the beginning of the next attribute.
ERRO[0000] Unable to determine underlying exit code, so Terragrunt will exit with error code 1

The idea is to pass cluster_id if cluster_name is not returned.
There is a new version of the module where cluster_id is replaced with cluster_name.
I have multiple environments running and cannot change them all in one go. Looking for a solution.
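
For reference, HCL's conditional operator is written condition ? true_value : false_value, so the input above would look roughly like this sketch (same names as in the question):

inputs = {
  # Fall back to cluster_id when the dependency only returns the mock name
  cluster_name = dependency.eks.outputs.cluster_name == "temporary-cluster-name" ? dependency.eks.outputs.cluster_id : dependency.eks.outputs.cluster_name
}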

Path duplication on lookup

I am trying to run this example code here:

in /Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1

I execute terragrunt plan-all and get an error that the files can't be found. It seems the lookup uses the wrong path.

prod/us-east-1/stage/webserver-cluster/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/region.yaml., and 1 other diagnostic(s)
[terragrunt] 2020/03/05 15:21:37 Encountered the following errors:
/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/terragrunt.hcl:38,5-10: Error in function call; Call to function "file" failed: no file exists at /Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/qa/mysql/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/region.yaml., and 1 other diagnostic(s)
/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/terragrunt.hcl:38,5-10: Error in function call; Call to function "file" failed: no file exists at /Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/qa/webserver-cluster/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/region.yaml., and 1 other diagnostic(s)
/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/terragrunt.hcl:38,5-10: Error in function call; Call to function "file" failed: no file exists at /Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/stage/mysql/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/region.yaml., and 1 other diagnostic(s)
/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/terragrunt.hcl:38,5-10: Error in function call; Call to function "file" failed: no file exists at /Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/stage/webserver-cluster/Users/vadim/Development/temp/terragrunt-infrastructure-live-example/non-prod/us-east-1/region.yaml., and 1 other diagnostic(s)
Terraform v0.12.21
terragrunt version v0.23.0

MacOS Monterey - bad CPU type in executable: terragrunt

Describe the bug
Downloaded the latest terragrunt binary, renamed it to terragrunt, added execute permissions, and placed the binary on my PATH.
However, terragrunt does not run.
Error: zsh: bad CPU type in executable: terragrunt

Expected behavior
Terragrunt should be installed and terragrunt commands should run.

terragrunt apply asking for variable input

Hello,

I did a very similar project (basically copy/pasted this one), but I have a problem when executing terragrunt apply in a specific module implementation (for example, in non-prod/us-east-1/qa/mysql): terragrunt asks me for variable input for values that are already defined in the current level's terraform.tfvars (for example instance_class, allocated_storage, etc.).

When I look at the details of the executed terraform command, it seems that the current directory's .tfvars file is in fact not picked up:

terraform apply -var-file=/home/[....]/mariadb/../environment.tfvars -var-file=/home/[...]/mariadb/../../global.tfvars

I think there should probably be a -var-file=terraform.tfvars, but I don't know why it's not included.
