
caf-terraform-landingzones's Introduction

landingzones


Azure Terraform SRE

Azure Terraform SRE (formerly CAF Terraform) ambitions:

  • Equip the Site Reliability Engineering teams for Terraform on Azure.
  • Democratize IaC through Infrastructure-as-Configuration.
  • Commoditize state management and enterprise-wide composition.
  • Standardize deployments leveraging official Azure landing zones components.
  • Propose prescriptive guidance on how to enable DevOps for infrastructure as code on Microsoft Azure.
  • Foster a community of Azure Terraformers sharing a common set of practices.

You can review the different components of the Azure Terraform SRE and watch the quick intro video below:

caf_elements

🚀 Getting started

When starting an enterprise deployment, we recommend creating a configuration repository where you craft the configuration files for your environments.

The best way to start is to clone the platform starter repository and get started with the configuration files.

If you are reading this, you are probably also interested in the documentation: 📚 Read our centralized documentation page

Community

Feel free to open an issue for a feature request or bug report, or to submit a PR.

If you have any questions, you can reach out to tf-landingzones at microsoft dot com.

You can also reach us on Gitter.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

Code of conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

caf-terraform-landingzones's People

Contributors

arnaudlh, berendvw, bobalong79, brk3, chianw, chinmh, davesee, eedorenko, genegc, heintonny, hieumoscow, hyperfocus1337, ianlim-cldcvr, iriahk89, jorseng, laurentlesle, leethanh2112, mingheng92, naeemdhby, nunocenteno, nyuen, pavgup, raffertyuy, raketham, ralacher, rickychew77, seanlok, sharmilamusunuru, swetha-sundar, tschwarz01


caf-terraform-landingzones's Issues

[bug] error while deploying foundations with launchpad_light

Describe the bug
Error: Error retrieving Log Analytics Solution "KeyVaultAnalytics(elmp-la-caflalogs-sg-dBHLpbzyOyfh9LGjPH2BVOAH0oVqq9CASOjGVIOW0k)" (Resource Group "elmp-rg-hub-operations-weu-yYEbO0OnzTIE2ERG1MYXp9gRDVL8GC3oDUSa8kLREqVgG9MTUG1pi"): operationsmanagement.SolutionsClient#Get: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: ""Solution Not Found : solutionType KeyVaultAnalytics, workspace elmp-la-caflalogs-sg-dBHLpbzyOyfh9LGjPH2BVOAH0oVqq9CASOjGVIOW0k"" error: json: cannot unmarshal string into Go value of type azure.RequestError

on /home/vscode/.terraform.cache/modules/foundations_accounting.log_analytics/module.tf line 23, in resource "azurerm_log_analytics_solution" "la_solution":
23: resource "azurerm_log_analytics_solution" "la_solution" {

To Reproduce
rover -lz /tf/caf/landingzones/landingzone_caf_foundations -a apply

Expected behavior
No error

Screenshots

Configuration (please complete the following information):

  • OS and version: WSL2 with Ubuntu 20
  • Version of the rover: aztfmod/rover:2009.0210
  • Version of the landing zone: how to get it?

Additional context
Here is the .tfvars file

# Sample Cloud Adoption Framework foundations landing zone

## globalsettings
global_settings = {
  #specifies the set of locations you are going to use in this landing zone
  location_map = {
    region1 = "westeurope"
  }

  #naming convention to be used as defined in naming convention module, accepted values are cafclassic, cafrandom, random, passthrough
  convention = "cafrandom"

  #Set of tags for core operations
  tags_hub = {
    owner          = "CAF"
    deploymentType = "Terraform"
    costCenter     = "1664"
    BusinessUnit   = "SHARED"
    DR             = "NON-DR-ENABLED"
  }

  # Set of resource groups to land the foundations
  resource_groups_hub = {
    region1 = {
      HUB-CORE-SEC = {
        name     = "hub-core-sec-weu"
        location = "westeurope"
      }
      HUB-OPERATIONS = {
        name     = "hub-operations-weu"
        location = "westeurope"
      }
    }
  }
}

## accounting settings
accounting_settings = {

  # Azure diagnostics logs retention period
  region1 = {
    # Azure Subscription activity logs retention period
    azure_activity_log_enabled    = false
    azure_activity_logs_name      = "actlogs"
    azure_activity_logs_event_hub = false
    azure_activity_logs_retention = 365
    azure_activity_audit = {
      log = [
        # ["Audit category name", "Audit enabled"]
        ["Administrative", true],
        ["Security", true],
        ["ServiceHealth", true],
        ["Alert", true],
        ["Recommendation", true],
        ["Policy", true],
        ["Autoscale", true],
        ["ResourceHealth", true],
      ]
    }
    azure_diagnostics_logs_name      = "diaglogs"
    azure_diagnostics_logs_event_hub = false

    #Logging and monitoring 
    analytics_workspace_name = "caflalogs-sg"

    ##Log analytics solutions to be deployed 
    solution_plan_map = {
      NetworkMonitoring = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/NetworkMonitoring"
      },
      ADAssessment = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ADAssessment"
      },
      ADReplication = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ADReplication"
      },
      AgentHealthAssessment = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/AgentHealthAssessment"
      },
      DnsAnalytics = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/DnsAnalytics"
      },
      ContainerInsights = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ContainerInsights"
      },
      KeyVaultAnalytics = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/KeyVaultAnalytics"
      }
    }
  }
}

## governance
governance_settings = {
  region1 = {
    #current code supports only two levels of management groups and one root
    deploy_mgmt_groups = false
    management_groups = {
      root = {
        name          = "caf-rootmgmtgroup"
        subscriptions = []
        #list your subscriptions ID in this field as ["GUID1", "GUID2"]
        children = {
          child1 = {
            name          = "tree1child1"
            subscriptions = []
          }
          child2 = {
            name          = "tree1child2"
            subscriptions = []
          }
          child3 = {
            name          = "tree1child3"
            subscriptions = []
          }
        }
      }
    }

    policy_matrix = {
      #autoenroll_asc          = true - to be implemented via builtin policies
      autoenroll_monitor_vm = true
      autoenroll_netwatcher = false

      no_public_ip_spoke     = false
      cant_create_ip_spoke   = false
      managed_disks_only     = true
      restrict_locations     = false
      list_of_allowed_locs   = ["westeurope"]
      restrict_supported_svc = false
      list_of_supported_svc  = ["Microsoft.Network/publicIPAddresses", "Microsoft.Compute/disks"]
      msi_location           = "westeurope"
    }
  }
}

## security 
security_settings = {
  #Azure Security Center Configuration 
  enable_security_center = false
  security_center = {
    contact_email       = "[email protected]"
    contact_phone       = ""
    alerts_to_admins    = true
    alert_notifications = true
  }
  #Enables Azure Sentinel on the Log Analytics workspace
  enable_sentinel = true
}

[feature] Implement Azure Monitor alerts

Scenario

Implement prototype of Azure Monitor alerts to demonstrate capabilities.

This should be provided per landing zone, and each landing zone is responsible for its alerting triggers in the first release.

  • landingzone_vdc_demo
  • landingzone_hub_spoke
  • landingzone_caf_foundations
  • landingzone_secure_vnet_dmz
  • landingzone_hub_mesh

Proposal for landingzone_caf_foundations:

Implement Azure Monitor - Service Health Alerts

rover landingzone terraform destroy not working with service principal

Describe the bug

I started the Azure session with a service principal and created resources in the launchpad. But when I try to destroy the resources with the same service principal, rover complains that it is not the same user that initialized the launchpad landing zone.

To Reproduce
Start the Azure session using a service principal:
az login --service-principal -u '$(ARM_CLIENT_ID)' -p '$(ARM_CLIENT_SECRET)' --tenant '$(ARM_TENANT_ID)'
az account set -s $(ARM_SUBSCRIPTION_ID)
export ARM_CLIENT_ID=$(ARM_CLIENT_ID)
export ARM_CLIENT_SECRET=$(ARM_CLIENT_SECRET)
export ARM_TENANT_ID=$(ARM_TENANT_ID)
export ARM_SUBSCRIPTION_ID=$(ARM_SUBSCRIPTION_ID)

rover -lz /tf/caf/landingzones/launchpad/ -a apply -launchpad -env Dev
rover -lz /tf/caf/landingzones/launchpad/ -a destroy -launchpad -env Dev

Expected behavior
The resources are destroyed without error.

Screenshots
image

Configuration
Version of the rover: aztfmod/rover:2009.0210
Version of the landing zone : v8.0.2001

Enable deployment of landing zones from Rover to multiple subscriptions

Is your feature request related to a problem? Please describe.
It seems that the Launchpad (Rover) and CAF Terraform Landing Zone examples are based on a single subscription model.

There are scenarios where it would make sense to support deploying a landing zone to a different subscription, for example setting up central logging and monitoring ("caf foundations") so that it resides in a separate subscription from the production subscription. This could also include something like logging policies enforced at the management group level.

Are there plans to support deployment over several subscriptions from a single Launchpad (Rover)?

Describe the solution you'd like

rover /tf/caf/landingzones/landingzone_caf_foundations plan -var 'target_subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

Describe alternatives you've considered

Currently, it seems that one should stick with just one "primary subscription". Launchpad (Level 0) is tied to a single subscription and the other Levels along with it.

If deploying resources to multiple subscriptions, they are considered separate environments, and each has its own launchpad.
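For context, plain Terraform can already target a second subscription through provider aliasing; below is a minimal sketch of that mechanism (the variable and resource names are hypothetical, and this is not wired into rover today):

```hcl
# Hypothetical sketch: an aliased azurerm provider pointing at another
# subscription, passed explicitly to the resources that should land there.
variable "target_subscription_id" {
  description = "Subscription hosting central logging and monitoring"
}

# Default provider for the primary subscription.
provider "azurerm" {
  features {}
}

# Aliased provider for the logging subscription.
provider "azurerm" {
  alias           = "logging"
  subscription_id = var.target_subscription_id
  features {}
}

# This resource group is created in the target subscription.
resource "azurerm_resource_group" "central_logging" {
  provider = azurerm.logging
  name     = "rg-central-logging"
  location = "westeurope"
}
```

The open question in this issue is less about the Terraform mechanics and more about how the launchpad would manage state and credentials across those subscriptions.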

Additional context

Single subscription model:
https://docs.microsoft.com/fi-fi/azure/cloud-adoption-framework/ready/landing-zone/terraform-landing-zone#design-decisions

Networking for spokes:
https://github.com/Azure/caf-terraform-landingzones/blob/master/landingzones/landingzone_hub_spoke/readme.md#introduction-to-hub-spoke-network-topology-in-azure-landing-zone

Level 200-single-region-hub - Fails when deploying diagnostic settings

Level 200-single-region-hub - Fails when deploying diagnostic settings.

Steps to Reproduce.

Following Getting Started Documents:

https://github.com/Azure/caf-terraform-landingzones/blob/master/documentation/getting_started/getting_started.md

Deployed Launchpad
Deployed Foundations
Attempted to deploy level 200-single-region-hub scenario

rover -lz /tf/caf/landingzones/caf_networking \
  -level level2 \
  -var-folder /tf/caf/landingzones/caf_networking/scenario/200-single-region-hub \
  -a plan

Multiple errors all related to diagnostic definitions

Error: Error in function call

on /home/vscode/.terraform.cache/modules/networking/modules/diagnostics/module.tf line 8, in resource "azurerm_monitor_diagnostic_setting" "diagnostics":
8: name = try(format("%s%s", try(var.global_settings.prefix_with_hyphen, ""), each.value.name), format("%s%s", try(var.global_settings.prefix_with_hyphen, ""), var.diagnostics.diagnostics_definition[each.value.definition_key].name))
|----------------
| each.value is object with 3 attributes
| each.value.definition_key is "network_security_group"
| var.diagnostics.diagnostics_definition is object with 3 attributes
| var.global_settings is object with no attributes

Call to function "try" failed: no expression succeeded:

  • Unsupported attribute (at
    /home/vscode/.terraform.cache/modules/networking/modules/diagnostics/module.tf:8,102-107)
    This object does not have an attribute named "name".
  • Invalid index (at
    /home/vscode/.terraform.cache/modules/networking/modules/diagnostics/module.tf:8,212-239)
    The given key does not identify an element in this collection value.

At least one expression must produce a successful result.
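For context, Terraform's try() evaluates its arguments in order and returns the first one that succeeds; it errors only when every expression fails, which is what happens here because global_settings has no attributes and the definition key is missing from the map. A minimal illustration (values hypothetical):

```hcl
locals {
  global_settings = {} # no attributes, as in the error above
  diagnostics_definition = {
    operational_logs = { name = "opslogs" }
  }

  # First expression fails (no "prefix_with_hyphen" attribute on an
  # empty object), so try() falls back to the empty string.
  prefix = try(local.global_settings.prefix_with_hyphen, "")

  # Here the key exists, so the first expression succeeds. If the key
  # were missing and no fallback were given, try() would raise the same
  # "no expression succeeded" error shown in this issue.
  name = try(local.diagnostics_definition["operational_logs"].name, "fallback")
}
```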

Expected behavior
Example scenario should work

Rover CAF foundation plan fails with authorization error

Describe the bug
Following the steps in https://github.com/Azure/caf-terraform-landingzones/blob/master/documentation/getting_started/getting_started.md.
Rover is downloaded and integrated with VS code. Rover logs in to correct subscription (my internal Azure sub).
launchpad /tf/launchpads/launchpad_opensource_light apply --> selected westeurope
While doing rover /tf/caf/landingzones/landingzone_caf_foundations plan --> an authentication error occurs

To Reproduce
2020-06-19T02:08:23.888Z [DEBUG] plugin.terraform-provider-azurerm_v2.8.0_x5: X-Ms-Keyvault-Region: westeurope
2020-06-19T02:08:23.888Z [DEBUG] plugin.terraform-provider-azurerm_v2.8.0_x5: X-Ms-Keyvault-Service-Version: 1.1.6.0
2020-06-19T02:08:23.888Z [DEBUG] plugin.terraform-provider-azurerm_v2.8.0_x5: X-Ms-Request-Id: 50ee2acc-5233-4d4e-b672-acc306b7fe2d
2020-06-19T02:08:23.888Z [DEBUG] plugin.terraform-provider-azurerm_v2.8.0_x5: X-Powered-By: ASP.NET
2020-06-19T02:08:23.888Z [DEBUG] plugin.terraform-provider-azurerm_v2.8.0_x5:
2020-06-19T02:08:23.888Z [DEBUG] plugin.terraform-provider-azurerm_v2.8.0_x5: {"value":"72f988bf-86f1-41af-91ab-2d7cd011db47","contentType":"","id":"https://rvsvs-kv-level0-b0k1rfjf.vault.azure.net/secrets/launchpad-tenant-id/e2ca2cc7f3ee421ab5fb160c18b409b1","attributes":{"enabled":true,"created":1592562911,"updated":1592562911,"recoveryLevel":"Purgeable"},"tags":{}}
2020/06/19 02:08:23 [TRACE] : eval: *terraform.EvalWriteState
2020/06/19 02:08:23 [TRACE] EvalWriteState: recording 10 dependencies for azurerm_key_vault_secret.launchpad_tenant_id
2020/06/19 02:08:23 [TRACE] EvalWriteState: writing current state object for azurerm_key_vault_secret.launchpad_tenant_id
2020/06/19 02:08:23 [TRACE] [walkRefresh] Exiting eval tree: azurerm_key_vault_secret.launchpad_tenant_id
2020/06/19 02:08:23 [TRACE] vertex "azurerm_key_vault_secret.launchpad_tenant_id": visit complete
2020/06/19 02:08:23 [TRACE] vertex "azurerm_key_vault_secret.launchpad_tenant_id": dynamic subgraph completed successfully
2020/06/19 02:08:23 [TRACE] vertex "azurerm_key_vault_secret.launchpad_tenant_id": visit complete
2020/06/19 02:08:23 [TRACE] dag/walk: upstream of "provider.azurerm (close)" errored, so skipping
2020/06/19 02:08:23 [TRACE] dag/walk: upstream of "root" errored, so skipping
2020/06/19 02:08:23 [TRACE] statemgr.Filesystem: removing lock metadata file /home/vscode/.terraform.cache/tfstates/level0/.launchpad_opensource_light.tfstate.lock.info
2020/06/19 02:08:23 [TRACE] statemgr.Filesystem: unlocking /home/vscode/.terraform.cache/tfstates/level0/launchpad_opensource_light.tfstate using fcntl flock
Error: Error reading queue properties for AzureRM Storage Account "rvsvsstdiagykpwt2idndntq": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthenticationFailed" Message="Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:68da4bbe-3003-0056-2b26-4654c0000000\nTime:2020-06-19T10:44:57.3884742Z"

2020-06-19T02:08:23.930Z [DEBUG] plugin: plugin process exited: path=/home/vscode/.terraform.cache/plugins/linux_amd64/terraform-provider-azurerm_v2.8.0_x5 pid=4970
2020-06-19T02:08:23.931Z [DEBUG] plugin: plugin exited
Error on or near line 459: Error running terraform plan; exiting with status 2000

Screenshots
image

Configuration (please complete the following information):

  • Version of the rover: aztfmod/rover:2005.1510
  • Version of the landing zone:

Additional context
While I am in the correct subscription and have generated a service principal using az ad sp create-for-rbac, where does the authentication fail?

Possible Bug in Launchpad_Light deploy, likely due to operator error.

Describe the bug
After an initial apply and destroy, I changed variables in the launchpad variables.tf for the naming convention (from cafrandom to cafclassic) and the location (from southeastasia to westus), then ran "apply". All resources were deleted and recreated correctly, from what I can tell. However, I received "Error releasing the state lock," which points at the original "launchpad.tfstate" file on the (now deleted) randomly named key vault. Is it OK to simply ignore this?

Authentication failure when launching launchpad /tf/launchpads/launchpad_opensource_light apply -var 'location=southeastasia'

azurerm_storage_account.stg: Still creating... [9m10s elapsed]
module.diagnostics.azurerm_storage_account.log: Still creating... [9m10s elapsed]

Error: Error reading queue properties for AzureRM Storage Account "sttfstate7jqzn33ii0iwrl9": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthenticationFailed" Message="Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:36b4f7b6-d003-00a1-2ace-ea60af000000\nTime:2020-02-24T04:53:18.7121287Z"

on storage.tf line 17, in resource "azurerm_storage_account" "stg":
17: resource "azurerm_storage_account" "stg" {

Error: Error reading queue properties for AzureRM Storage Account "stdiag10voa5yc5q9c02stly": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthenticationFailed" Message="Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:c4d94076-6003-003b-6dce-eaec6a000000\nTime:2020-02-24T04:53:18.9083854Z"

on /home/vscode/.terraform.cache/modules/diagnostics/aztfmod-terraform-azurerm-caf-diagnostics-logging-3e5f516/module.tf line 21, in resource "azurerm_storage_account" "log":
21: resource "azurerm_storage_account" "log" {

Calling workspace_create function
Create sandpit workspace

Authentication failure. This may be caused by either invalid account key, connection string or sas token value provided for your storage account.

Dev Container fail, not able to run rover on WSL2 [bug]

When I start the Dev Container in VS Code connected to WSL2, I get the following failure:
docker: Error response from daemon: \wsl\Ubuntu-18.04\home\rofa\myWork%!(EXTRA
string=is not a valid Windows path).

I am not able to start the Dev Container. Two questions:

  1. How do I use Rover without Dev Container?
  2. Can I run tf directly without using Rover?

Thanks!

CAF Foundation landing zone - deployment error

Describe the bug
While deploying caf-terraform-landingzones, I get an error that the resources already exist.
Error: A resource with the ID "/subscriptions/18ab9f36-e50d-482f-919a-bbfb490d4f4c/providers/Microsoft.Authorization/policyAssignments/vm_auto_monitor" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_policy_assignment" for more information.

on blueprint_foundations_governance/policies/builtin/enable_az_monitor.tf line 4, in resource "azurerm_policy_assignment" "vm_auto_monitor":
4: resource "azurerm_policy_assignment" "vm_auto_monitor" {

Error: A resource with the ID "/subscriptions/18ab9f36-e50d-482f-919a-bbfb490d4f4c/providers/Microsoft.Authorization/policyAssignments/vm_no_managed_disks" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_policy_assignment" for more information.

on blueprint_foundations_governance/policies/builtin/managed_disks.tf line 4, in resource "azurerm_policy_assignment" "pol_managed_disks_assignment":
4: resource "azurerm_policy_assignment" "pol_managed_disks_assignment" {

Error: A resource with the ID "/subscriptions/18ab9f36-e50d-482f-919a-bbfb490d4f4c/providers/microsoft.insights/logprofiles/default" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_monitor_log_profile" for more information.

on /home/vscode/.terraform.cache/modules/blueprint_foundations_accounting.activity_logs/terraform-azurerm-caf-activity-logs-2.0/module.tf line 47, in resource "azurerm_monitor_log_profile" "subscription":
47: resource "azurerm_monitor_log_profile" "subscription" {

Error on or near line 483: Error running terraform apply; exiting with status 2001

I have checked my subscription; nothing was deployed in the southeastasia region, and I had only the resources deployed by the preparation steps:
rover /tf/caf/landingzones/launchpad apply -launchpad
Subscription resources Status prior to CAF foundation landing zone deployment
image

Why does the error report that vm_auto_monitor, vm_no_managed_disks and logprofiles already exist?
I have destroyed the whole deployment and started fresh as well, but it reports the same error.

please help.

thanks,
Manavi

Use azurerm_key_vault_access_policy to set access policy

Is your feature request related to a problem? Please describe.
The current launchpad configures the key vault access_policy inline in azurerm_key_vault instead of using the separate azurerm_key_vault_access_policy resource. This prevents adding other users to the key vault, via Terraform code in other Lx blueprints, after the launchpad has been deployed.

The current launchpad does not provide a good way to manage multi-user access for code updates when not using CI/CD. I had to implement separate AAD security groups to manage members using an L1 blueprint and assign this security group to the tfstate resource group for Storage Blob Data Contributor access. The issue is that I can't do the same with the current code structure for the key vault store. (Well, technically you can, but the azurerm Terraform provider documentation clearly indicates that using both azurerm_key_vault inline policies and azurerm_key_vault_access_policy will cause issues.)

Describe the solution you'd like
Move the access_policy block from azurerm_key_vault to a separate azurerm_key_vault_access_policy resource.
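A sketch of the proposed split, assuming azurerm 2.x and hypothetical names: the key vault drops its inline access_policy blocks, and policies become standalone resources that other blueprints can add to later.

```hcl
resource "azurerm_key_vault" "launchpad" {
  name                = "kv-launchpad-example"
  location            = "westeurope"
  resource_group_name = "rg-launchpad"
  tenant_id           = var.tenant_id
  sku_name            = "standard"
  # no inline access_policy blocks here
}

# Standalone policy: additional principals (e.g. an AAD security group)
# can be granted access from other Lx blueprints without touching the
# key vault resource itself.
resource "azurerm_key_vault_access_policy" "launchpad_group" {
  key_vault_id       = azurerm_key_vault.launchpad.id
  tenant_id          = var.tenant_id
  object_id          = var.security_group_object_id
  secret_permissions = ["get", "list"]
}
```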

Describe alternatives you've considered
Manually add the other user access policy via the portal... but I would rather do that properly through code to manage who has access to launchpad resources.

Additional context
When trying to destroy a launchpad as a different user than the one who created it, I was getting:

- keyvault_name: kv-launchpad-hc26vvr07am
The user, group or application 'appid=04b07795-8ddb-461a-bbee-02f9e1bf7b46;oid=446fe20e-1b5c-45fc-a04e-5ae7cfb66684;numgroups=13;iss=https://sts.windows.net/4e1ed7ae-062e-4ec8-b989-de8cbd452c54/' does not have secrets get permission on key vault 'kv-launchpad-hc26vvr07am;location=canadacentral'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287
 - Name: 
Error on or near line 326: Not authorized to manage landingzones. User must be member of the security group to access the launchpad and deploy a landing zone; exiting with status 102

[bug] 300 scenario not working

Describe the bug
Attempted to run the 300 scenario in the vnext branch and it would not plan.

To Reproduce

[vscode@a8f4f57d9e9e caf]$ rover -lz /tf/caf/landingzones/caf_launchpad -launchpad -var-file /tf/caf/landingzones/caf_launchpad/scenario/300/configuration.tfvars -parallelism=30 -a plan

  /$$$$$$   /$$$$$$  /$$$$$$$$       /$$$$$$$                                        
 /$$__  $$ /$$__  $$| $$_____/      | $$__  $$                                       
| $$  \__/| $$  \ $$| $$            | $$  \ $$  /$$$$$$  /$$    /$$/$$$$$$   /$$$$$$ 
| $$      | $$$$$$$$| $$$$$         | $$$$$$$/ /$$__  $$|  $$  /$$/$$__  $$ /$$__  $$
| $$      | $$__  $$| $$__/         | $$__  $$| $$  \ $$ \  $$/$$/ $$$$$$$$| $$  \__/
| $$    $$| $$  | $$| $$            | $$  \ $$| $$  | $$  \  $$$/| $$_____/| $$      
|  $$$$$$/| $$  | $$| $$            | $$  | $$|  $$$$$$/   \  $/ |  $$$$$$$| $$      
 \______/ |__/  |__/|__/            |__/  |__/ \______/     \_/   \_______/|__/      
                                                                                     
                                                                                                                                                           
              version: aztfmod/roveralpha:2009.180404


mode                          : 'launchpad'
terraform command output file : ''
tf_action                     : 'plan'
command and parameters        : '-var-file /tf/caf/landingzones/caf_launchpad/scenario/300/configuration.tfvars -parallelism=30'
level (current)               : 'level0'
environment                   : 'sandpit'
workspace                     : 'tfstate'
tfstate                       : 'caf_launchpad.tfstate'

@calling process_actions
@calling verify_azure_session
Checking existing Azure session
@calling verify_parameters
landingzone                   : '/tf/caf/landingzones/caf_launchpad'
@calling get_storage_id
No launchpad found.
Deploying from scratch the launchpad
@calling initialize_state
Installing launchpad from /tf/caf/landingzones/caf_launchpad
@calling_get_logged_user_object_id
 - logged in user objectId: 59547027-f863-4273-bfe4-11d352282a41 (patrick.picard_sourcedgroup.com#EXT#@sourcedssc002engineering01.onmicrosoft.com)
Initializing state with user: patrick.picard_sourcedgroup.com#EXT#@sourcedssc002engineering01.onmicrosoft.com
Upgrading modules...
Downloading aztfmod/caf-enterprise-scale/azurerm 0.3.8 for launchpad...
- launchpad in /home/vscode/.terraform.cache/modules/launchpad
- launchpad.aks_clusters in /home/vscode/.terraform.cache/modules/launchpad/modules/compute/aks
- launchpad.app_service_environments in /home/vscode/.terraform.cache/modules/launchpad/modules/webapps/ase
- launchpad.app_service_environments.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.app_service_plans in /home/vscode/.terraform.cache/modules/launchpad/modules/webapps/asp
- launchpad.app_services in /home/vscode/.terraform.cache/modules/launchpad/modules/webapps/appservice
- launchpad.application_gateways in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/application_gateway
- launchpad.application_gateways.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.automations in /home/vscode/.terraform.cache/modules/launchpad/modules/automation
- launchpad.automations.diagnostics_automation in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.azuread_applications in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/applications
- launchpad.azuread_groups in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups
- launchpad.azuread_groups_members in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups_members
- launchpad.azuread_groups_members.group_keys in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups_members/member
- launchpad.azuread_groups_members.group_name in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups_members/member
- launchpad.azuread_groups_members.object_id in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups_members/member
- launchpad.azuread_groups_members.service_principals in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups_members/member
- launchpad.azuread_groups_members.user_principal_names in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/groups_members/member
- launchpad.azuread_roles_applications in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/roles
- launchpad.azuread_roles_msi in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/roles
- launchpad.azuread_users in /home/vscode/.terraform.cache/modules/launchpad/modules/azuread/users
- launchpad.azurerm_application_insights in /home/vscode/.terraform.cache/modules/launchpad/modules/app_insights
- launchpad.azurerm_firewall_application_rule_collections in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/firewall_application_rule_collections
- launchpad.azurerm_firewall_nat_rule_collections in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/firewall_nat_rule_collections
- launchpad.azurerm_firewall_network_rule_collections in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/firewall_network_rule_collections
- launchpad.azurerm_firewalls in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/firewall
- launchpad.azurerm_firewalls.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.bastion_host_diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.container_registry in /home/vscode/.terraform.cache/modules/launchpad/modules/compute/container_registry
- launchpad.container_registry.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.container_registry.private_endpoint in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/private_endpoint
- launchpad.custom_roles in /home/vscode/.terraform.cache/modules/launchpad/modules/roles/custom_roles
- launchpad.databricks_workspaces in /home/vscode/.terraform.cache/modules/launchpad/modules/analytics/databricks_workspace
- launchpad.diagnostic_storage_accounts in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account
- launchpad.diagnostic_storage_accounts.container in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/container
- launchpad.diagnostic_storage_accounts.container.blob in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/blob
- launchpad.diagnostic_storage_accounts.data_lake_filesystem in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/data_lake_filesystem
- launchpad.diagnostic_storage_accounts.private_endpoint in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/private_endpoint
- launchpad.keyvault_access_policies in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies
- launchpad.keyvault_access_policies.azuread_apps in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies.azuread_group in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies.logged_in_aad_app in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies.logged_in_user in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies.managed_identity in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies.object_id in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies_azuread_apps in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies
- launchpad.keyvault_access_policies_azuread_apps.azuread_apps in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies_azuread_apps.azuread_group in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies_azuread_apps.logged_in_aad_app in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies_azuread_apps.logged_in_user in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies_azuread_apps.managed_identity in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvault_access_policies_azuread_apps.object_id in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvaults in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault
- launchpad.keyvaults.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.keyvaults.initial_policy in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies
- launchpad.keyvaults.initial_policy.azuread_apps in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvaults.initial_policy.azuread_group in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvaults.initial_policy.logged_in_aad_app in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvaults.initial_policy.logged_in_user in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvaults.initial_policy.managed_identity in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.keyvaults.initial_policy.object_id in /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault_access_policies/access_policy
- launchpad.log_analytics in /home/vscode/.terraform.cache/modules/launchpad/modules/log_analytics
- launchpad.log_analytics_diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.machine_learning_workspaces in /home/vscode/.terraform.cache/modules/launchpad/modules/analytics/azure_machine_learning
- launchpad.managed_identities in /home/vscode/.terraform.cache/modules/launchpad/modules/security/managed_identity
- launchpad.mssql_databases in /home/vscode/.terraform.cache/modules/launchpad/modules/databases/mssql_database
- launchpad.mssql_servers in /home/vscode/.terraform.cache/modules/launchpad/modules/databases/mssql_server
- launchpad.mssql_servers.private_endpoint in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/private_endpoint
- launchpad.networking in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/virtual_network
- launchpad.networking.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.networking.nsg in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/virtual_network/nsg
- launchpad.networking.nsg.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.networking.special_subnets in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/virtual_network/subnet
- launchpad.networking.subnets in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/virtual_network/subnet
- launchpad.private_dns in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/private-dns
- launchpad.public_ip_addresses in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/public_ip_addresses
- launchpad.public_ip_addresses.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.recovery_vaults in /home/vscode/.terraform.cache/modules/launchpad/modules/recovery_vault
- launchpad.recovery_vaults.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.redis_caches in /home/vscode/.terraform.cache/modules/launchpad/modules/redis_cache
- launchpad.resource_groups in /home/vscode/.terraform.cache/modules/launchpad/modules/resource_group
- launchpad.route_tables in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/route_tables
- launchpad.routes in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/routes
- launchpad.storage_accounts in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account
- launchpad.storage_accounts.container in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/container
- launchpad.storage_accounts.container.blob in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/blob
- launchpad.storage_accounts.data_lake_filesystem in /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/data_lake_filesystem
- launchpad.storage_accounts.private_endpoint in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/private_endpoint
- launchpad.subscriptions in /home/vscode/.terraform.cache/modules/launchpad/modules/subscriptions
- launchpad.subscriptions.diagnostics in /home/vscode/.terraform.cache/modules/launchpad/modules/diagnostics
- launchpad.synapse_workspaces in /home/vscode/.terraform.cache/modules/launchpad/modules/analytics/synapse
- launchpad.virtual_machines in /home/vscode/.terraform.cache/modules/launchpad/modules/compute/virtual_machine
- launchpad.virtual_wans in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/virtual_wan
- launchpad.virtual_wans.hubs in /home/vscode/.terraform.cache/modules/launchpad/modules/networking/virtual_wan/virtual_hub

Initializing the backend...

Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding hashicorp/azuread versions matching "~> 1.0.0"...
- Finding hashicorp/random versions matching "~> 2.2.1"...
- Finding hashicorp/external versions matching "~> 1.2.0"...
- Finding hashicorp/null versions matching "~> 2.1.0"...
- Finding hashicorp/tls versions matching "~> 2.2.0"...
- Finding aztfmod/azurecaf versions matching "~> 1.1.0"...
- Finding hashicorp/azurerm versions matching "~> 2.28.0"...
- Using hashicorp/azuread v1.0.0 from the shared cache directory
- Using hashicorp/random v2.2.1 from the shared cache directory
- Using hashicorp/external v1.2.0 from the shared cache directory
- Using hashicorp/null v2.1.2 from the shared cache directory
- Using hashicorp/tls v2.2.0 from the shared cache directory
- Using aztfmod/azurecaf v1.1.1 from the shared cache directory
- Using hashicorp/azurerm v2.28.0 from the shared cache directory

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Line 198 - Terraform init return code 0
calling plan
@calling plan
running terraform plan with -var-file /tf/caf/landingzones/caf_launchpad/scenario/300/configuration.tfvars -parallelism=30
 -TF_VAR_workspace: tfstate
 -state: /home/vscode/.terraform.cache/tfstates/level0/tfstate/caf_launchpad.tfstate
 -plan:  /home/vscode/.terraform.cache/tfstates/level0/tfstate/caf_launchpad.tfplan
/tf/caf/landingzones/caf_launchpad
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.azurerm_client_config.current: Refreshing state...
module.launchpad.data.azurerm_client_config.current: Refreshing state...
module.launchpad.data.azurerm_subscription.primary: Refreshing state...

Warning: Value for undeclared variable

The root module does not declare a variable named "azuread_app_roles" but a
value was found in file
"/tf/caf/landingzones/caf_launchpad/scenario/300/configuration.tfvars". To use
this value, add a "variable" block to the configuration.

Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.
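Following the warning's own suggestion, one way to silence it is to declare the variable in the root module. A minimal sketch (the variable type and default below are assumptions, not the scenario's actual schema):

```hcl
# Hypothetical declaration to silence the "undeclared variable" warning.
# The real shape of azuread_app_roles in the scenario may differ.
variable "azuread_app_roles" {
  description = "App roles consumed by the launchpad scenario (assumed shape)"
  default     = {}
}
```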

Terraform plan return code: 0
Terraform returned errors:

Error: Unsupported attribute

  on /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/private_endpoint.tf line 20, in data "terraform_remote_state" "vnets":
  20:     storage_account_name = var.tfstates[each.value.remote_tfstate.tfstate_key].storage_account_name
    |----------------
    | each.value is object with 5 attributes

This object does not have an attribute named "remote_tfstate".


Error: Unsupported attribute

  on /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/private_endpoint.tf line 21, in data "terraform_remote_state" "vnets":
  21:     container_name       = var.tfstates[each.value.remote_tfstate.tfstate_key].container_name
    |----------------
    | each.value is object with 5 attributes

This object does not have an attribute named "remote_tfstate".


Error: Unsupported attribute

  on /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/private_endpoint.tf line 22, in data "terraform_remote_state" "vnets":
  22:     resource_group_name  = var.tfstates[each.value.remote_tfstate.tfstate_key].resource_group_name
    |----------------
    | each.value is object with 5 attributes

This object does not have an attribute named "remote_tfstate".


Error: Unsupported attribute

  on /home/vscode/.terraform.cache/modules/launchpad/modules/storage_account/private_endpoint.tf line 23, in data "terraform_remote_state" "vnets":
  23:     key                  = var.tfstates[each.value.remote_tfstate.tfstate_key].key
    |----------------
    | each.value is object with 5 attributes

This object does not have an attribute named "remote_tfstate".

Error on or near line 450: Error running terraform plan; exiting with status 2000

@calling clean_up_variables
cleanup variables
clean_up backend_files


Setting "enable_bastion = false" not working

With "enable_bastion = false" set, the "hub_network" module still creates a bastion host.

In "caf-terraform-landingzones/landingzones/landingzone_hub_spoke/hub_network/bastion/module.tf", the input variable "enable_bastion" is ignored.
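A common pattern for honoring such a flag is to gate the resource on count. A sketch only, not the module's actual code (the variable names are assumptions):

```hcl
# Sketch: skip the bastion host entirely when the flag is false.
resource "azurerm_bastion_host" "bastion" {
  count               = var.enable_bastion ? 1 : 0
  name                = var.bastion_name        # assumed variable
  location            = var.location            # assumed variable
  resource_group_name = var.resource_group_name # assumed variable

  ip_configuration {
    name                 = "configuration"
    subnet_id            = var.subnet_id            # assumed variable
    public_ip_address_id = var.public_ip_address_id # assumed variable
  }
}
```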

[Discussion/Thought] Multi-tenant / cross-org / partner deployments

Hi all!

Thanks for this amazing contribution, this makes it super clear on how to manage large projects as well as to define what should be in place at which levels to make it manageable for the different people / groups working on a project within an organization.

From a partner point of view, I am currently working on a "Service Offering" catalog, where I am looking to have some kind of "lego" blocks that can be taken and shaped together for deployment in a customer's subscription (think a button that a customer can click and it automatically deploys). Focusing on the following points:

  • Reusability
  • Avoiding ecosystem lock-in (while keeping the same structure - see "levels hierarchy")
  • Automation

Now, while working with this project, it seems that certain things are made more complex than they should be (personal opinion), which is why I opened this issue. Would it be possible to share your opinion on the following?

  • How to create "service offerings" from the common blocks?
  • How to manage multiple tenants?
  • Why Rover was created?
  • Why the Azure CAF Provider was created?

Kind Regards,
Xavier Geerinck

Workspace file seems to be missing

Hi,

In order to start the workspace in a remote container, we needed the workspace file that was present in the last version. It seems to be missing now. Once the current version was cloned, I copied the old workspace file over.

This is part of the steps involved in "Getting Started".

[bug] restrict_supported_svc = true returns error

Describe the bug
I am trying out the CAF and enabling settings incrementally. When I set restrict_supported_svc = true, the policy deployment fails.
Config file: landingzones/landingzone_caf_foundations/blueprint_foundations.sandpit.auto.tfvars

To Reproduce
Steps to reproduce the behavior:
Set restrict_supported_svc = true, then run plan or apply.


    restrict_supported_svc = true
    list_of_supported_svc  = ["Microsoft.Network/publicIPAddresses", "Microsoft.Compute/disks"]
    msi_location           = "canadacentral"
  }
}

/*
When I enabled restrict_supported_svc:
Terraform returned errors:

Error: Invalid template interpolation value

  on blueprint_foundations_governance/policies/builtin/allowed_resource_type.tf line 16, in resource "azurerm_policy_assignment" "res_type":
  12: 
  13: 
  14: 
  15: 
  16:                 "value" : "${var.policies_matrix.list_of_supported_svc}"
  17: 
  18: 
  19: 
  20: 
    |----------------
    | var.policies_matrix.list_of_supported_svc is tuple with 2 elements

Cannot include the given value in a string template: string required.
*/

Configuration (please complete the following information):

  • Version of the rover: aztfmod/rover:2007.0108


mode                          : 'rover'
tf_action                     : 'plan'
tf_command                    : ''
landingzone                   : '/tf/caf/landingzones/landingzone_caf_foundations'
terraform command output file : '' 
level                         : 'level0'
environment                   : 'sandpit'
tfstate                       : 'landingzone_caf_foundations.tfstate'

Additional context

The fix is to replace landingzones/landingzone_caf_foundations/blueprint_foundations_governance/policies/builtin/allowed_resource_type.tf with:

#Definition ID: /providers/Microsoft.Authorization/policyDefinitions/a08ec900-254a-4555-9bf5-e42af04b5c5c
#Name: Allowed resource types

locals {
  supported_svc = jsonencode(var.policies_matrix.list_of_supported_svc)
}

resource "azurerm_policy_assignment" "res_type" {
  count                = var.policies_matrix.restrict_supported_svc ? 1 : 0
  name                 = "res_svc"
  scope                = var.scope
  policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/a08ec900-254a-4555-9bf5-e42af04b5c5c"
  description          = "Policy Assignment with Terraform"
  display_name         = "TF Restrict Deployment of specified Azure Resources"

  parameters = <<PARAMETERS
    {
      "listOfResourceTypesAllowed": {
        "value": ${local.supported_svc}
      }
    }
PARAMETERS
}
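The key change is jsonencode, which renders the tuple as a JSON array instead of attempting string interpolation on a non-string value. An illustrative fragment (not part of the fix itself):

```hcl
# jsonencode turns the HCL tuple into a JSON array string, which is what the
# policy's listOfResourceTypesAllowed parameter expects inside the heredoc:
locals {
  example = jsonencode(["Microsoft.Network/publicIPAddresses", "Microsoft.Compute/disks"])
  # renders as: ["Microsoft.Network/publicIPAddresses","Microsoft.Compute/disks"]
}
```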

[feature] Support for Terraform 0.13

Proposal

Add support for Terraform 0.13 in two waves:

  1. Compatible: able to run on Terraform 0.13, possibly also on Terraform 0.12.
  2. Optimized: leverage Terraform 0.13 features for landing zones, such as count, for_each and depends_on on modules.
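For illustration, the module-level iteration that Terraform 0.13 enables might look like the following sketch (the module path and variable are hypothetical):

```hcl
# Terraform 0.13 allows count, for_each and depends_on on module blocks:
module "landingzone" {
  source   = "./modules/landingzone" # hypothetical path
  for_each = var.landingzones        # assumed map of per-zone settings

  settings = each.value
}
```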

Landing zones

  • landingzone_vdc_demo
  • landingzone_hub_spoke
  • landingzone_caf_foundations
  • landingzone_secure_vnet_dmz
  • landingzone_hub_mesh

Dependencies

Storage Blob Data Contributor is missing on the service principal created in Launchpad.

Describe the bug
During launchpad execution, we create a service principal that is then used in subsequent landing zones. Currently it is missing the Storage Blob Data Contributor role scoped to the tfstates resource group. If we deploy a level 1 landing zone and then try to destroy it, it complains about the missing role, as shown below.

To Reproduce
rover -lz /tf/caf/landingzones/landingzone_caf_foundations -a destroy

Expected behavior
No error

Screenshots
(screenshot attached to the original issue)

Configuration

Version of the rover: aztfmod/rover:2009.0210
Version of the landing zone : v8.0.2001

Possible Solution
The solution would be to add this role to the service principal in the launchpad, as is done for the current user in the storage.tf file in the launchpad folder.

A sample resource block could look like this:

resource "azurerm_role_assignment" "ad_container_role" {
  for_each             = var.blobrole
  scope                = azurerm_resource_group.rg["tfstate"].id
  role_definition_name = each.value.role_definition_name
  principal_id         = module.azure_applications.aad_apps[each.value.aad_app_key].azuread_service_principal.object_id
}

And the auto.tfvars configuration could look like:

blobrole = {
  roles = {
    role_definition_name = "Storage Blob Data Contributor"
    aad_app_key          = "caf_launchpad_level0"
  }
}

Note: if you agree with this bug and the proposed solution, I could submit the code in a pull request.
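The map shape implied by that configuration could be declared as follows (a sketch; the attribute names are taken from the snippet above, the type is an assumption):

```hcl
# Hypothetical variable declaration matching the proposed blobrole map:
variable "blobrole" {
  type = map(object({
    role_definition_name = string
    aad_app_key          = string
  }))
  default = {}
}
```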

[feature] DevOps/GitHub Actions Release Pipeline Guidance

Is your feature request related to a problem? Please describe.

There are sporadic mentions of the benefits of the CAF rover in DevOps, but there is no guidance on configuring it in Azure DevOps or in GitHub via Actions.

Describe the solution you'd like

I'm more than happy to contribute to this effort, as I believe this project has the potential to dramatically improve our ability to produce, release, and maintain standards-based IaC. Perhaps I can be involved in the capacity of a guinea pig, applying the instructions and guidance in a DevOps implementation to validate the concept. I can create accompanying documentation for other developers/implementers of the project. I'll gladly produce the technical documentation for publication here in return for assistance with the DevOps pipelines.

Describe alternatives you've considered

I've considered "hacking it together" and getting it to run in my DevOps environment. I think going through the process and documenting the approach for the general public is a better, more valuable option.


Docker Volume UI asks to share full drive

Describe the bug
Unable to create the CAF environment by following the documentation at the link below:
https://github.com/Azure/caf-terraform-landingzones/blob/master/documentation/getting_started/getting_started.md
I was able to test remote container execution by creating the sample vscode-remote-try-node, which means the setup (Docker, the VS Code extension, etc.) is fine.


Expected behavior
When running Remote-Container: Reopen in Container, I get the error message shown in the screenshots below.

Screenshots
(error screenshots attached to the original issue)

Configuration (please complete the following information):

  • OS and version: Windows 10 1803
  • Version of the rover: aztfmod/rover:2009.0210

Additional context
Docker > Settings > Resources > File Sharing is configured (I included the entire D: drive and also the folder where the source code is kept).

[bug] AKV10032: Invalid issuer

Describe the bug
After successfully deploying the launchpad and foundations, the rover is not able to 'login_as_launchpad' when running on GitHub Actions.

To Reproduce
It occurs on GitHub Actions; when running locally with the same service principal it completes normally.
It is an intermittent fault and can occur at any stage after the first launchpad and foundations are deployed.

Expected behavior
Not throwing an AKV10032: Invalid issuer error and continuing.

Screenshots

  /$$$$$$   /$$$$$$  /$$$$$$$$       /$$$$$$$                                        
 /$$__  $$ /$$__  $$| $$_____/      | $$__  $$                                       
| $$  \__/| $$  \ $$| $$            | $$  \ $$  /$$$$$$  /$$    /$$/$$$$$$   /$$$$$$ 
| $$      | $$$$$$$$| $$$$$         | $$$$$$$/ /$$__  $$|  $$  /$$/$$__  $$ /$$__  $$
| $$      | $$__  $$| $$__/         | $$__  $$| $$  \ $$ \  $$/$$/ $$$$$$$$| $$  \__/
| $$    $$| $$  | $$| $$            | $$  \ $$| $$  | $$  \  $$$/| $$_____/| $$      
|  $$$$$$/| $$  | $$| $$            | $$  | $$|  $$$$$$/   \  $/ |  $$$$$$$| $$      
 \______/ |__/  |__/|__/            |__/  |__/ \______/     \_/   \_______/|__/      
                                                                                     
                                                                                                                                                           
              version: aztfmod/rover:2010.2808

 Expanding variable files: /__w/cloud-management/cloud-management/landingzones/caf_networking/scenario/100-single-region-hub/*.tfvars

mode                          : 'landingzone'
terraform command output file : ''
tf_action                     : 'apply'
command and parameters        : '-var-file /__w/cloud-management/cloud-management/landingzones/caf_networking/scenario/100-single-region-hub/configuration.tfvars -var-file /__w/cloud-management/cloud-management/landingzones/caf_networking/scenario/100-single-region-hub/network_security_group_definition.tfvars -parallelism=30'
level (current)               : 'level2'
environment                   : '343564964'
workspace                     : 'tfstate'
tfstate                       : '100-single-region-hub.tfstate'

@calling process_actions
@calling verify_azure_session
Checking existing Azure session
@calling verify_parameters
landingzone                   : '/__w/cloud-management/cloud-management/landingzones/caf_networking'
@calling_deploy
@calling get_storage_id

launchpad already installed

@calling deploy_from_remote_state
Connecting to the launchpad
@calling_get_logged_user_object_id
 Logged in rover app object_id: 01234567-1234-1234-1234-1234567890
 Logged in rover app object_id: 01234567-1234-1234-1234-1234567890
 - logged in Azure AD application:  GitHub-Actions-Non-Prod
@calling login_as_launchpad
 - keyvault_name: null

Getting launchpad coordinates:
AKV10032: Invalid issuer. Expected one of https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/, https://sts.windows.net/f8cdef31-a31e-4b4a-93e4-5f571e91255a/, https://sts.windows.net/e2d54eb5-3869-4f70-8578-dee5fc7331f4/, https://sts.windows.net/33e01921-4d64-4f8c-a055-5bdaffd5e33d/, https://sts.windows.net/975f013f-7f24-47e8-a7d3-abc4752bf346/, found https://sts.windows.net/***/.
 - subscription id: 
Error on or near line 326: Not authorized to manage landingzones. User must be member of the security group to access the launchpad and deploy a landing zone; exiting with status 102

Configuration (please complete the following information):

  • GitHub Actions
  • rover aztfmod/rover:2010.2808

Additional context
I created an issue for the rover too.

[bug] GitHub Codespaces not working

Describe the bug
When trying to use Visual Studio Codespaces or GitHub Codespaces, I am unable to spin up the rover and the development environment.

 COMMAND: docker exec -u root /codespaces-compose_rover_1 bash -c "git config --system --add credential.helper '/.codespaces/bin/codespaces gitCredential $*' && git config --system user.name 'me' && git config --system user.email '[email protected]' && git -C '/home/vscode/workspace/caf-terraform-landingzones-starter' config --local gpg.program '/.codespaces/bin/gh-gpgsign'"Started: 2020-11-30T08:26:06
usage: git [--version] [--help] [-c name=value]
Unknown option: -C
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]
           [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
docker process exited with exit code 129

How to programmatically obtain Level0 statefile information

Hello everyone. This is amazing work you have been doing. I am building a set of blueprints leveraging your great work, and I am stumped by something I have not been able to figure out yet. I will be referring to the landing zone repo found here:

https://github.com/bernardmaltais/eslz-template

In my L1_blueprint_base I would like to automate the addition of users to the AAD Security Group created by the launcher as part of L0_blueprint_launchpad. For this I need to get the proper resource group id created.

How can I do this? I have started to implement some test code as part of the L1_subscription_base/code/5-L1_blueprint_base_members.tf file. Maybe you can point me to some documentation on how to do this? There is more info in the comment section at the top of the code.
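A pattern that may help here, assuming the level0 state is stored in an azurerm backend, is the terraform_remote_state data source; the variable names below are assumptions, not the repo's actual interface:

```hcl
# Sketch: read outputs exposed by the level0 launchpad state.
data "terraform_remote_state" "level0_launchpad" {
  backend = "azurerm"

  config = {
    resource_group_name  = var.lowerlevel_resource_group_name  # assumed variable
    storage_account_name = var.lowerlevel_storage_account_name # assumed variable
    container_name       = var.lowerlevel_container_name       # assumed variable
    key                  = var.lowerlevel_key                  # assumed variable
  }
}

# Any output of the level0 configuration is then available as, e.g.:
# data.terraform_remote_state.level0_launchpad.outputs.<output_name>
```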

Regards,

Bernard

[feature] Support for multi-subscriptions deployment model

Coming up with an onboarding model to manage multiple subscriptions and the related security considerations for the various landing zones models.

Minimum scenarios to cover:

  • add an existing subscription to the reference model
  • onboard management groups and policies.
  • select the right subscription for a particular deployment - linked to aztfmod/rover#37
  • [optional] provisioning from EA, onboarding from other channels.

[bug] How to manage state after a timed out deployment

Describe the bug
I have been experimenting with deploying the CAF foundations and modifying some of the tfvars. I enabled the security center option and ran apply.
The apply timed out. When I do a re-apply, it says the resource is there and needs to be imported.

The rover command does not expose the import command. Also, the resource address is so deeply nested that it becomes difficult to determine how to import the resource into the state.
Please advise.
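One possible workaround, assuming the state file lives in rover's local cache directory, is to call terraform import directly from the landing zone folder. A sketch only; the resource address and subscription id below are illustrative, not prescriptive:

```bash
# Sketch of a manual workaround (rover does not expose import):
cd /tf/caf/landingzones/landingzone_caf_foundations

terraform import \
  -state=/home/vscode/.terraform.cache/tfstates/sandpit/landingzone_caf_foundations.tfstate \
  'module.blueprint_foundations_security.module.security_center.azurerm_security_center_contact.contact[0]' \
  "/subscriptions/<subscription_id>/providers/Microsoft.Security/securityContacts/default1"
```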



Configuration (please complete the following information):

  • OS and version: Windows 10 2004
cleanup variables
[vscode@92a0aca62a3e caf]$ rover /tf/caf/landingzones/landingzone_caf_foundations import 

  /$$$$$$   /$$$$$$  /$$$$$$$$       /$$$$$$$                                        
 /$$__  $$ /$$__  $$| $$_____/      | $$__  $$                                       
| $$  \__/| $$  \ $$| $$            | $$  \ $$  /$$$$$$  /$$    /$$/$$$$$$   /$$$$$$ 
| $$      | $$$$$$$$| $$$$$         | $$$$$$$/ /$$__  $$|  $$  /$$/$$__  $$ /$$__  $$
| $$      | $$__  $$| $$__/         | $$__  $$| $$  \ $$ \  $$/$$/ $$$$$$$$| $$  \__/
| $$    $$| $$  | $$| $$            | $$  \ $$| $$  | $$  \  $$$/| $$_____/| $$      
|  $$$$$$/| $$  | $$| $$            | $$  | $$|  $$$$$$/   \  $/ |  $$$$$$$| $$      
 \______/ |__/  |__/|__/            |__/  |__/ \______/     \_/   \_______/|__/      
                                                                                     
                                                                                                                                                           
              version: aztfmod/rover:2007.0108


mode                          : 'rover'
tf_action                     : 'import'
tf_command                    : ''
landingzone                   : '/tf/caf/landingzones/landingzone_caf_foundations'
terraform command output file : '' 
level                         : 'level0'
environment                   : 'sandpit'
tfstate                       : 'landingzone_caf_foundations.tfstate'

Additional context

Terraform init return code 0
calling plan and apply
@calling plan
running terraform plan with 
 -TF_VAR_workspace: sandpit
 -state: /home/vscode/.terraform.cache/tfstates/sandpit/landingzone_caf_foundations.tfstate
 -plan:  /home/vscode/.terraform.cache/tfstates/sandpit/landingzone_caf_foundations.tfplan
/tf/caf/landingzones/landingzone_caf_foundations
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.level0_launchpad: Refreshing state...
module.blueprint_foundations_accounting.azurecaf_naming_convention.rg_operations_name: Refreshing state... [id=aCis5FrLOyMcBlla]
module.blueprint_foundations_accounting.module.activity_logs.azurecaf_naming_convention.caf_name_evh: Refreshing state... [id=eRBziXs7vrFvQiaG]
module.blueprint_foundations_accounting.azurecaf_naming_convention.rg_coresec_name: Refreshing state... [id=jbHHD6wOLbfN7ljU]
module.blueprint_foundations_accounting.module.diagnostics_logging.azurecaf_naming_convention.caf_name_evh: Refreshing state... [id=wsGS3NOiVpQuPwlV]
module.blueprint_foundations_accounting.module.log_analytics.azurecaf_naming_convention.caf_name_la: Refreshing state... [id=y0vLmFJd2wFSha3U]
module.blueprint_foundations_accounting.module.diagnostics_logging.azurecaf_naming_convention.caf_name_st: Refreshing state... [id=oHU8tIppqq0RqVrw]
module.blueprint_foundations_accounting.module.activity_logs.azurecaf_naming_convention.caf_name_st: Refreshing state... [id=rBGaorhrTBWD51LD]
module.blueprint_foundations_governance.data.azurerm_client_config.current: Refreshing state...
module.blueprint_foundations_accounting.azurerm_resource_group.rg_coresec: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourceGroups/wkat-rg-hub-core-sec]
module.blueprint_foundations_security.module.security_center.azurerm_security_center_contact.contact[0]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/providers/Microsoft.Security/securityContacts/default1]
module.blueprint_foundations_security.module.security_center.azurerm_security_center_subscription_pricing.sc[0]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/providers/Microsoft.Security/pricings/default]
module.blueprint_foundations_accounting.data.azurerm_client_config.current: Refreshing state...
module.blueprint_foundations_security.data.azurerm_client_config.current: Refreshing state...
module.blueprint_foundations_accounting.azurerm_resource_group.rg_operations: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourceGroups/wkat-rg-hub-operations]
module.blueprint_foundations_governance.data.azurerm_subscription.current: Refreshing state...
module.blueprint_foundations_governance.module.management_groups.data.azurerm_client_config.current: Refreshing state...
module.blueprint_foundations_accounting.module.activity_logs.data.azurerm_subscription.current: Refreshing state...
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_workspace.log_analytics: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/microsoft.operationalinsights/workspaces/wkat-la-caflalogs]
module.blueprint_foundations_accounting.module.diagnostics_logging.azurerm_storage_account.log: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourceGroups/wkat-rg-hub-operations/providers/Microsoft.Storage/storageAccounts/wkatstdiaglogs]
module.blueprint_foundations_accounting.module.activity_logs.azurerm_storage_account.log: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourceGroups/wkat-rg-hub-core-sec/providers/Microsoft.Storage/storageAccounts/wkatstactlogs]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.pol_managed_disks_assignment[0]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/providers/Microsoft.Authorization/policyAssignments/vm_no_managed_disks]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["ContainerInsights"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/ContainerInsights(wkat-la-caflalogs)]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["AgentHealthAssessment"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/AgentHealthAssessment(wkat-la-caflalogs)]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["ADReplication"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/ADReplication(wkat-la-caflalogs)]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["ADAssessment"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/ADAssessment(wkat-la-caflalogs)]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["KeyVaultAnalytics"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/KeyVaultAnalytics(wkat-la-caflalogs)]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["DnsAnalytics"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/DnsAnalytics(wkat-la-caflalogs)]
module.blueprint_foundations_accounting.module.log_analytics.azurerm_log_analytics_solution.la_solution["NetworkMonitoring"]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/NetworkMonitoring(wkat-la-caflalogs)]
module.blueprint_foundations_security.module.sentinel.azurerm_log_analytics_solution.sentinel[0]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/Microsoft.OperationsManagement/solutions/SecurityInsights(wkat-la-caflalogs)]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.vm_auto_monitor[0]: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/providers/Microsoft.Authorization/policyAssignments/vm_auto_monitor]
module.blueprint_foundations_accounting.module.activity_logs.azurerm_monitor_diagnostic_setting.audit: Refreshing state... [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257|actlogs]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0] will be created
  + resource "azurerm_policy_assignment" "res_location" {
      + description          = "Policy Assignment with Terraform"
      + display_name         = "TF Restrict Deployment of Azure Resources in specific location"
      + enforcement_mode     = true
      + id                   = (known after apply)
      + name                 = "res_location"
      + parameters           = jsonencode(
            {
              + listOfAllowedLocations = {
                  + value = [
                      + "canadacentral",
                      + "canadaeast",
                    ]
                }
            }
        )
      + policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c"
      + scope                = "/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257"

      + identity {
          + principal_id = (known after apply)
          + tenant_id    = (known after apply)
          + type         = (known after apply)
        }
    }

  # module.blueprint_foundations_security.module.security_center.azurerm_security_center_workspace.sc[0] will be created
  + resource "azurerm_security_center_workspace" "sc" {
      + id           = (known after apply)
      + scope        = "/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257"
      + workspace_id = "/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/resourcegroups/wkat-rg-hub-operations/providers/microsoft.operationalinsights/workspaces/wkat-la-caflalogs"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: /home/vscode/.terraform.cache/tfstates/sandpit/landingzone_caf_foundations.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "/home/vscode/.terraform.cache/tfstates/sandpit/landingzone_caf_foundations.tfplan"

Terraform plan return code: 0
@calling apply
running terraform apply
Acquiring state lock. This may take a few moments...
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Creating...
module.blueprint_foundations_security.module.security_center.azurerm_security_center_workspace.sc[0]: Creating...
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [10s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [20s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [30s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [40s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [50s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [1m0s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [1m10s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [1m20s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Still creating... [1m30s elapsed]
module.blueprint_foundations_governance.module.builtin_policies.azurerm_policy_assignment.res_location[0]: Creation complete after 1m31s [id=/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/providers/Microsoft.Authorization/policyAssignments/res_location]
Terraform apply return code: 0
Terraform returned errors:

Error: A resource with the ID "/subscriptions/xxxxxxxxx-7c3c-446d-8015-9ae244c26257/providers/Microsoft.Security/workspaceSettings/default" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_security_center_workspace" for more information.

  on /home/vscode/.terraform.cache/modules/blueprint_foundations_security.security_center/terraform-azurerm-caf-security-center-1.0/module.tf line 15, in resource "azurerm_security_center_workspace" "sc":
  15: resource "azurerm_security_center_workspace" "sc" {
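The error above indicates the pre-existing Security Center workspace setting must be imported into the state before Terraform can manage it. A hedged sketch of the import command (the subscription ID is redacted in the log, so substitute your own; rover wraps terraform, so you may need to run this from within the rover environment):

```shell
# Import the existing workspace setting into the Terraform state (sketch;
# replace <subscription-id> with your real subscription ID).
terraform import \
  'module.blueprint_foundations_security.module.security_center.azurerm_security_center_workspace.sc[0]' \
  '/subscriptions/<subscription-id>/providers/Microsoft.Security/workspaceSettings/default'
```

After the import, re-running the plan should show no create action for that resource.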

[question] which branch is recommended?

I'm simply curious which branch is recommended for use by the general public. In many projects, master is not considered a production state, and one should rely on tags/releases instead. I also noticed some of the bug issues are tied to version-specific branches like 0.4.

What is the recommended model at this time?

[feature] New landing_zone for networking using hub spoke with Azure Virtual WAN

Proposal

Create a landing zone that deploys a regional hub-and-spoke model using Azure Virtual WAN, following the architecture described here: https://docs.microsoft.com/en-us/azure/virtual-wan/virtual-wan-about.

This landing zone should make it easy to create a Virtual WAN (Standard SKU) environment, and should provide a flexible structure to onboard new hubs iteratively, each with its associated features:

Pseudo code

Pseudo code for variable structure:

virtual_hub_config = {
  virtual_wan = {
    resource_group_name = "virtualwan-ns"
    name                = "ContosovWAN"
    dns_name            = "private.contoso.com"

    hubs = {
      hub1 = {
        hub_name           = "SEA-HUB"
        region             = "southeastasia"
        hub_address_prefix = "10.0.3.0/24"
        deploy_firewall    = true
        firewall_name      = "azfwvhub-sea"
        peerings = {
          spoke1 = {
            hub_to_virtual_network_traffic_allowed          = true
            virtual_network_to_hub_gateways_traffic_allowed = true
            internet_security_enabled                       = false
          }
        }

        deploy_p2s = true
        p2s_config = {} # config details
        deploy_s2s = false
        s2s_config = {} # config details
        deploy_er  = false
        er_config  = {} # config details
      }
    }
  }
}

Provisioning a new hub is done by declaring another hub structure in the hubs object.
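A second hub would then be just another entry in the hubs map; a sketch (the name, region, and CIDR below are illustrative, not from the proposal):

```hcl
hub2 = {
  hub_name           = "WE-HUB"        # illustrative
  region             = "westeurope"
  hub_address_prefix = "10.0.4.0/24"   # illustrative
  deploy_firewall    = false
}
```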

Storage Container creation failing in launchpad

Describe the bug
The level0 container fails to create in the launchpad landing zone. In fact, other containers also fail to create in the launchpad.

To Reproduce
rover -lz /tf/caf/landingzones/launchpad/ -a apply -launchpad -env Dev

Expected behavior
Successful apply and upload tfstate to storage container

Screenshots

Configuration (please complete the following information):
Version of the rover: aztfmod/rover:2009.0210
Version of the landing zone : v8.0.2001

Additional context
When I try to create this container with Terraform outside of this landing zone environment, it works fine. That's why I feel there is something wrong in the landing zone ecosystem.


Failed at setting up launchpad

@LaurentLesle and @arnaudlh

I am deploying a landing zone and it failed while setting up the launchpad after running:

/tf/launchpads/launchpad_opensource_light apply -var 'location=australiaeast'

Error message was:

Error: graphrbac.ApplicationsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Unknown" Message="Unknown service error" Details=[{"odata.error":{"code":"Authorization_RequestDenied","date":"2020-02-20T22:35:48","message":{"lang":"en","value":"Insufficient privileges to complete the operation."},"requestId":"972d398a-c415-4069-b680-62efba4f7c74"}}] on identity.tf line 4, in resource "azuread_application" "launchpad": 4: resource "azuread_application" "launchpad" {

rover login was OK and I am the owner of the subscription. Is there any Graph RBAC permission that needs to be enabled?

[feature] New landing_zone for Shared Image Gallery

Proposal

Create a landing zone that allows image gallery creation for Windows and Linux using Packer.
This landing zone would use a compute resource to run Packer (such as an Azure VM, Azure Container Instance, etc.) and would use the underlying landing zone services, such as networking and pipeline technologies, to automate the image creation runs.

Support:

Pseudo code

TBD.

[bug] specialsubnets is required

Describe the bug
specialsubnets has to be specified even if empty, which is confusing when starting from scratch.

To Reproduce
Steps to reproduce the behavior:
define a vnet without specialsubnets
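Until the attribute is made optional, a workaround is to declare the block as an empty map. A sketch based on the vnet structure used in the networking examples (keys and CIDRs are illustrative):

```hcl
vnets = {
  hub = {
    resource_group_key = "vnet_rg"   # illustrative
    vnet = {
      name          = "hub"
      address_space = ["10.10.0.0/24"]
    }
    specialsubnets = {}              # must be present, even when empty
    subnets        = {}
  }
}
```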

Expected behavior
The vnet should deploy without requiring an empty specialsubnets block.

Configuration (please complete the following information):

  • OS and version: Windows 10 + WSL2
  • Version of the rover: aztfmod/rover:2009.0210
  • Version of the landing zone[e.g. 11]

Additional context
Add any other context about the problem here.

[EPIC] Support for Terraform Cloud - Enterprise

Already supported scenario

  • Create the TFE/TFC environment (organization, variables, workspaces)
  • Use TFC as remote backend (local execution, backend stored in TFC) - aztfmod/rover#101

Future scenarios:

  • Read remote state on TFC/TFE backend storage
  • Use TFC with Terraform Cloud Agents (https://www.terraform.io/docs/cloud/workspaces/agent.html)
  • Deploy TFE in customer subscription with Hashicorp TFE module/blueprints as a landing zone add-on
  • Deploy Terraform Enterprise Server and remote agents
  • Use TFE in online mode (execution in TFE with remote agents)
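For the already-supported remote backend scenario, the standard Terraform remote backend configuration looks like this (the organization and workspace names below are hypothetical placeholders):

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"   # hypothetical

    workspaces {
      name = "caf-level0"          # hypothetical
    }
  }
}
```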

[bug] caf_launchpad needs new version of azurerm

Describe the bug
The launchpad landing zone pins hashicorp/azurerm to ~> 2.33.0,
but the terraform-azurerm-caf module uses attributes of azurerm_key_vault that don't exist in that provider version; specifically the "contact" block:
https://registry.terraform.io/providers/hashicorp/azurerm/2.34.0/docs/resources/key_vault#email

To Reproduce

vscode@adc5a2485e6a public]$ rover -lz /tf/caf/public/landingzones/caf_launchpad   -var-folder /tf/caf/configuration/${environment}/level0/launchpad   -parallelism 30   -level level0   -env ${environment}   -launchpad   -a plan
...
Line 198 - Terraform init return code 0
calling plan
@calling plan
running terraform plan with -var-file /tf/caf/configuration/demo/level0/launchpad/configuration.tfvars -var-file /tf/caf/configuration/demo/level0/launchpad/dynamic_secrets.tfvars -var-file /tf/caf/configuration/demo/level0/launchpad/iam_role_mapping.tfvars -var-file /tf/caf/configuration/demo/level0/launchpad/keyvaults.tfvars -var-file /tf/caf/configuration/demo/level0/launchpad/storage_accounts.tfvars -parallelism 30
 -TF_VAR_workspace: tfstate
 -state: /home/vscode/.terraform.cache/tfstates/level0/tfstate/caf_launchpad.tfstate
 -plan:  /home/vscode/.terraform.cache/tfstates/level0/tfstate/caf_launchpad.tfplan
/tf/caf/public/landingzones/caf_launchpad
Terraform plan return code: 0
Terraform returned errors:

Error: Unsupported block type

  on /home/vscode/.terraform.cache/modules/launchpad/modules/security/keyvault/keyvault.tf line 50, in resource "azurerm_key_vault" "keyvault":
  50:   dynamic "contact" {

Blocks of type "contact" are not expected here.

Error on or near line 446: Error running terraform plan; exiting with status 2000

@calling clean_up_variables
cleanup variables
clean_up backend_files
[vscode@adc5a2485e6a public]$ 

Expected behavior
The plan should succeed; main.tf needs to be updated to require a newer azurerm version.
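A minimal sketch of the required_providers update, assuming the fix is simply to raise the azurerm constraint to a version that supports the contact block:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.34"   # the key vault "contact" block appears in the 2.34.0 docs linked above
    }
  }
}
```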

Configuration (please complete the following information):

  • OS and version: [e.g. Windows 10 1909]
  • Version of the rover[e.g. 22]
  • Version of the landing zone[e.g. 11]

Additional context
It is probably worth reviewing the azurerm version used in the other landing zones as well.

2007 tfstate RG is hardcoded for southeastasia

Describe the bug
The current 2007 launchpad code places the tfstate resource group in southeastasia, as shown below:

tfstate = {
  name       = "launchpad-tfstates"
  location   = "southeastasia"
  useprefix  = true
  max_length = 40
}

To Reproduce
Deploy the launchpad

Expected behavior
The resource group should be created in the user-specified region.
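One way to fix this would be to source the location from a variable in the launchpad module instead of hardcoding it; a sketch (the variable name is hypothetical):

```hcl
variable "tfstate_location" {
  description = "Region for the launchpad tfstate resource group"
  type        = string
  default     = "southeastasia"
}

# in the launchpad module's locals/config, the location then comes from the variable:
tfstate = {
  name       = "launchpad-tfstates"
  location   = var.tfstate_location
  useprefix  = true
  max_length = 40
}
```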

Configuration (please complete the following information):

  • LZ release 2007

[feature] Implement Azure Monitor - Service Health Alerts

Is your feature request related to a problem? Please describe.
I would like to add a monitoring module to the CAF foundations. The module would include the monitoring and alert elements needed in the landing zone.

Describe the solution you'd like

Planned inclusion:

landingzone_caf_foundations
|_ blueprint_foundations_monitoring --> New
|_ Service health Alerts

An alert or set of alerts to be triggered in the event of:

  • Service Issue (Global and Region wide)
  • Planned maintenance
  • Health Advisories
  • Security Advisories

Components to be created:

  • An action group.
  • A set of alert rules.

Pseudo Code
resource "azurerm_monitor_action_group" "ag1" {
  name                = var.name
  resource_group_name = var.rg
  short_name          = var.shortname
  enabled             = #boolean_value from variable set

  dynamic "email_receiver" {
    for_each = enable_email_alert == false ? [] : [1] # decider from variable set
    content {
      #from variable set
    }
  }

  dynamic "sms_receiver" {
    for_each = enable_sms_alert == false ? [] : [1] # decider from variable set
    content {
      #from variable set
    }
  }

  dynamic "webhook_receiver" {
    for_each = enable_webhook_trigger == false ? [] : [1] # decider from variable set
    content {
      #from variable set
    }
  }

  dynamic "arm_role_receiver" {
    for_each = enable_arm_role_alert == false ? [] : [1] # decider from variable set
    content {
      #from variable set
    }
  }
}

# Due to the current limitations of the 'azurerm_monitor_activity_log_alert' resource, this alert is implemented using an ARM template.

resource "azurerm_template_deployment" "alert1" {
  name                = "random_name"
  resource_group_name = var.rg

  template_body = file("${path.module}/filename.json")
  parameters = {
    "name"              = "name"
    "actionGroups_name" = azurerm_monitor_action_group.ag1.name
    "region"            = join(",", var.alert_settings.service_health_alerts.location) # from variable set
  }
  deployment_mode = "Incremental"
}

Reference
hashicorp/terraform-provider-azurerm#7392
https://docs.microsoft.com/en-us/azure/service-health/service-health-overview
https://www.terraform.io/docs/providers/azurerm/r/monitor_activity_log_alert.html

foundations_governance policies are not deployed

Describe the bug
A simple rover -lz /tf/caf/landingzones/landingzone_caf_foundations -a apply completes, however none of the policies are deployed in the subscription.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy launchpad
  2. Deploy landingzone_caf_foundation

Expected behavior
All policies under foundations_governance.policies are deployed to Azure Policy

Configuration (please complete the following information):

  • OS and version: [e.g. Mac OS]
  • Version of the rover:2009.0210
  • Version of the landing zone: v8.0.2008

Additional context
foundations.sandbox.auto.tfvars file:

# Sample Cloud Adoption Framework foundations landing zone

## globalsettings
global_settings = {
  #specifies the set of locations you are going to use in this landing zone
  location_map = {
    australiaeast           = "australiaeast"
    australiasoutheast      = "australiasoutheast"
  }

  #naming convention to be used as defined in naming convention module, accepted values are cafclassic, cafrandom, random, passthrough
  convention = "cafrandom"

  #Set of tags for core operations
  tags_hub = {
    owner          = "CAF"
    environment    = "sandbox"
    deploymentType = "Terraform"
    costCenter     = "000000"
    businessUnit   = "ITS"
    DR             = "NON-DR-ENABLED"
  }

  # Set of resource groups to land the foundations
  resource_groups_hub = {
    australiaeast = {
      HUB-CORE-SEC = {
        name     = "sandbox-core-sec-ae"
        location = "australiaeast"
      }
      HUB-OPERATIONS = {
        name     = "sandbox-operations-ae"
        location = "australiaeast"
      }
    }
    australiasoutheast = {
      HUB-CORE-SEC = {
        name     = "sandbox-core-sec-ase"
        location = "australiasoutheast"
      }
      HUB-OPERATIONS = {
        name     = "sandbox-operations-ase"
        location = "australiasoutheast"
      }
    }
  }
}

## accounting settings
accounting_settings = {

  # Azure diagnostics logs retention period
  australiaeast = {
    # Azure Subscription activity logs retention period
    azure_activity_log_enabled    = true
    azure_activity_logs_name      = "actlogs"
    azure_activity_logs_event_hub = false
    azure_activity_logs_retention = 365
    azure_activity_audit = {
      log = [
        # ["Audit category name", "Audit enabled"]
        ["Administrative", true],
        ["Security", true],
        ["ServiceHealth", true],
        ["Alert", true],
        ["Recommendation", true],
        ["Policy", true],
        ["Autoscale", true],
        ["ResourceHealth", true],
      ]
    }
    azure_diagnostics_logs_name      = "diaglogs"
    azure_diagnostics_logs_event_hub = false

    #Logging and monitoring 
    analytics_workspace_name = "caflalogs-ae"

    ##Log analytics solutions to be deployed 
    solution_plan_map = {
      NetworkMonitoring = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/NetworkMonitoring"
      },
      ADAssessment = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ADAssessment"
      },
      ADReplication = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ADReplication"
      },
      AgentHealthAssessment = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/AgentHealthAssessment"
      },
      DnsAnalytics = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/DnsAnalytics"
      },
      ContainerInsights = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ContainerInsights"
      },
      KeyVaultAnalytics = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/KeyVaultAnalytics"
      }
    }
  }
  australiasoutheast = {
    # Azure Subscription activity logs retention period
    azure_activity_log_enabled    = true
    azure_activity_logs_name      = "actlogs"
    azure_activity_logs_event_hub = false
    azure_activity_logs_retention = 365
    azure_activity_audit = {
      log = [
        # ["Audit category name", "Audit enabled"]
        ["Administrative", true],
        ["Security", true],
        ["ServiceHealth", true],
        ["Alert", true],
        ["Recommendation", true],
        ["Policy", true],
        ["Autoscale", true],
        ["ResourceHealth", true],
      ]
    }
    azure_diagnostics_logs_name      = "diaglogs"
    azure_diagnostics_logs_event_hub = false

    #Logging and monitoring 
    analytics_workspace_name = "caflalogs-ase"

    ##Log analytics solutions to be deployed 
    solution_plan_map = {
      NetworkMonitoring = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/NetworkMonitoring"
      },
      ADAssessment = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ADAssessment"
      },
      ADReplication = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ADReplication"
      },
      AgentHealthAssessment = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/AgentHealthAssessment"
      },
      DnsAnalytics = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/DnsAnalytics"
      },
      ContainerInsights = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/ContainerInsights"
      },
      KeyVaultAnalytics = {
        "publisher" = "Microsoft"
        "product"   = "OMSGallery/KeyVaultAnalytics"
      }
    }
  }
}

## governance
governance_settings = {
  australiaeast = {
    #current code supports only two levels of management groups and one root
    deploy_mgmt_groups = false
    management_groups = {
      root = {
        name          = "caf-rootmgmtgroup"
        subscriptions = []
        #list your subscriptions ID in this field as ["GUID1", "GUID2"]
        children = {
          child1 = {
            name          = "tree1child1"
            subscriptions = []
          }
          child2 = {
            name          = "tree1child2"
            subscriptions = []
          }
          child3 = {
            name          = "tree1child3"
            subscriptions = []
          }
        }
      }
    }

    policy_matrix = {
      # autoenroll_asc        = true # - to be implemented via builtin policies
      autoenroll_monitor_vm = true
      autoenroll_netwatcher = true

      no_public_ip_spoke     = true
      cant_create_ip_spoke   = true
      managed_disks_only     = true
      restrict_locations     = true
      list_of_allowed_locs   = ["australiaeast", "australiasoutheast"]
      restrict_supported_svc = false
      list_of_supported_svc  = ["Microsoft.Network/publicIPAddresses", "Microsoft.Compute/disks"]
      msi_location           = "australiaeast"
    }
  }
  australiasoutheast = {}
}

## security 
security_settings = {
  #Azure Security Center Configuration 
  enable_security_center = true
  security_center = {
    contact_email       = "[email protected]"
    contact_phone       = "006499236000"
    alerts_to_admins    = true
    alert_notifications = true
  }
  #Enables Azure Sentinel on the Log Analaytics repo
  enable_sentinel = true
}

[feature] Better, more complete documentation

I'm really struggling to get use of these "landingzones" outside of the very simple "getting started" example. A large part of this is a lack of clear documentation on how things are structured that is unique to the TF launchpad itself, as well as how the rover expects to find data structured, how it leverages/detects state info, etc. In other words, I can't read the CAF documentation for insight, because this all revolves around the TF/rover-specific implementation.

For example, "landingzones" appear to be Terraform plans instead of modules, and it's expected that you manually pass tfvars with desired settings from another directory? Is that correct? Rover seems to scan for workspaces, but I'm not sure what it does with that info. If you don't select -env prod or similar, everything defaults into a "sandpit" container... but the significance isn't spelled out. Can you extend one launchpad to handle new levels/subscriptions so it acts as a shared tfstate repo? Should every level/subscription have its own launchpad? There's a nice drawing explaining why there are four levels, but how do you configure vars to build those containers, set the permissions, etc.? Also, if I cd into a directory and run rover -lz ./, it's completely unable to figure out the landing zone name, so apparently don't do that.

Then I try to build a new "landingzone", which again seems like a generic Terraform plan. How do I make sure the new plan runs at the right level [0...4]? How do I ensure rover picks the name up correctly, finds the keyvault with all the secrets, and runs under the right app context? How is this secured?

I'm not asking for all of this to be answered in this thread; I'm pointing out that there is a massive hole of information here that can only be filled with lots of trial and error (and error, and error). Within a few minutes of fumbling with rover [insert massive string of vars here], I give up and just fall back on terraform plan and move on, because that just works. I feel like I'm constantly fighting rover here... when it's supposed to be making things easier (like letting me not think about state files).

Describe required privileges to deploy the foundation

First and foremost, thank you for the amazing work! I tried to deploy the foundation in my MSDN subscription using Visual Studio Code Spaces.

/tf/rover/rover.sh -lz /tf/caf/landingzones/launchpad -a apply -launchpad -var location=westus

I get the following error:

Error: graphrbac.ApplicationsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Unknown" Message="Unknown service error" Details=[{"odata.error":{"code":"Authorization_RequestDenied","date":"2020-08-10T09:46:37","message":{"lang":"en","value":"Insufficient privileges to complete the operation."},"requestId":"guid"}}]

on /home/vscode/.terraform.cache/modules/azure_applications/terraform-azuread-caf-aad-apps-1.0.0/aad_application.tf line 38, in resource "azuread_application" "aad_apps":
38: resource "azuread_application" "aad_apps" {

I assume I am missing the privileges to create new applications in AAD, which is fine. My request is to add the minimal privileges required to deploy the foundation to the README. This would ease initial access to the CAF deployment for everyone.
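A quick way to check whether the signed-in principal can create AAD applications at all is to try it directly with the Azure CLI (the app name below is arbitrary; delete the probe app afterwards):

```shell
# Probe: if this succeeds, the principal has enough rights to create AAD apps.
az ad app create --display-name caf-launchpad-probe
# Clean up afterwards with the appId returned above:
# az ad app delete --id <appId>
```

If this returns the same Authorization_RequestDenied error, the account lacks application-creation rights in the tenant, independently of the landing zone tooling.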

[bug] unable to deploy networking example 201

Describe the bug
Error: Invalid for_each argument

on route_table_association.tf line 14, in resource "azurerm_subnet_route_table_association" "route_subnet":
14: for_each = transpose(local.route_tables_subnets)

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

Error on or near line 509: Error running terraform plan; exiting with status 2000
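Terraform's suggested workaround is to apply the dependency first with -target and then run the full apply. A hedged sketch (the resource address below is illustrative; the exact address depends on the landing zone's module layout, and under rover the flags may need to be passed through differently):

```shell
# First create only the route tables the for_each depends on, then apply the rest.
terraform apply -target='azurerm_route_table.route_tables'
terraform apply
```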

To Reproduce
rover -lz /tf/caf/landingzones/landingzone_networking -a apply -var-file /tf/caf/landingzones/landingzone_networking/examples/201-hub-spoke-vnets-firewall/configuration.tfvars

Configuration (please complete the following information):

  • OS and version: WSL2 & Ubuntu
  • Version of the rover: aztfmod/rover:2009.0210
  • Version of the landing zone: ???

Additional context
configuration file below

resource_groups = {
  vnet_sg = {
    name       = "vnet-sg"
    location   = "westeurope"
    useprefix  = true
    max_length = 40
  }
}

vnets = {
  hub_sg = {
    resource_group_key = "vnet_sg"
    location           = "westeurope"
    vnet = {
      name          = "hub"
      address_space = ["10.10.100.0/24"]
    }
    specialsubnets = {
      AzureFirewallSubnet = {
        name = "AzureFirewallSubnet" #Must be called AzureFirewallSubnet 
        cidr = ["10.10.100.192/26"]
      }
    }
    subnets = {
      Active_Directory = {
        name     = "Active_Directory"
        cidr     = ["10.10.100.0/27"]
        nsg_name = "Active_Directory_nsg"
        nsg      = []
      }
      AzureBastionSubnet = {
        name     = "AzureBastionSubnet" #Must be called AzureBastionSubnet 
        cidr     = ["10.10.100.160/27"]
        nsg_name = "AzureBastionSubnet_nsg"
        nsg = [
          {
            name                       = "bastion-in-allow",
            priority                   = "100"
            direction                  = "Inbound"
            access                     = "Allow"
            protocol                   = "tcp"
            source_port_range          = "*"
            destination_port_range     = "443"
            source_address_prefix      = "*"
            destination_address_prefix = "*"
          },
          {
            name                       = "bastion-control-in-allow-443",
            priority                   = "120"
            direction                  = "Inbound"
            access                     = "Allow"
            protocol                   = "tcp"
            source_port_range          = "*"
            destination_port_range     = "135"
            source_address_prefix      = "GatewayManager"
            destination_address_prefix = "*"
          },
          {
            name                       = "bastion-control-in-allow-4443",
            priority                   = "121"
            direction                  = "Inbound"
            access                     = "Allow"
            protocol                   = "tcp"
            source_port_range          = "*"
            destination_port_range     = "4443"
            source_address_prefix      = "GatewayManager"
            destination_address_prefix = "*"
          },
          {
            name                       = "bastion-vnet-out-allow-22",
            priority                   = "103"
            direction                  = "Outbound"
            access                     = "Allow"
            protocol                   = "tcp"
            source_port_range          = "*"
            destination_port_range     = "22"
            source_address_prefix      = "*"
            destination_address_prefix = "VirtualNetwork"
          },
          {
            name                       = "bastion-vnet-out-allow-3389",
            priority                   = "101"
            direction                  = "Outbound"
            access                     = "Allow"
            protocol                   = "tcp"
            source_port_range          = "*"
            destination_port_range     = "3389"
            source_address_prefix      = "*"
            destination_address_prefix = "VirtualNetwork"
          },
          {
            name                       = "bastion-azure-out-allow",
            priority                   = "120"
            direction                  = "Outbound"
            access                     = "Allow"
            protocol                   = "tcp"
            source_port_range          = "*"
            destination_port_range     = "443"
            source_address_prefix      = "*"
            destination_address_prefix = "AzureCloud"
          }
        ]
      }
    }
    # Override the default var.diagnostics.vnet
    diagnostics = {
      log = [
        # ["Category name",  "Diagnostics Enabled(true/false)", "Retention Enabled(true/false)", Retention_period] 
        ["VMProtectionAlerts", true, true, 60],
      ]
      metric = [
        #["Category name",  "Diagnostics Enabled(true/false)", "Retention Enabled(true/false)", Retention_period]                 
        ["AllMetrics", true, true, 60],
      ]
    }
  }

  spoke_aks_sg = {
    resource_group_key = "vnet_sg"
    location           = "westeurope"
    vnet = {
      name          = "aks"
      address_space = ["10.10.101.0/24"]
    }
    specialsubnets = {}
    subnets = {
      aks_nodepool_system = {
        name     = "aks_nodepool_system"
        cidr     = ["10.10.101.0/27"]
        nsg_name = "aks_nodepool_system_nsg"
        nsg      = []
      }
      aks_nodepool_user1 = {
        name     = "aks_nodepool_user1"
        cidr     = ["10.10.101.32/27"]
        nsg_name = "aks_nodepool_user1_nsg"
        nsg      = []
      }
    }
  }

}

firewalls = {
  # westeurope firewall (do not change the key when created)
  westeurope = {
    location           = "westeurope"
    resource_group_key = "vnet_sg"
    vnet_key           = "hub_sg"

    # Settings for the public IP address to be used for Azure Firewall
    # Azure Firewall requires a Standard SKU public IP with Static allocation
    firewall_ip_addr_config = {
      ip_name           = "firewall"
      allocation_method = "Static"
      sku               = "Standard" # Defaults to Basic
      ip_version        = "IPv4"     # Defaults to IPv4. Supported values: IPv4 or IPv6 (not both); IPv6 requires Dynamic allocation
      diagnostics = {
        log = [
          #["Category name",  "Diagnostics Enabled(true/false)", "Retention Enabled(true/false)", Retention_period] 
          ["DDoSProtectionNotifications", true, true, 30],
          ["DDoSMitigationFlowLogs", true, true, 30],
          ["DDoSMitigationReports", true, true, 30],
        ]
        metric = [
          ["AllMetrics", true, true, 30],
        ]
      }
    }

    # Azure Firewall settings
    az_fw_config = {
      name = "azfw"
      diagnostics = {
        log = [
          #["Category name",  "Diagnostics Enabled(true/false)", "Retention Enabled(true/false)", Retention_period] 
          ["AzureFirewallApplicationRule", true, true, 30],
          ["AzureFirewallNetworkRule", true, true, 30],
          ["AzureFirewallDnsProxy", true, true, 30],
        ]
        metric = [
          ["AllMetrics", true, true, 30],
        ]
      }
      rules = {
        azurerm_firewall_network_rule_collection = {
          rule1 = {
            name     = "Authorize_http_https"
            action   = "Allow"
            priority = 105
            ruleset = [
              {
                name = "Authorize_http_https"
                source_addresses = [
                  "10.0.0.0/8",
                ]
                destination_ports = [
                  "80", "443",
                ]
                destination_addresses = [
                  "*"
                ]
                protocols = [
                  "TCP",
                ]
              },
              {
                name = "Authorize_kerberos"
                source_addresses = [
                  "10.0.0.0/8",
                ]
                destination_ports = [
                  "88",
                ]
                destination_addresses = [
                  "*"
                ]
                protocols = [
                  "TCP", "UDP",
                ]
              }
            ]
          }
        }
      }
    }

  }

}

peerings = {
  hub_sg_TO_spoke_aks_sg = {
    from_key                     = "hub_sg"
    to_key                       = "spoke_aks_sg"
    name                         = "hub_sg_TO_spoke_aks_sg"
    allow_virtual_network_access = true
    allow_forwarded_traffic      = false
    allow_gateway_transit        = false
    use_remote_gateways          = false
  }

  spoke_aks_sg_TO_hub_sg = {
    from_key                     = "spoke_aks_sg"
    to_key                       = "hub_sg"
    name                         = "spoke_aks_sg_TO_hub_sg"
    allow_virtual_network_access = true
    allow_forwarded_traffic      = false
    allow_gateway_transit        = false
    use_remote_gateways          = false
  }
}

route_tables = {
  from_spoke_to_hub = {
    name               = "spoke_aks_sg_to_hub_sg"
    resource_group_key = "vnet_sg"

    vnet_keys = {
      "spoke_aks_sg" = {
        subnet_keys = ["aks_nodepool_system", "aks_nodepool_user1"]
      }
    }

    route_entries = {
      re1 = {
        name          = "defaultroute"
        prefix        = "0.0.0.0/0"
        next_hop_type = "VirtualAppliance"
        azfw = {
          VirtualAppliance_key = "westeurope"
          ipconfig_index       = 0
        }
      }
      re2 = {
        name                   = "testspecialroute"
        prefix                 = "192.168.1.1/32"
        next_hop_type          = "VirtualAppliance"
        next_hop_in_ip_address = "1.1.1.1"
      }
      re3 = {
        name          = "testspecialroute2"
        prefix        = "16.0.0.0/8"
        next_hop_type = "Internet"
      }
    }
  }
}
