
Terraforming

Project Status (2021-12-11): No longer actively maintained

Terraforming is no longer actively maintained.

If you want to generate Terraform configurations from existing cloud resources, consider using other tools, such as

  • Terraformer, which supports many cloud providers (not only AWS but also GCP, Azure, GitHub, Kubernetes, etc.) and can generate configurations based on the latest provider resource schemas.
  • Terracognita

Thank you for your contributions and support over the past 6 years.



Export existing AWS resources to Terraform style (tf, tfstate)

Supported versions

  • Ruby 2.3 or higher is required
  • Terraform v0.9.3 or higher is recommended
    • Some resources (e.g. iam_instance_profile) use a newer resource specification

Installation

Add this line to your application's Gemfile:

gem 'terraforming'

And then execute:

$ bundle

Or install it yourself as:

$ gem install terraforming

Prerequisites

You need to set AWS credentials.

export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_REGION=xx-yyyy-0

You can also specify a credential profile defined in ~/.aws/credentials with the --profile option.

$ cat ~/.aws/credentials
[hoge]
aws_access_key_id = Hoge
aws_secret_access_key = FugaFuga

# Pass profile name by --profile option
$ terraforming s3 --profile hoge

You can assume a role by using the --assume option.

$ terraforming s3 --assume arn:aws:iam::123456789123:role/test-role

On systems where the default OpenSSL certificate bundle is not installed (e.g. Windows), you can force the AWS SDK to use the CA certificate bundled with the SDK by passing the --use-bundled-cert option.

PS C:\> terraforming ec2 --use-bundled-cert

Usage

$ terraforming
Commands:
  terraforming alb             # ALB
  terraforming asg             # AutoScaling Group
  terraforming cwa             # CloudWatch Alarm
  terraforming dbpg            # Database Parameter Group
  terraforming dbsg            # Database Security Group
  terraforming dbsn            # Database Subnet Group
  terraforming ddb             # DynamoDB
  terraforming ec2             # EC2
  terraforming ecc             # ElastiCache Cluster
  terraforming ecsn            # ElastiCache Subnet Group
  terraforming efs             # EFS File System
  terraforming eip             # EIP
  terraforming elb             # ELB
  terraforming help [COMMAND]  # Describe available commands or one specific command
  terraforming iamg            # IAM Group
  terraforming iamgm           # IAM Group Membership
  terraforming iamgp           # IAM Group Policy
  terraforming iamip           # IAM Instance Profile
  terraforming iamp            # IAM Policy
  terraforming iampa           # IAM Policy Attachment
  terraforming iamr            # IAM Role
  terraforming iamrp           # IAM Role Policy
  terraforming iamu            # IAM User
  terraforming iamup           # IAM User Policy
  terraforming igw             # Internet Gateway
  terraforming kmsa            # KMS Key Alias
  terraforming kmsk            # KMS Key
  terraforming lc              # Launch Configuration
  terraforming nacl            # Network ACL
  terraforming nat             # NAT Gateway
  terraforming nif             # Network Interface
  terraforming r53r            # Route53 Record
  terraforming r53z            # Route53 Hosted Zone
  terraforming rds             # RDS
  terraforming rs              # Redshift
  terraforming rt              # Route Table
  terraforming rta             # Route Table Association
  terraforming s3              # S3
  terraforming sg              # Security Group
  terraforming sn              # Subnet
  terraforming snss            # SNS Subscription
  terraforming snst            # SNS Topic
  terraforming sqs             # SQS
  terraforming vgw             # VPN Gateway
  terraforming vpc             # VPC

Options:
  [--merge=MERGE]                                # tfstate file to merge
  [--overwrite], [--no-overwrite]                # Overwrite existing tfstate
  [--tfstate], [--no-tfstate]                    # Generate tfstate
  [--profile=PROFILE]                            # AWS credentials profile
  [--region=REGION]                              # AWS region
  [--use-bundled-cert], [--no-use-bundled-cert]  # Use the bundled CA certificate from AWS SDK

Export tf

$ terraforming <resource> [--profile PROFILE]

(e.g. S3 buckets):

$ terraforming s3
resource "aws_s3_bucket" "hoge" {
    bucket = "hoge"
    acl    = "private"
}

resource "aws_s3_bucket" "fuga" {
    bucket = "fuga"
    acl    = "private"
}

Export tfstate

$ terraforming <resource> --tfstate [--merge TFSTATE_PATH] [--overwrite] [--profile PROFILE]

(e.g. S3 buckets):

$ terraforming s3 --tfstate
{
  "version": 1,
  "serial": 1,
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
      },
      "resources": {
        "aws_s3_bucket.hoge": {
          "type": "aws_s3_bucket",
          "primary": {
            "id": "hoge",
            "attributes": {
              "acl": "private",
              "bucket": "hoge",
              "id": "hoge"
            }
          }
        },
        "aws_s3_bucket.fuga": {
          "type": "aws_s3_bucket",
          "primary": {
            "id": "fuga",
            "attributes": {
              "acl": "private",
              "bucket": "fuga",
              "id": "fuga"
            }
          }
        }
      }
    }
  ]
}

If you want to merge the exported tfstate into an existing terraform.tfstate, specify the --tfstate --merge=/path/to/terraform.tfstate options. You can overwrite the existing terraform.tfstate by also passing the --overwrite option.

Existing terraform.tfstate:

# /path/to/terraform.tfstate

{
  "version": 1,
  "serial": 88,
  "remote": {
    "type": "s3",
    "config": {
      "bucket": "terraforming-tfstate",
      "key": "tf"
    }
  },
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
      },
      "resources": {
        "aws_elb.hogehoge": {
          "type": "aws_elb",
          "primary": {
            "id": "hogehoge",
            "attributes": {
              "availability_zones.#": "2",
              "connection_draining": "true",
              "connection_draining_timeout": "300",
              "cross_zone_load_balancing": "true",
              "dns_name": "hoge-12345678.ap-northeast-1.elb.amazonaws.com",
              "health_check.#": "1",
              "id": "hogehoge",
              "idle_timeout": "60",
              "instances.#": "1",
              "listener.#": "1",
              "name": "hoge",
              "security_groups.#": "2",
              "source_security_group": "default",
              "subnets.#": "2"
            }
          }
        }
      }
    }
  ]
}

To generate merged tfstate:

$ terraforming s3 --tfstate --merge=/path/to/tfstate
{
  "version": 1,
  "serial": 89,
  "remote": {
    "type": "s3",
    "config": {
      "bucket": "terraforming-tfstate",
      "key": "tf"
    }
  },
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
      },
      "resources": {
        "aws_elb.hogehoge": {
          "type": "aws_elb",
          "primary": {
            "id": "hogehoge",
            "attributes": {
              "availability_zones.#": "2",
              "connection_draining": "true",
              "connection_draining_timeout": "300",
              "cross_zone_load_balancing": "true",
              "dns_name": "hoge-12345678.ap-northeast-1.elb.amazonaws.com",
              "health_check.#": "1",
              "id": "hogehoge",
              "idle_timeout": "60",
              "instances.#": "1",
              "listener.#": "1",
              "name": "hoge",
              "security_groups.#": "2",
              "source_security_group": "default",
              "subnets.#": "2"
            }
          }
        },
        "aws_s3_bucket.hoge": {
          "type": "aws_s3_bucket",
          "primary": {
            "id": "hoge",
            "attributes": {
              "acl": "private",
              "bucket": "hoge",
              "id": "hoge"
            }
          }
        },
        "aws_s3_bucket.fuga": {
          "type": "aws_s3_bucket",
          "primary": {
            "id": "fuga",
            "attributes": {
              "acl": "private",
              "bucket": "fuga",
              "id": "fuga"
            }
          }
        }
      }
    }
  ]
}
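
To write the merged result back into the tfstate file given to --merge, rather than printing it to stdout, the --overwrite option described above can be added:

$ terraforming s3 --tfstate --merge=/path/to/tfstate --overwrite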

After writing the exported tf and tfstate to files, execute terraform plan and check the result. There should be no diff.
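
For example, redirecting the exported output to files (the file names here are arbitrary):

$ terraforming s3 > s3.tf
$ terraforming s3 --tfstate > terraform.tfstate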

$ terraform plan
No changes. Infrastructure is up-to-date. This means that Terraform
could not detect any differences between your configuration and
the real physical resources that exist. As a result, Terraform
doesn't need to do anything.

Example: Export all

This example assumes you want to export everything from us-west-2 and that you are using ~/.aws/credentials with a default profile:

export AWS_REGION=us-west-2
terraforming help | grep terraforming | grep -v help | awk '{print "terraforming", $2, "--profile", "default", ">", $2".tf";}' | bash
# find files that only have 1 empty line (likely nothing in AWS)
find . -type f -name '*.tf' | xargs wc -l | grep ' 1 .'

Caveats

  • terraforming kmsk does not export keys with EXTERNAL origin, because Terraform does not support them.

Run as Docker container

Terraforming Docker Image is available at quay.io/dtan4/terraforming and developed at dtan4/dockerfile-terraforming.

Pull the Docker image:

$ docker pull quay.io/dtan4/terraforming:latest

And then run Terraforming as a Docker container:

$ docker run \
    --rm \
    --name terraforming \
    -e AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX \
    -e AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
    -e AWS_REGION=xx-yyyy-0 \
    quay.io/dtan4/terraforming:latest \
    terraforming s3
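
If you keep credentials in ~/.aws/credentials rather than in environment variables, mounting that directory into the container should also work. A sketch, assuming the image runs as root so the SDK looks in /root/.aws:

$ docker run \
    --rm \
    --name terraforming \
    -v ~/.aws:/root/.aws \
    -e AWS_REGION=xx-yyyy-0 \
    quay.io/dtan4/terraforming:latest \
    terraforming s3 --profile hoge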

Development

After checking out the repo, run script/setup to install dependencies. Then, run script/console for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release to create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.
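
As a condensed sketch of that flow (the version.rb path follows the standard gem layout and is an assumption):

$ bundle exec rake install             # install the gem locally
$ $EDITOR lib/terraforming/version.rb  # bump the version number (path assumed)
$ bundle exec rake release             # tag, push commits/tags, publish the .gem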

Contributing

Please read the Contribution Guide first.

  1. Fork it ( https://github.com/dtan4/terraforming/fork )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Similar projects

There are some similar tools for importing your existing infrastructure into Terraform configuration.

License

MIT License

Contributors

atomantic, chroju, ckelner, dtan4, endemics, eredi93, grosendorf, julia-stripe, k1low, knakayama, kovyrin, laxmiprasanna-gunna, manabusakai, mattgartman, mioi, mozamimy, nabarunchatterjee, ngs, phoolish, raylu-stripe, robatwave, sakazuki, savankumargudaas, seanodonnell, stormbeta, tjend, tmccabe07, uberblah, woohgit, wsh

Issues

dots in resource names warning

Warnings:

  * aws_iam_user.david.raistrick: david.raistrick: resource name can only contain letters, numbers, dashes, and underscores.
This will be an error in Terraform 0.4

iamu created:

resource "aws_iam_user" "david.raistrick" {
    name = "david.raistrick"
    path = "/"
}

"aws_iam_user.david.raistrick": {
     "type": "aws_iam_user",
          "primary": {
               "id": "david.raistrick",
               "attributes": {
....

The userid is allowed to have dots, but not the logical name - can we munge these when terraforming creates them in some fashion?

For someone else who runs into this - I just dropped the dot from the .tf and the .tfstate entries, and terraform is happy now.
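
As a hedged illustration of that manual edit (hypothetical file names; the name = "david.raistrick" attribute must keep its dot, so only the logical resource labels are rewritten):

$ sed -i 's/"aws_iam_user" "david\.raistrick"/"aws_iam_user" "david-raistrick"/' iamu.tf
$ sed -i 's/"aws_iam_user\.david\.raistrick"/"aws_iam_user.david-raistrick"/' iamu.tfstate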

Route table support is incomplete

Hey guys,

Since 0.5.0, terraforming supports route_table resources, but there is an issue with the level of support: it only works correctly for route tables without any routes. The current implementation does not generate routes in tfstate, which causes terraform to try to change the routes in AWS route tables even though they are already correct.

I believe this is an issue and terraforming should actually generate route blocks in tfstate.

ec2 command imports root block device as ebs_block_device, forcing a new resource

terraforming ec2 imports what should be a root_block_device as an ebs_block_device, which causes terraform plans to force new ec2 resources.

Steps to reproduce:

  1. Create an EBS-optimized EC2 instance with a root device type of ebs and some size, say 5 GB
  2. Run terraforming ec2
  3. Observe the following incorrect output inside the aws_instance configuration block:
ebs_block_device {
    device_name = "/dev/sda1"
}

tfstate is also incorrect:

    ebs_block_device.#:                                "0" => "1"
    ebs_block_device.[removed].device_name:           "" => "/dev/sda1" (forces new resource)
    ebs_block_device.[removed].volume_size:           "" => "<computed>" (forces new resource)
    ebs_block_device.[removed].volume_type:           "" => "<computed>" (forces new resource)
    [ various other settings that force a new resource]

The output should be:

root_block_device {
    volume_size = 5
}

tfstate output from terraform plan is now correct:

    root_block_device.#:                       "1" => "1"
    [ various other settings that do not force a new resource ] 

terraform version: 0.6.0

terraforming version: 0.1.0

Error with the Gem?

Just got linked to this on Twitter and ran gem install terraforming. When running the terraforming command, I get this error:

/opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.0.1/lib/terraforming/util.rb:1:in `<top (required)>': uninitialized constant Terraforming (NameError)
    from /opt/rubies/2.0.0-p451/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /opt/rubies/2.0.0-p451/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.0.1/lib/terraforming.rb:9:in `<top (required)>'
    from /opt/rubies/2.0.0-p451/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:73:in `require'
    from /opt/rubies/2.0.0-p451/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:73:in `require'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.0.1/bin/terraforming:3:in `<top (required)>'
    from /opt/boxen/rbenv/versions/2.0.0-p451/bin/terraforming:23:in `load'
    from /opt/boxen/rbenv/versions/2.0.0-p451/bin/terraforming:23:in `<main>'

Am I missing something?

is the docker container up-to-date?

I'm getting the following error when trying to run the igw command via the Docker container: Could not find command "igw".

Is the container up to date with the latest version of terraforming?

thanks (and great tool!)

'terraforming ec2' throws error

Hello :) I love the project!! I plan on contributing when I can find time (particularly supporting the AWS autoscaling groups).

Anyways, for most of my regions this works great. For my most populated region, I see this:

$ terraforming ec2
/usr/local/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.1.10/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call': The request must contain the parameter volumes (Aws::EC2::Errors::MissingParameter)
    from /usr/local/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.1.10/lib/aws-sdk-core/plugins/param_converter.rb:21:in `call'
    from /usr/local/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.1.10/lib/aws-sdk-core/plugins/response_paging.rb:26:in `call'
    from /usr/local/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.1.10/lib/seahorse/client/plugins/response_target.rb:21:in `call'
    from /usr/local/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.1.10/lib/seahorse/client/request.rb:70:in `send_request'
    from /usr/local/lib/ruby/gems/2.2.0/gems/aws-sdk-core-2.1.10/lib/seahorse/client/base.rb:207:in `block (2 levels) in define_operation_methods'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/lib/terraforming/resource/ec2.rb:73:in `block_devices_of'
    from (erb):18:in `block in apply_template'
    from (erb):1:in `each'
    from (erb):1:in `apply_template'
    from /usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/erb.rb:863:in `eval'
    from /usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/erb.rb:863:in `result'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/lib/terraforming/util.rb:4:in `apply_template'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/lib/terraforming/resource/ec2.rb:19:in `tf'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/lib/terraforming/resource/ec2.rb:7:in `tf'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/lib/terraforming/cli.rb:132:in `execute'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/lib/terraforming/cli.rb:23:in `ec2'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.2/bin/terraforming:5:in `<top (required)>'
    from /usr/local/bin/terraforming:23:in `load'
    from /usr/local/bin/terraforming:23:in `<main>'

I'm looking into this, and believe it has something to do with an attached EBS volume. Let me know if you have any thoughts!

Thanks!

empty/default dbsg doesn't pass terraform plan without errors

dbsg%% terraform plan

There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * aws_db_security_group.default: "ingress": required field is not set

dbsg.tf:

resource "aws_db_security_group" "default" {
    name        = "default"
    description = "default"

}

dbsg.tfstate:

{
  "version": 1,
  "serial": 1,
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
      },
      "resources": {
        "aws_db_security_group.default": {
          "type": "aws_db_security_group",
          "primary": {
            "id": "default",
            "attributes": {
              "db_subnet_group_name": "default",
              "id": "default",
              "ingress.#": "0",
              "name": "default"
            }
          }
        }
      }
    }
  ]
}

Does not properly page for iam users

Looks like API paging is broken for the IAM users functionality. This may be a bug in the Ruby AWS library, but I have not investigated further; I just wanted to get it logged. This may also affect other calls to AWS, but only my users list is large enough to need paging.
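
For reference, a minimal Ruby sketch of the kind of fix, assuming aws-sdk v2, whose pageable responses auto-paginate when iterated (this is not the project's actual code):

require "aws-sdk-core"

iam = Aws::IAM::Client.new

# Iterating a pageable response follows the pagination markers,
# so every user is collected, not just the first page.
users = []
iam.list_users.each do |page|
  users.concat(page.users)
end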

Route 53 Zone

When running the following command:

terraforming r53z --tfstate

I get:

/opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/route53_zone.rb:60:in `name_servers_of': undefined method `name_servers' for nil:NilClass (NoMethodError)
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/route53_zone.rb:29:in `block in tfstate'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/route53_zone.rb:23:in `each'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/route53_zone.rb:23:in `inject'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/route53_zone.rb:23:in `tfstate'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/route53_zone.rb:11:in `tfstate'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/cli.rb:140:in `tfstate'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/cli.rb:129:in `execute'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/cli.rb:98:in `r53z'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/bin/terraforming:5:in `<top (required)>'
    from /opt/boxen/rbenv/versions/2.0.0-p451/bin/terraforming:23:in `load'
    from /opt/boxen/rbenv/versions/2.0.0-p451/bin/terraforming:23:in `<main>'

I have 2 hosted zones with the same name, 1 is a private hosted zone and 1 is a public hosted zone. Could this be the issue?

Support AutoScaling

It would be nice to have a way to dump an entire AutoScaling group and its attached configuration (LaunchConfiguration, ELB, etc).

When generating tfstate, modules should contain a list of hashes, not a hash

While trying to do a terraform plan on a generated tfstate for security groups, terraform exited with:

Error reading local state: Decoding state file failed: json: cannot unmarshal object into Go value of type []*terraform.ModuleState

(terraform 0.5.2)

In terraform/state.go:42, we have indeed:

    Modules []*ModuleState `json:"modules"`

So generate_tfstate should have "modules" => [ ... ] instead of "modules" => { ... }
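
An abbreviated sketch of the difference:

# generated today -- a single hash, which terraform cannot decode
"modules": {
  "path": ["root"],
  "resources": { ... }
}

# expected by terraform -- a list of module hashes
"modules": [
  {
    "path": ["root"],
    "resources": { ... }
  }
]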

S3 bug

I am attempting to collect all the S3 buckets I have in my infrastructure:

terraforming s3
/opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/aws-sdk-core-2.1.3/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call': The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Aws::S3::Errors::PermanentRedirect)
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/aws-sdk-core-2.1.3/lib/aws-sdk-core/plugins/s3_sse_cpk.rb:18:in `call'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/aws-sdk-core-2.1.3/lib/aws-sdk-core/plugins/param_converter.rb:21:in `call'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/aws-sdk-core-2.1.3/lib/seahorse/client/plugins/response_target.rb:18:in `call'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/aws-sdk-core-2.1.3/lib/seahorse/client/request.rb:70:in `send_request'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/aws-sdk-core-2.1.3/lib/seahorse/client/base.rb:207:in `block (2 levels) in define_operation_methods'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/s3.rb:46:in `bucket_policy_of'
    from (erb):5:in `block in apply_template'
    from (erb):1:in `each'
    from (erb):1:in `apply_template'
    from /opt/rubies/2.0.0-p451/lib/ruby/2.0.0/erb.rb:849:in `eval'
    from /opt/rubies/2.0.0-p451/lib/ruby/2.0.0/erb.rb:849:in `result'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/util.rb:4:in `apply_template'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/s3.rb:19:in `tf'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/resource/s3.rb:7:in `tf'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/cli.rb:134:in `tf'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/cli.rb:129:in `execute'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/lib/terraforming/cli.rb:108:in `s3'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
    from /opt/rubies/2.0.0-p451/lib/ruby/gems/2.0.0/gems/terraforming-0.1.6/bin/terraforming:5:in `<top (required)>'
    from /opt/boxen/rbenv/versions/2.0.0-p451/bin/terraforming:23:in `load'
    from /opt/boxen/rbenv/versions/2.0.0-p451/bin/terraforming:23:in `<main>'

I think it is trying to collect us-standard buckets even though my credentials point to eu-west-1.

Is there something I am missing here?

Paul

Exporting route table associations could cause terraforming to crash

When there is an implicit route table association, the AWS API returns an Aws::EC2::Types::RouteTableAssociation record that does not have a subnet_id value. The ERB template for generating Terraform files has logic to skip those records, but the tfstate formatting code does not. This lets those records get merged into the Terraform state file, and terraform then crashes while processing those files.

Not all records returned for route53 (and possibly others)

I realized while using terraforming that I was only getting results back for my first 100 records. I'm not a ruby dev, but after poking around the code, it looks like paging through records where there are more than 100 is not implemented.

Looking at http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/Route53/Client.html#list_resource_record_sets-instance_method

and

http://docs.aws.amazon.com/Route53/latest/APIReference/API_ListResourceRecordSets.html

It would appear only the first 100 records are returned by default, and I can't see any sign of looping to get the next batch of records in route53_record.rb.
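
For reference, a hedged Ruby sketch of explicit paging against that API, using only documented request/response fields (not the project's actual code; the zone ID is a placeholder):

require "aws-sdk-core"

route53 = Aws::Route53::Client.new
zone_id = "ZXXXXXXXXXXXXX" # hypothetical hosted zone ID

records = []
resp = route53.list_resource_record_sets(hosted_zone_id: zone_id)
loop do
  records.concat(resp.resource_record_sets)
  break unless resp.is_truncated
  # Resume from where the previous page left off.
  resp = route53.list_resource_record_sets(
    hosted_zone_id: zone_id,
    start_record_name: resp.next_record_name,
    start_record_type: resp.next_record_type
  )
end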

Consider supporting multiple AWS profiles via `--profile`

Working with multiple AWS accounts, I have the AWS CLI configured with different profiles and switch amongst them by changing the AWS_DEFAULT_PROFILE environment variable. It would be good if terraforming respected this setting (which encompasses the keys and the region) in addition to the environment variables.

Add feature to merge tfstate to existing terraform.tfstate

WHY

It is troublesome to merge generated tfstate manually.

WHAT

Make it possible to merge generated tfstate into an existing file automatically.
Specifically, add a new CLI option --merge FILE.

$ terraforming elb --tfstate --merge terraform.tfstate

Support for Azure, DO, ...

Hi,
Love the concept!
Any plans to support other providers already supported by terraform?
I am thinking Azure and DO.
It would probably take some changes to the syntax as well, e.g. terraforming <provider> <command>.
Thx!

aws_route53_record generation doesn't handle multiple record types

If you have domain.com configured with A, MX, TXT, etc. records, terraforming will only generate a single entry in the .tfstate output. Also, when generating the terraform resource definitions, duplicate resource "aws_route53_record" "domain.com" definitions will be generated. I'm not sure what the best solution is without breaking backwards compatibility. Otherwise, great project - thanks!
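
One conceivable naming scheme (purely illustrative, not what terraforming emits) would include the record type in the logical name, so each type gets its own resource; all values below are placeholders:

resource "aws_route53_record" "domain-com-A" {
    zone_id = "ZONE_ID"
    name    = "domain.com"
    type    = "A"
    ttl     = "300"
    records = ["192.0.2.1"]
}

resource "aws_route53_record" "domain-com-MX" {
    zone_id = "ZONE_ID"
    name    = "domain.com"
    type    = "MX"
    ttl     = "300"
    records = ["10 mail.domain.com"]
}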

ec2 command imports non-default-VPC security groups using the wrong directive, forcing a new resource

Per the Terraform instance documentation:

security_groups - (Optional) A list of security group names to associate with. If you are within a non-default VPC, you'll need to use vpc_security_group_ids instead.

terraforming ec2 will import EC2 instances created in a non-default VPC and put their security groups in a security_groups directive. However, as explained above, these security groups should be in a vpc_security_group_ids directive. Switching to the correct directive prevents terraform from forcing a new ec2 resource.

terraform version: 0.6.0

terraforming version: 0.1.0
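
A before/after sketch of the directive swap described in this issue (the group name and ID are placeholders):

# emitted today -- forces a new resource in a non-default VPC
security_groups        = ["my-sg"]

# correct for instances in a non-default VPC
vpc_security_group_ids = ["sg-0123abcd"]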

Allow filtering of resources

Thanks so much for writing this, it looks like it's gonna be awesome for migrating existing resources into Terraform!

However, I would prefer to migrate in smaller chunks, so I think it would be awesome if we could specify some basic parameter matching on the command line so that we only generate files for specific resources.

If you're interested in a pull request, I might give this a shot myself.

Support multi region

I have AWS resources across two regions. However, terraforming currently outputs resources for only a single region. Please support multiple regions.

unexpected failure to collect elasticache clusters

I just got terraforming working, but it dies when applying the template to Redis ElastiCache resources. S3 works fine; ec2 works fine.

$ terraforming ecc
(erb):9:in `block in apply_template': undefined method `port' for nil:NilClass (NoMethodError)
    from (erb):1:in `each'
    from (erb):1:in `apply_template'
    from /usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/erb.rb:863:in `eval'
    from /usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/erb.rb:863:in `result'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.3/lib/terraforming/util.rb:4:in `apply_template'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.3/lib/terraforming/resource/elasti_cache_cluster.rb:19:in `tf'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.3/lib/terraforming/resource/elasti_cache_cluster.rb:7:in `tf'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.3/lib/terraforming/cli.rb:132:in `execute'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.3/lib/terraforming/cli.rb:28:in `ecc'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
    from /usr/local/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
    from /usr/local/lib/ruby/gems/2.2.0/gems/terraforming-0.1.3/bin/terraforming:5:in `<top (required)>'
    from /usr/local/bin/terraforming:23:in `load'
    from /usr/local/bin/terraforming:23:in `<main>'

While Memcached clusters have a configuration endpoint, Redis clusters do not. Looks like the template needs to make the configuration endpoint properties conditional, but I don't know ERB well enough to submit a pull request.
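
A hedged sketch of what such a conditional could look like in the ERB template (the variable name is assumed, not taken from the actual template):

<% if cache_cluster.configuration_endpoint -%>
    port = <%= cache_cluster.configuration_endpoint.port %>
<% end -%>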

Hygiene

Just filing this as a starting place for exploration of what could be done to make the output directly usable as input to Terraform or at least get a few obvious things out of the way.

Ec2 updates error

I'm retrieving the tf and tfstate files for ec2 using terraforming, but when I execute terraform plan it shows various updates instead of no changes.

[RFC] resources tagged by cloudformation

Here's my problem: I'm trying to dump security groups from an environment containing resources managed by CloudFormation.

CloudFormation tags resources it manages, such as security groups, with tags following the pattern aws:cloudformation:*. However, this pattern is not supported by HashiCorp's HCL lexer, as ':' is not in the list of supported tokens in https://github.com/hashicorp/hcl/blob/master/hcl/lex.go#L199

At this point, I see 3 alternatives:

  • try to get hashicorp to support ':' (very unlikely as it will probably have a lot of side effects for them)
  • quote the tags with something like:
--- a/lib/terraforming/template/tf/security_group.erb
+++ b/lib/terraforming/template/tf/security_group.erb
@@ -36,7 +36,7 @@ resource "aws_security_group" "<%= module_name_of(security_group) %>" {
 <% if security_group.tags.length > 0 -%>
     tags {
 <% security_group.tags.each do |tag| -%>
-        <%= tag.key %> = "<%= tag.value %>"
+        "<%= tag.key %>" = "<%= tag.value %>"
 <% end -%>
     }
 <% end -%>
  • completely ignore (not dump) resources managed by CloudFormation when we can detect them.

ELB tfstate output doesn't properly include health checks, subnets, security groups, or listeners

ELB tfstate output doesn't properly include some types. In the following function, health_check.# is hardcoded to 1, and listeners/security groups/subnets don't include references to the actual resources, just the .# count attributes. This causes subsequent terraform plan calls to attempt to recreate the resources.

      def tfstate(tfstate_base)
        resources = load_balancers.inject({}) do |result, load_balancer|
          load_balancer_attributes = load_balancer_attributes_of(load_balancer)
          attributes = {
            "availability_zones.#" => load_balancer.availability_zones.length.to_s,
            "connection_draining" => load_balancer_attributes.connection_draining.enabled.to_s,
            "connection_draining_timeout" => load_balancer_attributes.connection_draining.timeout.to_s,
            "cross_zone_load_balancing" => load_balancer_attributes.cross_zone_load_balancing.enabled.to_s,
            "dns_name" => load_balancer.dns_name,
            "health_check.#" => "1",
            "id" => load_balancer.load_balancer_name,
            "idle_timeout" => load_balancer_attributes.connection_settings.idle_timeout.to_s,
            "instances.#" => load_balancer.instances.length.to_s,
            "listener.#" => load_balancer.listener_descriptions.length.to_s,
            "name" => load_balancer.load_balancer_name,
            "security_groups.#" => load_balancer.security_groups.length.to_s,
            "source_security_group" => load_balancer.source_security_group.group_name,
            "subnets.#" => load_balancer.subnets.length.to_s,
          }
          result["aws_elb.#{module_name_of(load_balancer)}"] = {
            "type" => "aws_elb",
            "primary" => {
              "id" => load_balancer.load_balancer_name,
              "attributes" => attributes
            }
          }

          result
        end

Consider supporting s3 bucket policy

Terraforming already supports IAM policies; it would be really useful to support S3 bucket policies as well, probably included inside the generation of the aws_s3_bucket resource.

Self referencing security groups that contain CIDR blocks create duplicate resources in .tf output

Security group rules that contain both a self reference and one or more CIDR block rules create multiple .tf egress blocks, one for self = true and one containing the CIDR blocks. This doesn't match the tfstate output, and causes subsequent terraform plan operations in terraform 0.6.0 to show a diff.

Example wall of text:

sg.tf (partial)

resource "aws_security_group" "sg-fd249099-graphite_sg" {
    name        = "graphite_sg"
    description = "New combined graphite sg"
    vpc_id      = "vpc-1033c37e"

    ingress {
        from_port       = 9300
        to_port         = 9300
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 8000
        to_port         = 8000
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 22
        to_port         = 22
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 2013
        to_port         = 2014
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 2023
        to_port         = 2024
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 2003
        to_port         = 2004
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 8080
        to_port         = 8080
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 3306
        to_port         = 3306
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx"]
    }

    ingress {
        from_port       = 80
        to_port         = 80
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 8080
        to_port         = 8080
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }

    ingress {
        from_port       = 2513
        to_port         = 2514
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 9300
        to_port         = 9300
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx"]
    }

    ingress {
        from_port       = 3306
        to_port         = 3306
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 22
        to_port         = 22
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }

    ingress {
        from_port       = 2013
        to_port         = 2014
        protocol        = "tcp"
        security_groups = ["sg-5dba2338"]
        self            = false
    }

    ingress {
        from_port       = 80
        to_port         = 80
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }

    ingress {
        from_port       = 2213
        to_port         = 2214
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }

    ingress {
        from_port       = 11211
        to_port         = 11211
        protocol        = "tcp"
        self            = true
    }

    ingress {
        from_port       = 8000
        to_port         = 8000
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }


    egress {
        from_port       = 9300
        to_port         = 9300
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 8000
        to_port         = 8000
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 22
        to_port         = 22
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 2013
        to_port         = 2014
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 2023
        to_port         = 2024
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 2003
        to_port         = 2004
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 8080
        to_port         = 8080
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 3306
        to_port         = 3306
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx"]
    }

    egress {
        from_port       = 80
        to_port         = 80
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 8080
        to_port         = 8080
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }

    egress {
        from_port       = 2513
        to_port         = 2514
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 9300
        to_port         = 9300
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx"]
    }

    egress {
        from_port       = 2213
        to_port         = 2214
        protocol        = "tcp"
        cidr_blocks     = ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    }

    egress {
        from_port       = 443
        to_port         = 443
        protocol        = "tcp"
        self            = true
    }

    egress {
        from_port       = 11211
        to_port         = 11211
        protocol        = "tcp"
        self            = true
    }

}

terraform.tfstate (partial)

                "aws_security_group.sg-fd249099-graphite_sg": {
                    "type": "aws_security_group",
                    "primary": {
                        "id": "sg-fd249099",
                        "attributes": {
                            "description": "New combined graphite sg",
                            "egress.#": "13",
                            "egress.1163740523.cidr_blocks.#": "1",
                            "egress.1163740523.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "egress.1163740523.from_port": "3306",
                            "egress.1163740523.protocol": "tcp",
                            "egress.1163740523.security_groups.#": "0",
                            "egress.1163740523.self": "false",
                            "egress.1163740523.to_port": "3306",
                            "egress.1455892873.cidr_blocks.#": "0",
                            "egress.1455892873.from_port": "8000",
                            "egress.1455892873.protocol": "tcp",
                            "egress.1455892873.security_groups.#": "0",
                            "egress.1455892873.self": "true",
                            "egress.1455892873.to_port": "8000",
                            "egress.1492328782.cidr_blocks.#": "0",
                            "egress.1492328782.from_port": "2023",
                            "egress.1492328782.protocol": "tcp",
                            "egress.1492328782.security_groups.#": "0",
                            "egress.1492328782.self": "true",
                            "egress.1492328782.to_port": "2024",
                            "egress.1507188325.cidr_blocks.#": "0",
                            "egress.1507188325.from_port": "22",
                            "egress.1507188325.protocol": "tcp",
                            "egress.1507188325.security_groups.#": "0",
                            "egress.1507188325.self": "true",
                            "egress.1507188325.to_port": "22",
                            "egress.1626615574.cidr_blocks.#": "0",
                            "egress.1626615574.from_port": "2003",
                            "egress.1626615574.protocol": "tcp",
                            "egress.1626615574.security_groups.#": "0",
                            "egress.1626615574.self": "true",
                            "egress.1626615574.to_port": "2004",
                            "egress.1844393598.cidr_blocks.#": "0",
                            "egress.1844393598.from_port": "80",
                            "egress.1844393598.protocol": "tcp",
                            "egress.1844393598.security_groups.#": "0",
                            "egress.1844393598.self": "true",
                            "egress.1844393598.to_port": "80",
                            "egress.2096605242.cidr_blocks.#": "0",
                            "egress.2096605242.from_port": "2013",
                            "egress.2096605242.protocol": "tcp",
                            "egress.2096605242.security_groups.#": "0",
                            "egress.2096605242.self": "true",
                            "egress.2096605242.to_port": "2014",
                            "egress.2119869617.cidr_blocks.#": "0",
                            "egress.2119869617.from_port": "11211",
                            "egress.2119869617.protocol": "tcp",
                            "egress.2119869617.security_groups.#": "0",
                            "egress.2119869617.self": "true",
                            "egress.2119869617.to_port": "11211",
                            "egress.2232337395.cidr_blocks.#": "1",
                            "egress.2232337395.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "egress.2232337395.from_port": "9300",
                            "egress.2232337395.protocol": "tcp",
                            "egress.2232337395.security_groups.#": "0",
                            "egress.2232337395.self": "true",
                            "egress.2232337395.to_port": "9300",
                            "egress.2233629390.cidr_blocks.#": "0",
                            "egress.2233629390.from_port": "2513",
                            "egress.2233629390.protocol": "tcp",
                            "egress.2233629390.security_groups.#": "0",
                            "egress.2233629390.self": "true",
                            "egress.2233629390.to_port": "2514",
                            "egress.3167104115.cidr_blocks.#": "0",
                            "egress.3167104115.from_port": "443",
                            "egress.3167104115.protocol": "tcp",
                            "egress.3167104115.security_groups.#": "0",
                            "egress.3167104115.self": "true",
                            "egress.3167104115.to_port": "443",
                            "egress.3199703634.cidr_blocks.#": "2",
                            "egress.3199703634.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "egress.3199703634.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "egress.3199703634.from_port": "2213",
                            "egress.3199703634.protocol": "tcp",
                            "egress.3199703634.security_groups.#": "0",
                            "egress.3199703634.self": "false",
                            "egress.3199703634.to_port": "2214",
                            "egress.3495437599.cidr_blocks.#": "2",
                            "egress.3495437599.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "egress.3495437599.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "egress.3495437599.from_port": "8080",
                            "egress.3495437599.protocol": "tcp",
                            "egress.3495437599.security_groups.#": "0",
                            "egress.3495437599.self": "true",
                            "egress.3495437599.to_port": "8080",
                            "id": "sg-fd249099",
                            "ingress.#": "19",
                            "ingress.1265039627.cidr_blocks.#": "0",
                            "ingress.1265039627.from_port": "9300",
                            "ingress.1265039627.protocol": "tcp",
                            "ingress.1265039627.security_groups.#": "0",
                            "ingress.1265039627.self": "true",
                            "ingress.1265039627.to_port": "9300",
                            "ingress.1455892873.cidr_blocks.#": "0",
                            "ingress.1455892873.from_port": "8000",
                            "ingress.1455892873.protocol": "tcp",
                            "ingress.1455892873.security_groups.#": "0",
                            "ingress.1455892873.self": "true",
                            "ingress.1455892873.to_port": "8000",
                            "ingress.1492328782.cidr_blocks.#": "0",
                            "ingress.1492328782.from_port": "2023",
                            "ingress.1492328782.protocol": "tcp",
                            "ingress.1492328782.security_groups.#": "0",
                            "ingress.1492328782.self": "true",
                            "ingress.1492328782.to_port": "2024",
                            "ingress.1507188325.cidr_blocks.#": "0",
                            "ingress.1507188325.from_port": "22",
                            "ingress.1507188325.protocol": "tcp",
                            "ingress.1507188325.security_groups.#": "0",
                            "ingress.1507188325.self": "true",
                            "ingress.1507188325.to_port": "22",
                            "ingress.1600166723.cidr_blocks.#": "0",
                            "ingress.1600166723.from_port": "3306",
                            "ingress.1600166723.protocol": "tcp",
                            "ingress.1600166723.security_groups.#": "0",
                            "ingress.1600166723.self": "true",
                            "ingress.1600166723.to_port": "3306",
                            "ingress.1626615574.cidr_blocks.#": "0",
                            "ingress.1626615574.from_port": "2003",
                            "ingress.1626615574.protocol": "tcp",
                            "ingress.1626615574.security_groups.#": "0",
                            "ingress.1626615574.self": "true",
                            "ingress.1626615574.to_port": "2004",
                            "ingress.1844393598.cidr_blocks.#": "0",
                            "ingress.1844393598.from_port": "80",
                            "ingress.1844393598.protocol": "tcp",
                            "ingress.1844393598.security_groups.#": "0",
                            "ingress.1844393598.self": "true",
                            "ingress.1844393598.to_port": "80",
                            "ingress.2096605242.cidr_blocks.#": "0",
                            "ingress.2096605242.from_port": "2013",
                            "ingress.2096605242.protocol": "tcp",
                            "ingress.2096605242.security_groups.#": "0",
                            "ingress.2096605242.self": "true",
                            "ingress.2096605242.to_port": "2014",
                            "ingress.2119869617.cidr_blocks.#": "0",
                            "ingress.2119869617.from_port": "11211",
                            "ingress.2119869617.protocol": "tcp",
                            "ingress.2119869617.security_groups.#": "0",
                            "ingress.2119869617.self": "true",
                            "ingress.2119869617.to_port": "11211",
                            "ingress.2233629390.cidr_blocks.#": "0",
                            "ingress.2233629390.from_port": "2513",
                            "ingress.2233629390.protocol": "tcp",
                            "ingress.2233629390.security_groups.#": "0",
                            "ingress.2233629390.self": "true",
                            "ingress.2233629390.to_port": "2514",
                            "ingress.2263064985.cidr_blocks.#": "0",
                            "ingress.2263064985.from_port": "2013",
                            "ingress.2263064985.protocol": "tcp",
                            "ingress.2263064985.security_groups.#": "1",
                            "ingress.2263064985.security_groups.3206450413": "sg-5dba2338",
                            "ingress.2263064985.self": "false",
                            "ingress.2263064985.to_port": "2014",
                            "ingress.2272744420.cidr_blocks.#": "1",
                            "ingress.2272744420.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.2272744420.from_port": "3306",
                            "ingress.2272744420.protocol": "tcp",
                            "ingress.2272744420.security_groups.#": "0",
                            "ingress.2272744420.self": "false",
                            "ingress.2272744420.to_port": "3306",
                            "ingress.3067810025.cidr_blocks.#": "0",
                            "ingress.3067810025.from_port": "8080",
                            "ingress.3067810025.protocol": "tcp",
                            "ingress.3067810025.security_groups.#": "0",
                            "ingress.3067810025.self": "true",
                            "ingress.3067810025.to_port": "8080",
                            "ingress.3126512125.cidr_blocks.#": "2",
                            "ingress.3126512125.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.3126512125.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "ingress.3126512125.from_port": "22",
                            "ingress.3126512125.protocol": "tcp",
                            "ingress.3126512125.security_groups.#": "0",
                            "ingress.3126512125.self": "false",
                            "ingress.3126512125.to_port": "22",
                            "ingress.3199703634.cidr_blocks.#": "2",
                            "ingress.3199703634.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.3199703634.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "ingress.3199703634.from_port": "2213",
                            "ingress.3199703634.protocol": "tcp",
                            "ingress.3199703634.security_groups.#": "0",
                            "ingress.3199703634.self": "false",
                            "ingress.3199703634.to_port": "2214",
                            "ingress.3743725295.cidr_blocks.#": "1",
                            "ingress.3743725295.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.3743725295.from_port": "9300",
                            "ingress.3743725295.protocol": "tcp",
                            "ingress.3743725295.security_groups.#": "0",
                            "ingress.3743725295.self": "false",
                            "ingress.3743725295.to_port": "9300",
                            "ingress.544931885.cidr_blocks.#": "2",
                            "ingress.544931885.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.544931885.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "ingress.544931885.from_port": "8000",
                            "ingress.544931885.protocol": "tcp",
                            "ingress.544931885.security_groups.#": "0",
                            "ingress.544931885.self": "false",
                            "ingress.544931885.to_port": "8000",
                            "ingress.648642388.cidr_blocks.#": "4",
                            "ingress.648642388.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.648642388.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "ingress.648642388.cidr_blocks.2": "xxx.xxx.xxx.xxx",
                            "ingress.648642388.cidr_blocks.3": "xxx.xxx.xxx.xxx",
                            "ingress.648642388.from_port": "8080",
                            "ingress.648642388.protocol": "tcp",
                            "ingress.648642388.security_groups.#": "0",
                            "ingress.648642388.self": "false",
                            "ingress.648642388.to_port": "8080",
                            "ingress.795105269.cidr_blocks.#": "2",
                            "ingress.795105269.cidr_blocks.0": "xxx.xxx.xxx.xxx",
                            "ingress.795105269.cidr_blocks.1": "xxx.xxx.xxx.xxx",
                            "ingress.795105269.from_port": "80",
                            "ingress.795105269.protocol": "tcp",
                            "ingress.795105269.security_groups.#": "0",
                            "ingress.795105269.self": "false",
                            "ingress.795105269.to_port": "80",
                            "name": "graphite_sg",
                            "owner_id": "133124267079",
                            "tags.#": "0",
                            "vpc_id": "vpc-1033c37e"
                        }
                    }
                },

terraform plan output:

~ aws_security_group.sg-fd249099-graphite_sg
    egress.#:                            "13" => "15"
    egress.1163740523.cidr_blocks.#:     "1" => "1"
    egress.1163740523.cidr_blocks.0:     "xxx.xxx.xxx.xxx" => "xxx.xxx.xxx.xxx"
    egress.1163740523.from_port:         "3306" => "3306"
    egress.1163740523.protocol:          "tcp" => "tcp"
    egress.1163740523.security_groups.#: "0" => "0"
    egress.1163740523.self:              "0" => "0"
    egress.1163740523.to_port:           "3306" => "3306"
    egress.1265039627.cidr_blocks.#:     "0" => "0"
    egress.1265039627.from_port:         "" => "9300"
    egress.1265039627.protocol:          "" => "tcp"
    egress.1265039627.security_groups.#: "0" => "0"
    egress.1265039627.self:              "" => "1"
    egress.1265039627.to_port:           "" => "9300"
    egress.1455892873.cidr_blocks.#:     "0" => "0"
    egress.1455892873.from_port:         "8000" => "8000"
    egress.1455892873.protocol:          "tcp" => "tcp"
    egress.1455892873.security_groups.#: "0" => "0"
    egress.1455892873.self:              "1" => "1"
    egress.1455892873.to_port:           "8000" => "8000"
    egress.1492328782.cidr_blocks.#:     "0" => "0"
    egress.1492328782.from_port:         "2023" => "2023"
    egress.1492328782.protocol:          "tcp" => "tcp"
    egress.1492328782.security_groups.#: "0" => "0"
    egress.1492328782.self:              "1" => "1"
    egress.1492328782.to_port:           "2024" => "2024"
    egress.1507188325.cidr_blocks.#:     "0" => "0"
    egress.1507188325.from_port:         "22" => "22"
    egress.1507188325.protocol:          "tcp" => "tcp"
    egress.1507188325.security_groups.#: "0" => "0"
    egress.1507188325.self:              "1" => "1"
    egress.1507188325.to_port:           "22" => "22"
    egress.1626615574.cidr_blocks.#:     "0" => "0"
    egress.1626615574.from_port:         "2003" => "2003"
    egress.1626615574.protocol:          "tcp" => "tcp"
    egress.1626615574.security_groups.#: "0" => "0"
    egress.1626615574.self:              "1" => "1"
    egress.1626615574.to_port:           "2004" => "2004"
    egress.1844393598.cidr_blocks.#:     "0" => "0"
    egress.1844393598.from_port:         "80" => "80"
    egress.1844393598.protocol:          "tcp" => "tcp"
    egress.1844393598.security_groups.#: "0" => "0"
    egress.1844393598.self:              "1" => "1"
    egress.1844393598.to_port:           "80" => "80"
    egress.2096605242.cidr_blocks.#:     "0" => "0"
    egress.2096605242.from_port:         "2013" => "2013"
    egress.2096605242.protocol:          "tcp" => "tcp"
    egress.2096605242.security_groups.#: "0" => "0"
    egress.2096605242.self:              "1" => "1"
    egress.2096605242.to_port:           "2014" => "2014"
    egress.2119869617.cidr_blocks.#:     "0" => "0"
    egress.2119869617.from_port:         "11211" => "11211"
    egress.2119869617.protocol:          "tcp" => "tcp"
    egress.2119869617.security_groups.#: "0" => "0"
    egress.2119869617.self:              "1" => "1"
    egress.2119869617.to_port:           "11211" => "11211"
    egress.2232337395.cidr_blocks.#:     "1" => "0"
    egress.2232337395.cidr_blocks.0:     "xxx.xxx.xxx.xxx" => ""
    egress.2232337395.from_port:         "9300" => "0"
    egress.2232337395.protocol:          "tcp" => ""
    egress.2232337395.security_groups.#: "0" => "0"
    egress.2232337395.self:              "1" => "0"
    egress.2232337395.to_port:           "9300" => "0"
    egress.2233629390.cidr_blocks.#:     "0" => "0"
    egress.2233629390.from_port:         "2513" => "2513"
    egress.2233629390.protocol:          "tcp" => "tcp"
    egress.2233629390.security_groups.#: "0" => "0"
    egress.2233629390.self:              "1" => "1"
    egress.2233629390.to_port:           "2514" => "2514"
    egress.3067810025.cidr_blocks.#:     "0" => "0"
    egress.3067810025.from_port:         "" => "8080"
    egress.3067810025.protocol:          "" => "tcp"
    egress.3067810025.security_groups.#: "0" => "0"
    egress.3067810025.self:              "" => "1"
    egress.3067810025.to_port:           "" => "8080"
    egress.3167104115.cidr_blocks.#:     "0" => "0"
    egress.3167104115.from_port:         "443" => "443"
    egress.3167104115.protocol:          "tcp" => "tcp"
    egress.3167104115.security_groups.#: "0" => "0"
    egress.3167104115.self:              "1" => "1"
    egress.3167104115.to_port:           "443" => "443"
    egress.3199703634.cidr_blocks.#:     "2" => "2"
    egress.3199703634.cidr_blocks.0:     "xxx.xxx.xxx.xxx" => "xxx.xxx.xxx.xxx"
    egress.3199703634.cidr_blocks.1:     "xxx.xxx.xxx.xxx" => "xxx.xxx.xxx.xxx"
    egress.3199703634.from_port:         "2213" => "2213"
    egress.3199703634.protocol:          "tcp" => "tcp"
    egress.3199703634.security_groups.#: "0" => "0"
    egress.3199703634.self:              "0" => "0"
    egress.3199703634.to_port:           "2214" => "2214"
    egress.3495437599.cidr_blocks.#:     "2" => "0"
    egress.3495437599.cidr_blocks.0:     "xxx.xxx.xxx.xxx" => ""
    egress.3495437599.cidr_blocks.1:     "xxx.xxx.xxx.xxx" => ""
    egress.3495437599.from_port:         "8080" => "0"
    egress.3495437599.protocol:          "tcp" => ""
    egress.3495437599.security_groups.#: "0" => "0"
    egress.3495437599.self:              "1" => "0"
    egress.3495437599.to_port:           "8080" => "0"
    egress.3685790759.cidr_blocks.#:     "0" => "2"
    egress.3685790759.cidr_blocks.0:     "" => "xxx.xxx.xxx.xxx"
    egress.3685790759.cidr_blocks.1:     "" => "xxx.xxx.xxx.xxx"
    egress.3685790759.from_port:         "" => "8080"
    egress.3685790759.protocol:          "" => "tcp"
    egress.3685790759.security_groups.#: "0" => "0"
    egress.3685790759.self:              "" => "0"
    egress.3685790759.to_port:           "" => "8080"
    egress.3743725295.cidr_blocks.#:     "0" => "1"
    egress.3743725295.cidr_blocks.0:     "" => "xxx.xxx.xxx.xxx"
    egress.3743725295.from_port:         "" => "9300"
    egress.3743725295.protocol:          "" => "tcp"
    egress.3743725295.security_groups.#: "0" => "0"
    egress.3743725295.self:              "" => "0"
    egress.3743725295.to_port:           "" => "9300"

Introduce Plugin-based architecture

WHY

Terraform itself supports a lot of providers, e.g. AWS, DigitalOcean, GCE, DNSimple ... Terraforming should also support them in the future.

I created terraforming-dnsimple to export DNSimple records & zones into tf & tfstate. This is an independent CLI tool, and may be incompatible with the (AWS-only) Terraforming as it stands. We want to use other providers with the same terraforming command; however, I don't want to include other providers' dependencies in this repository.

WHAT

Implement a plugin loader in Terraforming. The plugin loader loads gems named terraforming-{provider} automatically.

This is sample code to find the latest terraforming-{provider} gems among the installed gems.

# Collect all installed terraforming-{provider} gems, then keep only
# the newest version of each one.
specs = Gem::Specification
          .find_all
          .select   { |spec| spec.name =~ /\Aterraforming-.+\z/ }
          .group_by(&:name)
          .values
          .map      { |gems| gems.sort_by(&:version).last }
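
A plugin loader could then require each discovered gem — a minimal sketch, assuming each plugin gem's entry file matches its gem name:

# Hypothetical loader: load the newest version of every plugin gem found above.
specs.each do |spec|
  require spec.name  # e.g. require "terraforming-dnsimple"
end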

Also, remove the CLI from the current terraforming-dnsimple, so that it contains only the library.

Ox gem 2.2.2 fails

Using the ox gem 2.2.2 fails with a NameError, while 2.2.1 and earlier work. Steps to reproduce:

: dklopp ; gem install ox -v 2.2.2
Fetching: ox-2.2.2.gem (100%)
Building native extensions. This could take a while...
Successfully installed ox-2.2.2
Parsing documentation for ox-2.2.2
Installing ri documentation for ox-2.2.2
Done installing documentation for ox after 0 seconds
1 gem installed
: dklopp ; terraforming
/Users/dklopp/.rbenv/versions/2.2.2/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:69:in `require': uninitialized constant Ox::Raw (NameError)
    from /Users/dklopp/.rbenv/versions/2.2.2/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:69:in `require'
    from /Users/dklopp/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/terraforming-0.6.2/lib/terraforming.rb:2:in `<top (required)>'
    from /Users/dklopp/.rbenv/versions/2.2.2/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:69:in `require'
    from /Users/dklopp/.rbenv/versions/2.2.2/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:69:in `require'
    from /Users/dklopp/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/terraforming-0.6.2/bin/terraforming:3:in `<top (required)>'
    from /Users/dklopp/.rbenv/versions/2.2.2/bin/terraforming:23:in `load'
    from /Users/dklopp/.rbenv/versions/2.2.2/bin/terraforming:23:in `<main>'
: dklopp ; gem uninstall ox

You have requested to uninstall the gem:
ox-2.2.2

terraforming-0.6.2 depends on ox (>= 0)
If you remove this gem, these dependencies will not be met.
Continue with Uninstall? [yN] y
Successfully uninstalled ox-2.2.2
: dklopp ; gem install ox -v 2.2.1
Fetching: ox-2.2.1.gem (100%)
Building native extensions. This could take a while...
Successfully installed ox-2.2.1
Parsing documentation for ox-2.2.1
Installing ri documentation for ox-2.2.1
Done installing documentation for ox after 0 seconds
1 gem installed
: dklopp ; terraforming
Commands:
  terraforming asg             # AutoScaling Group
  terraforming dbpg            # Database Parameter Group
  terraforming dbsg            # Database Security Group
  terraforming dbsn            # Database Subnet Group
  terraforming ec2             # EC2
  terraforming ecc             # ElastiCache Cluster
  terraforming ecsn            # ElastiCache Subnet Group
  terraforming eip             # EIP
  terraforming elb             # ELB
  terraforming help [COMMAND]  # Describe available commands or one specific command
  terraforming iamg            # IAM Group
  terraforming iamgm           # IAM Group Membership
  terraforming iamgp           # IAM Group Policy
  terraforming iamip           # IAM Instance Profile
  terraforming iamp            # IAM Policy
  terraforming iamr            # IAM Role
  terraforming iamrp           # IAM Role Policy
  terraforming iamu            # IAM User
  terraforming iamup           # IAM User Policy
  terraforming nacl            # Network ACL
  terraforming nif             # Network Interface
  terraforming r53r            # Route53 Record
  terraforming r53z            # Route53 Hosted Zone
  terraforming rds             # RDS
  terraforming rt              # Route Table
  terraforming rta             # Route Table Association
  terraforming s3              # S3
  terraforming sg              # Security Group
  terraforming sn              # Subnet
  terraforming vpc             # VPC

Options:
  [--merge=MERGE]                  # tfstate file to merge
  [--overwrite], [--no-overwrite]  # Overwrite existing tfstate
  [--tfstate], [--no-tfstate]      # Generate tfstate
  [--profile=PROFILE]              # AWS credentials profile

: dklopp ;
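
Until the ox regression is fixed, a workaround is to pin the last known-good version in your Gemfile — a sketch based on the versions reported above:

# Gemfile: ox 2.2.2 raises NameError (uninitialized constant Ox::Raw);
# 2.2.1 and earlier are reported to work.
gem "terraforming"
gem "ox", "2.2.1"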

name clash for 'default' security groups when combining classic + vpc, or several vpcs

When using terraforming sg in a context where you have either classic + VPC or several VPCs, terraforming creates multiple resources named "default" (i.e., several resource "aws_security_group" "default" blocks).

In my opinion there are several ways to solve this (they can be combined):

  • provide an option to ignore default groups in the resource generation + state generation
  • automatically ignore default groups if they have not been modified
  • use the 'vpc_id' as a prefix for VPC default groups (see the sketch after this list)
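
A minimal sketch of the vpc_id-prefix idea (a hypothetical helper, not current terraforming behavior):

# Disambiguate "default" groups by prefixing the VPC ID; an EC2-Classic
# default group (no vpc_id) keeps the plain name.
def resource_name_of(security_group)
  return security_group.group_name unless security_group.group_name == "default"

  [security_group.vpc_id, "default"].compact.join("-")  # e.g. "vpc-1033c37e-default"
end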

empty option "iops" on root_block_device causes syntax error

On root_block_device with volume type "standard", iops is normally not provided. The output generated by terraforming, however, contains an iops attribute set to nothing, which causes a syntax error.

    root_block_device {
        volume_type           = "standard"
        volume_size           = 8
        iops                  =
        delete_on_termination = true
    }

This causes a syntax error when running terraform plan. Removing the empty iops definition resolves it, as shown below.
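
For reference, the block parses fine once the empty attribute is dropped:

    root_block_device {
        volume_type           = "standard"
        volume_size           = 8
        delete_on_termination = true
    }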

Use aws-sdk

WHAT

Use the aws-sdk gem to fetch resource information instead of parsing awscli output (see the sketch after the TODO list).

TODO

  • Database Parameter Group #9
  • Database Security Group #7
  • Database Parameter Group #8
  • EC2 #19
  • ELB #5
  • RDS #3
  • S3 #4
  • Security Group #11
  • VPC #18
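
For illustration, a minimal sketch of the SDK-based approach for one resource type, assuming the aws-sdk-core v2 client and credentials from the usual environment variables:

require "aws-sdk-core"

# Fetch security groups through the SDK instead of shelling out to awscli.
client = Aws::EC2::Client.new
client.describe_security_groups.security_groups.each do |sg|
  puts "#{sg.group_id} #{sg.group_name}"
end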

$ in IAM policies needs to be escaped as $$ for terraform

I have some IAM policies that contain resources with variables, like the following:

      "Resource": [
        "arn:aws:iam::123456789:mfa/${aws:username}"
      ]

This produces a terraform syntax error:

Error loading config: Error loading Developers.tf: Error reading config for aws_iam_group_policy[Manage-Personal-MFA]: parse error: syntax error

The fix is to escape the policy variable as $$:

"arn:aws:iam::123456789:mfa/${aws:username}"

Mac OS X : certificate verify failed (Seahorse::Client::NetworkingError)

Hello,

On Mac OS X I have the following error:

/Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:923:in `connect': SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (Seahorse::Client::NetworkingError)
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:923:in `block in connect'
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:89:in `block in timeout'
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:99:in `call'
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:99:in `timeout'
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:923:in `connect'
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:863:in `do_start'
    from /Users/llange/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:858:in `start'
...

It seems linked to a known aws-sdk-core issue, and the fix suggested there does indeed work.

If there is no adverse impact, could you please add the suggested fix?

I added it to master/lib/terraforming.rb, just after the "require":

@@ -2,6 +2,7 @@
 require "ox"

 require "aws-sdk-core"
+Aws.use_bundled_cert!

 require "erb"
 require "json"

Thanks.

terraforming should import connected resources using variables

Right now, terraforming ec2 produces a hardcoded list of related resources' internal AWS identifiers. But Terraform supports variables, which allow you to, say, create security groups and EC2 instances in one terraform run, as opposed to creating these dependencies first and then updating the EC2 resources' security group IDs in your aws_instance blocks.

terraforming should support importing connected resources, such as security groups, and connecting them to parent resources using variables, as in the example below.
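
For example (a hypothetical instance block; the IDs are taken from the state above), instead of emitting a hardcoded identifier:

    resource "aws_instance" "web" {
        vpc_security_group_ids = ["sg-fd249099"]
    }

terraforming could emit an interpolated reference to the exported group:

    resource "aws_instance" "web" {
        vpc_security_group_ids = ["${aws_security_group.graphite_sg.id}"]
    }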
