ec2-classic-resource-finder's Introduction

EC2 Classic Resource Finder

EC2 Classic Resource Finder 2.0 is here. Read more below.

EC2-Classic networking is retiring. Find out how to prepare here.

We launched Amazon VPC on 5-Sep-2009 as an enhancement over EC2-Classic. While we maintained EC2-Classic in its current state for existing customers, we continuously made improvements and added cutting-edge instances and networking features to Amazon VPC. In the spirit of offering the best customer experience, we firmly believe that all our customers should migrate their resources from EC2-Classic to Amazon VPC. To help determine what resources may be running in EC2-Classic, this script identifies them in an ad-hoc, self-service manner. For more information on migrating to VPC, visit our docs.

Version 2.0 of this script is now available, named py-Classic-Resource-Finder.py. This new iteration still loops through all regions where EC2-Classic is supported and determines whether EC2-Classic is enabled and what, if any, resources are running or configured to run in EC2-Classic. The multi-account wrapper is now built in and is driven by command-line arguments, and use of multiple AWS credential profiles is now supported. The script outputs a set of CSVs into a folder created for each account it is run against. It is now written in Python, uses Boto3, and runs with multiprocessing to improve runtimes. Please note that because it runs multiple processes simultaneously, it may consume more CPU; it is suggested not to run it on an instance or computer hosting critical workloads that could be starved of computational resources while it runs. Additionally, this version fixes an issue in the version 1 script where AWS Elastic Beanstalk environments with a space in the name could render a false positive. Any errors recorded in the error CSV should be investigated to determine whether the output is still accurate.

Known issues / Notes:

  • If you are running Elastic Beanstalk environments in the default VPC without specifying a VPC, this may produce a false positive.
  • If you are creating and terminating resources regularly, such as EMR clusters, this script does not identify terminated resources. If you have resources such as DataPipelines or AutoScaling Groups which create and terminate Classic EC2 Instances, as long as the DataPipeline or AutoScaling Group exists at the time the script is run it will be identified as configured to launch Classic resources, even if no Classic EC2 Instances are currently running.
  • Classic Load Balancers running in a VPC are not in scope for this retirement. Only Classic Load Balancers that are not in a VPC, and are therefore running in EC2-Classic, need to be migrated to a VPC as part of this retirement.
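The scoping rule in the last bullet can be expressed as a small filter. The helper below is an illustrative sketch based on the ELB `DescribeLoadBalancers` response shape (a CLB in a VPC carries a `VPCId` field); it is not code from the script:

```python
def clb_in_ec2_classic(lb_description):
    """Return True when a Classic Load Balancer description lacks a VPCId,
    meaning the CLB runs in EC2-Classic and is in scope for the retirement."""
    return not lb_description.get("VPCId")


# Example: filter descriptions like those returned by
# elb_client.describe_load_balancers()["LoadBalancerDescriptions"]
in_scope = [
    lb for lb in [
        {"LoadBalancerName": "legacy-clb"},                       # EC2-Classic
        {"LoadBalancerName": "vpc-clb", "VPCId": "vpc-0abc123"},  # in a VPC
    ]
    if clb_in_ec2_classic(lb)
]
```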

Requirements

This script is designed to run on Python 3 and requires Boto3. Credentials must be pre-configured using the AWS CLI or, if running on Amazon EC2, an instance IAM profile. You can read more about how to pre-authenticate here.

  • To install Boto3, follow the instructions here

Outputs

Currently, this iterates through all EC2 regions which support EC2-Classic and creates the following CSVs prepended with the date and time in a folder for each account it is run against:

| File Name | Description | Output |
| --- | --- | --- |
| Classic_Platform_Status.csv | Regions with the ability to launch resources into EC2-Classic | Region, Status (Enabled, Disabled) |
| Classic_EIPs.csv | Elastic IPs allocated for EC2-Classic | IP Address, Region |
| Classic_EC2_Instances.csv | EC2 instances provisioned in EC2-Classic | Instance ID, Region |
| Classic_SGs.csv | Security groups configured in EC2-Classic | Security Group ID, Region |
| Classic_ClassicLink_VPCs.csv | VPCs with ClassicLink enabled | VPC ID, Region |
| Classic_Auto_Scaling_Groups.csv | Auto Scaling groups configured to launch EC2 instances into EC2-Classic | ASG ARN, Region |
| Classic_CLBs.csv | Classic Load Balancers provisioned in EC2-Classic | CLB Name, Region |
| Classic_RDS_Instances.csv | RDS database instances provisioned in EC2-Classic | DB Instance ARN, Region |
| Classic_ElastiCache_Clusters.csv | ElastiCache clusters provisioned in EC2-Classic | Cluster ARN, Region |
| Classic_Redshift_Clusters.csv | Redshift clusters provisioned in EC2-Classic | Cluster Identifier, Region |
| Classic_ElasticBeanstalk_Applications_Environments.csv | Elastic Beanstalk applications and environments configured to run in EC2-Classic | Application Name, Environment Name, Region |
| Classic_DataPipelines.csv | Data Pipelines configured to launch instances in EC2-Classic | Pipeline ID, Region |
| Classic_EMR_Clusters.csv | EMR clusters that may be configured to launch instances in EC2-Classic | Cluster ID, Region |
| Classic_OpsWorks_Stacks.csv | OpsWorks stacks that have resources configured for EC2-Classic | Stack ID, Region |
| Error.txt | Any errors encountered while running the script | Text of error output |
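As an illustration of where the Enabled/Disabled value in Classic_Platform_Status.csv comes from: the EC2 `supported-platforms` account attribute lists "EC2" only in regions where the account can still launch into EC2-Classic. The sketch below assumes pre-configured credentials; the function names are hypothetical, not taken from the script:

```python
def status_from_platforms(platforms):
    """Map the supported-platforms attribute values to the CSV's status column."""
    return "Enabled" if "EC2" in platforms else "Disabled"


def classic_platform_status(region):
    """Query the EC2 supported-platforms account attribute for one region."""
    import boto3  # imported lazily so the pure helper above works without boto3

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_account_attributes(AttributeNames=["supported-platforms"])
    platforms = [
        value["AttributeValue"]
        for attr in resp["AccountAttributes"]
        for value in attr["AttributeValues"]
    ]
    return status_from_platforms(platforms)
```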

Permissions

The script requires IAM permissions, which can be configured using either `aws configure` or an IAM role on EC2. The following permissions are required (against all resources):

  • autoscaling:DescribeAutoScalingGroups
  • datapipeline:GetPipelineDefinition
  • datapipeline:ListPipelines
  • ec2:DescribeAccountAttributes
  • ec2:DescribeAddresses
  • ec2:DescribeInstances
  • ec2:DescribeRegions
  • ec2:DescribeSecurityGroups
  • ec2:DescribeVpcClassicLink
  • elasticbeanstalk:DescribeConfigurationSettings
  • elasticbeanstalk:DescribeEnvironments
  • elasticache:DescribeCacheClusters
  • elasticloadbalancing:DescribeLoadBalancers
  • elasticmapreduce:DescribeCluster
  • elasticmapreduce:ListBootstrapActions
  • elasticmapreduce:ListClusters
  • elasticmapreduce:ListInstanceGroups
  • rds:DescribeDBInstances
  • redshift:DescribeClusters
  • opsworks:DescribeStacks
  • sts:GetCallerIdentity
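For convenience, the action list above can be assembled into a minimal IAM policy document. This is an illustrative sketch, not a policy shipped with the repository:

```python
import json

# Action list copied from the README section above.
REQUIRED_ACTIONS = [
    "autoscaling:DescribeAutoScalingGroups",
    "datapipeline:GetPipelineDefinition",
    "datapipeline:ListPipelines",
    "ec2:DescribeAccountAttributes",
    "ec2:DescribeAddresses",
    "ec2:DescribeInstances",
    "ec2:DescribeRegions",
    "ec2:DescribeSecurityGroups",
    "ec2:DescribeVpcClassicLink",
    "elasticbeanstalk:DescribeConfigurationSettings",
    "elasticbeanstalk:DescribeEnvironments",
    "elasticache:DescribeCacheClusters",
    "elasticloadbalancing:DescribeLoadBalancers",
    "elasticmapreduce:DescribeCluster",
    "elasticmapreduce:ListBootstrapActions",
    "elasticmapreduce:ListClusters",
    "elasticmapreduce:ListInstanceGroups",
    "rds:DescribeDBInstances",
    "redshift:DescribeClusters",
    "opsworks:DescribeStacks",
    "sts:GetCallerIdentity",
]


def policy_document(actions):
    """Assemble a minimal IAM policy document granting the given actions
    against all resources."""
    return json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {"Effect": "Allow", "Action": sorted(actions), "Resource": "*"}
            ],
        },
        indent=2,
    )
```

The resulting JSON can be pasted into a customer-managed policy; the Elastic Beanstalk and multi-account actions in the following sections would need to be appended if you use those features.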

### ElasticBeanstalk-Specific Permissions

If you are utilizing Elastic Beanstalk, you will need the following additional permissions to identify environments and applications configured to launch resources in EC2-Classic. If you do not utilize Elastic Beanstalk, you can ignore the permissions below; the script will still run successfully for all other services and produce an empty CSV for Elastic Beanstalk.

  • autoscaling:DescribeAutoScalingInstances
  • autoscaling:DescribeLaunchConfigurations
  • autoscaling:DescribeScheduledActions
  • cloudformation:DescribeStackResource
  • cloudformation:DescribeStackResources
  • ec2:DescribeImages
  • ec2:DescribeSubnets
  • ec2:DescribeVpcs
  • ec2:CreateLaunchTemplate
  • ec2:CreateLaunchTemplateVersion
  • rds:DescribeDBEngineVersions
  • rds:DescribeOrderableDBInstanceOptions
  • s3:ListAllMyBuckets

The following permissions, which allow the identification of Elastic Beanstalk environments that launch resources in EC2-Classic, can be limited to the resource `arn:aws:s3:::elasticbeanstalk-*`:

  • s3:GetObject
  • s3:GetObjectAcl
  • s3:GetObjectVersion
  • s3:GetObjectVersionAcl
  • s3:GetBucketLocation
  • s3:GetBucketPolicy
  • s3:ListBucket

### Multi-Account Permissions

  • organizations:ListAccounts
  • sts:AssumeRole

Requirements for multi-account usage

  • The IAM user which runs the script must be able to assume the role specified in each account in the organization (If STS AssumeRole fails, we simply skip running the input script against that account and print an error to the standard output)
  • The role name for the role being called must exist in every AWS account within the organization and have the same name (If STS AssumeRole fails, we simply skip running the input script against that account)
  • The role being called must have permissions to run all commands specified in the script. (For py-Classic-Resource-Finder see the permissions section above.)
  • If ExternalID is required, you must specify the value in the input for Multi-Account-Wrapper
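The AssumeRole flow these requirements describe can be sketched as follows. The helper names and session name are hypothetical, and the error handling mirrors the documented skip-and-report behavior:

```python
def build_assume_role_args(account_id, role_name, external_id=None):
    """Build the arguments for sts.assume_role, adding ExternalId only when given."""
    args = {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": "classic-resource-finder",  # hypothetical session name
    }
    if external_id is not None:
        args["ExternalId"] = external_id
    return args


def credentials_for_account(account_id, role_name, external_id=None):
    """Assume the role in one account, or return None (and report) on failure."""
    import boto3
    from botocore.exceptions import ClientError

    sts = boto3.client("sts")
    try:
        args = build_assume_role_args(account_id, role_name, external_id)
        return sts.assume_role(**args)["Credentials"]
    except ClientError as err:
        print(f"Skipping account {account_id}: {err}")  # skip the account, as documented
        return None
```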

Command line arguments

py-Classic-Resource-Finder.py can be called without any arguments; it will then run against the account for the default configured credentials.

All Accounts in an Organization

#### With an External ID

```shell
python3 py-Classic-Resource-Finder.py -o -r <role name> -e <external ID>
```

or

```shell
python3 py-Classic-Resource-Finder.py --organization --rolename <role name> --externalid <external ID>
```

#### Without an External ID

```shell
python3 py-Classic-Resource-Finder.py -o -r <role name>
```

or

```shell
python3 py-Classic-Resource-Finder.py --organization --rolename <role name>
```

Use Profile[s] in the Credential File

#### Single Profile

```shell
python3 py-Classic-Resource-Finder.py -p <profile name>
```

or

```shell
python3 py-Classic-Resource-Finder.py --profile <profile name>
```

#### Multiple Profiles

Use a comma-delimited list of profile names. Do not put spaces around the commas.

```shell
python3 py-Classic-Resource-Finder.py -p <profile name 1>,<profile name 2>,<profile name 3>
```

or

```shell
python3 py-Classic-Resource-Finder.py --profile <profile name 1>,<profile name 2>,<profile name 3>
```
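For context, a comma-delimited argument like this is typically split verbatim. The hypothetical sketch below (not the script's actual parsing) shows why a space after a comma produces a profile name that does not exist:

```python
def parse_profiles(arg):
    """Split a comma-delimited profile argument without trimming whitespace."""
    return arg.split(",")


# "dev, prod" yields a second profile named " prod" (note the leading space),
# which will not match any profile in the credentials file.
```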

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.

ec2-classic-resource-finder's People

Contributors

amazon-auto, thescottmo


ec2-classic-resource-finder's Issues

Script is failing today when attempting to connect to certain regions


```
Could not connect to the endpoint URL: "https://datapipeline.us-west-1.amazonaws.com/"
Could not connect to the endpoint URL: "https://datapipeline.ap-southeast-1.amazonaws.com/"
Could not connect to the endpoint URL: "https://datapipeline.sa-east-1.amazonaws.com/"
```

Interestingly, it seems that I am unable to perform an nslookup for the three DNS names above. Other regions seem to resolve properly. Are others experiencing this issue?

Empty CSV files?

Hello!

All the generated CSVs are always empty, except for the "_timestamp__Classic_Platform_Status.csv", which lists this:

```
us-east-1, Disabled
us-west-1, Disabled
us-west-2, Disabled
eu-west-1, Disabled
ap-southeast-1, Disabled
ap-southeast-2, Disabled
ap-northeast-1, Disabled
sa-east-1, Disabled
```

Also, the error text file is empty.

I tried with

  • python3 py-Classic-Resource-Finder.py -p default
  • python3 py-Classic-Resource-Finder.py -p "default"
  • python3 py-Classic-Resource-Finder.py -p 'default'
  • python3 py-Classic-Resource-Finder.py

But the result is always the same (empty files except for "_Classic_Platform_Status.csv"). This is strange, as the AWS account has at least four Classic Load Balancers in eu-west-1.

Any ideas? Thanks!

(The files are generated in a folder with the AWS account id as name. And on the CLI, I can still do "aws ec2 describe-volume-status", "aws s3 ls", "aws ec2 describe-vpc-classic-link", ... . So there's nothing fishy with my cli.)

a continue statement will make this run faster for regions without classic

thanks for this script. good stuff. just wanted to add that adding a continue statement after line 59 will avoid running unnecessary code, so it will go a lot faster by restarting at the for loop.

as follows:

```shell
59 else printf "$region, Disabled\n" >> Classic_Platform_Status.csv ## If supported platforms is only VPC and does not include EC2, output the region and Disabled to a CSV
60     printf "$region, Classic disabled, skipping checks.\n"
61     continue
```

More details for Unknown status

Hi,

In the Outputs section of the README, the entry for the Classic_Platform_Status.csv file may need to also include the "Unknown" status, with a reminder to check the errors.txt file.
And/or, when the script ends and an errors.txt file exists, print a warning; right now you only see green marks even if everything fails, which gives the impression that all is OK.

Hope it helps.

China region is not supported

China AWS regions are not supported by the Python and shell scripts.
When I added the China-specific regions, it did not work either; the API probably needs to be updated as well. Not sure.

```shell
declare -a regions=('cn-north-1' 'cn-northwest-1')
```

```python
classicregions = ('cn-north-1', 'cn-northwest-1',)
```

Expected behavior

Lists contain Classic resources.

Current output

All lists are empty.

error.log file


```
An error occurred (UnsupportedOperation) when calling the DescribeVpcClassicLink operation: The functionality you requested is not available in this region.
Could not connect to the endpoint URL: "https://datapipeline.cn-north-1.amazonaws.com.cn/"
Could not connect to the endpoint URL: "https://opsworks.cn-north-1.amazonaws.com.cn/"
An error occurred (UnsupportedOperation) when calling the DescribeVpcClassicLink operation: The functionality you requested is not available in this region.
Could not connect to the endpoint URL: "https://datapipeline.cn-northwest-1.amazonaws.com.cn/"
Could not connect to the endpoint URL: "https://opsworks.cn-northwest-1.amazonaws.com.cn/"
```

Missing permissions for elasticloadbalancing and elasticmapreduce

Readme says I need to have these permissions:

elasticloadbalancing:DescribeLoadBalancers
elasticmapreduce:DescribeCluster
elasticmapreduce:ListBootstrapActions
elasticmapreduce:ListClusters
elasticmapreduce:ListInstanceGroups
opsworks:DescribeStacks

Editing my policy in the IAM Management Console, there doesn't seem to be any service for elasticloadbalancing and elasticmapreduce (unless they're named completely differently from these keys).

In addition, when I try to add opsworks:DescribeStacks, it is not set upon saving.

When I run the tool, I get a few errors saying I'm missing:

  • elasticloadbalancing:DescribeLoadBalancers
  • elasticmapreduce:ListClusters
  • opsworks:DescribeStacks

Not sure if this indicates these things are already retired, or I'm not using them or what.

Why does it flag default security groups as EC2 Classic resources?

I ran the script, and it picked out one region to give something unusual.

Most of the CSVs are empty, except for Classic_Platform_Status.csv and Classic_SGs.csv ; so I found that Classic_SGs.csv contains a solitary security group named "default" in us-east-1, whereas I have no other resources in that region.

I saw that there is a closed issue (#18) where the person requested more info on how to deal with such reports of a default security group being included in Classic_SGs.csv for no apparent reason.

Could the script be fixed instead to not include these default security groups in Classic_SGs.csv? I don't understand why it includes the default security group in this particular region but not in the other regions.

I also have a related stackoverflow question about this: https://stackoverflow.com/questions/69787759/ec2-classic-resource-that-cant-be-deleted-now-what?noredirect=1#comment123433839_69787759
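For context on why the "default" group shows up at all: in the EC2 `DescribeSecurityGroups` response, a `VpcId` is present only for groups that live in a VPC, so a group without one (including the un-deletable Classic "default" group in a Classic-enabled region) is an EC2-Classic group. A sketch of that distinction, based on an assumption about the API shape rather than the script's actual check:

```python
def is_classic_security_group(sg):
    """True when a DescribeSecurityGroups entry has no VpcId, i.e. EC2-Classic."""
    return not sg.get("VpcId")
```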

Prompts for MFA on every region in every account

My AWS config requires MFA to assume a role.

I can understand if this tool prompted for MFA each time it assumed a role, once for each account (even though that is not necessary if implemented right).

However, this tool prompts once for every region in every account. This makes the tool unusable for me.

ElasticBeanstalk Environments with Classic LBs are not detected

The resource finder only notifies you about an Elastic Beanstalk environment containing an active Classic Load Balancer if the environment isn't in a VPC. Since all CLBs are going away (per a direct inquiry to Amazon support), the script should be updated to notice that an environment is utilizing one.

The Classic LBs also do not show up in Classic_CLBs.csv.

More information about default security groups

Hi,

There's some missing information on how to deal with default security groups.
After several runs and cleanups, all the CSV files were empty (you need to remove the files between runs, as the script appends data) except Classic_SGs.csv, which lists several SGs in different regions. All of them are "default", and when trying to delete one in the console this message is displayed:

```
This is a default security group. Default security groups cannot be deleted
```

It would help if you could add some information on how to deal with this, and whether it's OK to request a conversion to VPC-only in this case.

Another small improvement that may help when running the script several times: if you append a timestamp to each line in the logs, it is easier to see the differences between runs.

Why do ElasticBeanstalk environments that are Terminated show up in `Classic_ElasticBeanstalk_Applications_Environments.csv`?

We just started to run this script on our accounts/environments, and we are noticing 'flapping' where the tool flags Elastic Beanstalk environments for not having a VPC, and in each case the environment has recently been terminated. Of course it doesn't have a VPC associated; the resources have been removed.

Looking at this line, I wonder if it should be changed to only consider environments that are Ready?
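The filter this issue proposes could look roughly like the following (a hypothetical helper, not the script's code): only environments whose `Status` is `Ready` would be considered, so recently terminated environments are no longer flagged.

```python
def environments_to_check(environments):
    """Keep only Elastic Beanstalk environments that are in the Ready state."""
    return [env for env in environments if env.get("Status") == "Ready"]


# Example against DescribeEnvironments-shaped records:
envs = [
    {"EnvironmentName": "web-prod", "Status": "Ready"},
    {"EnvironmentName": "web-old", "Status": "Terminated"},
]
candidates = environments_to_check(envs)
```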

Clarification on output of CSVs needed

Hello!

The end output of the script shows: "If no resources were found in EC2-Classic for a service, there was no CSV created."

However, each time this has been run, it has produced empty CSV files for each resource it checked other than the status CSV. When looking at the Classic_Platform_Status.csv, it shows that ec2_classic is disabled in all regions.

(the script was run with creds that have the correct access)

Is the line output at the end of the script correct, and should I expect no CSVs if nothing is found? If not, it should be corrected, and the actual behavior should also be reflected in the README.

Thanks!

AssertionError: cannot start a process twice

First time running this locally - it's a great tool, thanks for writing it! I'm using the latest version.

I encountered this issue on my first try:

```
$ python3 py-Classic-Resource-Finder.py --profile production
Checking the Classic platform status in us-east-1
Checking for EIPs in us-east-1
Checking for Classic EC2 Instances in us-east-1
Checking for Classic Security Groups in us-east-1
Checking for VPCs with ClassicLink enabled in us-east-1
Checking for AutoScaling Groups configured for Classic in us-east-1
Checking for Classic Load Balancers running in EC2-Classic in us-east-1
Checking for Classic RDS Instances in us-east-1
Checking for Classic ElastiCache clusters in us-east-1
Checking for Classic Redshift clusters in us-east-1
Checking for Classic Elastic BeanStalk Environments in us-east-1
Checking for Classic EMR clusters in us-east-1
Checking for Classic OpsWorks stacks in us-east-1
Checking for Classic Data Pipelines in us-east-1
Traceback (most recent call last):
  File "/Users/scott/src/external/ec2-classic-resource-finder/py-Classic-Resource-Finder.py", line 847, in <module>
    main(argparser(sys.argv[1:]))
  File "/Users/scott/src/external/ec2-classic-resource-finder/py-Classic-Resource-Finder.py", line 841, in main
    loopregions(classicregions, datapipelineregions, creddict)
  File "/Users/scott/src/external/ec2-classic-resource-finder/py-Classic-Resource-Finder.py", line 668, in loopregions
    process.start()
  File "/usr/local/Cellar/python@3.9/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/process.py", line 115, in start
    assert self._popen is None, 'cannot start a process twice'
AssertionError: cannot start a process twice
```

As you can see above, I'm running Python 3.9.9:

```
$ python3 --version
Python 3.9.9
```

I did get partial results from the single region and the errors file was empty after switching to my mfa profile.

Any hints?

Fails on Beanstalk applications with a space in the name

Version: ea774c5

We got a Beanstalk application with a space in its name (Public Page), so L220 fails to describe the configuration.

Result:
In errors.txt you'll get:

```
An error occurred (InvalidParameterValue) when calling the DescribeConfigurationSettings operation: No Environment found for EnvironmentName = 'Page'.
```

and in Classic_ElasticBeanstalk_Applications_Environments.csv:

```
Public, Page, eu-west-1
```

Allow regions to be specified

Currently the tool assumes all known AWS Regions, which is great as it makes the analysis complete!

However, if my organization has limited ourselves to specific regions the tool does not take that into consideration. It wastes a TON of time looking at regions that are not interesting.

Provide a means where the user can provide an argument to limit the regions that are validated.

PS. This issue/suggestion would probably never have occurred if some form of process forking were used: the main regions we use take most of the time to process, and the unused ones could easily have been checked in the background, since they never produce any useful information anyway.

Space in EB App Name cause incorrect data

Example App name: "Webserver App"
Example Env Name: "webserver-app-a"

In the script, currently, the delimiter being used for cut is ' ' (space):

```shell
for ebenvapp in `jq -r '.Environments[] | .ApplicationName +" "+ .EnvironmentName' <<< $ebappraw 2>> errors.txt` ## Loop over each application and environment pair
do
    ebapp=`cut -d " " -f1 <<< $ebenvapp 2>> errors.txt` ## Extract the Application name
    ebenv=`cut -d " " -f2 <<< $ebenvapp 2>> errors.txt` ## Extract the Environment name
```

  • this results in ebapp: "Webserver" & ebenv: "App"

Changing this to a delimiter of ':' (colon) is a possible fix, tested in our environment with spaces:

```shell
for ebenvapp in `jq -r '.Environments[] | .ApplicationName +":"+ .EnvironmentName' <<< $ebappraw 2>> errors.txt` ## Loop over each application and environment pair
do
    ebapp=`cut -d ':' -f1 <<< $ebenvapp 2>> errors.txt` ## Extract the Application name
    ebenv=`cut -d ':' -f2 <<< $ebenvapp 2>> errors.txt` ## Extract the Environment name
```

  • this results in ebapp: "Webserver App" & ebenv: "webserver-app-a"

Not sure if this is necessary, but in the next line I also put double quotes ("") around $ebapp:

```shell
ebnsval=`aws elasticbeanstalk describe-configuration-settings --application-name "$ebapp" --environment-name $ebenv --query 'ConfigurationSettings[*].OptionSettings[?Namespace==\`aws:ec2:vpc\`&&OptionName==\`VPCId\`&&Value!=\`null\`].OptionName' --region $region --output text 2>> errors.txt` ## If the environment is configured for a VPC, return "VPCId"
```

False positives on RDS instances

The script currently checks whether any VPC security groups are attached to an RDS instance to determine if it is a Classic instance or not. I believe the API call that performs this check must occasionally fail or be throttled, and the script does not handle this properly, creating a false positive. Each time I ran the script across my organization of several hundred accounts and RDS instances, I got a handful of false positives, which were different each time.
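One way to harden the check described here, assuming the false positives do come from throttled or failed API calls (this is a sketch, not the script's actual code), is to retry retryable errors with exponential backoff before concluding that an instance has no VPC security groups:

```python
import time


def call_with_retries(fn, attempts=5, base_delay=1.0,
                      retryable=("Throttling", "RequestLimitExceeded")):
    """Call fn, retrying with exponential backoff on retryable AWS error codes.

    Errors are recognized by the botocore-style err.response["Error"]["Code"]
    attribute; anything else is re-raised immediately.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:
            code = getattr(err, "response", {}).get("Error", {}).get("Code", "")
            if code not in retryable or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```

A wrapper like this around the DescribeDBInstances call would let transient throttling exhaust its retries before an instance is ever written to Classic_RDS_Instances.csv.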
