
Learning AWS

This readme contains notes from my studies of AWS.


Global infrastructure

We can divide the AWS infrastructure into regions and, inside each region, into availability zones. Each availability zone is a kind of data center. For example, São Paulo is one of these regions, and it has 3 availability zones (July 2021).

image

More info about this here.

Account

We need a credit card to create an account. After creating it, we confirm it by email. To increase security, activate Two-Factor Authentication.

2FA

To add two-factor authentication, click on your name at the top right of the screen, then My Security Credentials:

image

After this, click on Activate MFA:

image

Select Virtual MFA device:

image

Finally, scan the QR code with your favorite authenticator app and enter two consecutive codes:

image

Your account is now more secure.
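For an IAM user, a rough CLI equivalent could look like the sketch below; the user name, device name, account ID, and the two consecutive codes are example placeholders:

# create the virtual device and save its QR code (scan it with the app)
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name my-mfa \
    --outfile qr-code.png --bootstrap-method QRCodePNG
# enable it with two consecutive codes from the app
aws iam enable-mfa-device \
    --user-name my-user \
    --serial-number arn:aws:iam::123456789012:mfa/my-mfa \
    --authentication-code1 123456 \
    --authentication-code2 654321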

Billing alerts

Sometimes things go wrong: a service can keep running without our control and produce a big bill at the end of the month. To avoid this, we can create a billing alert.

To do this, click on your name at the top right of the screen, then My Billing Dashboard:

image

Go to Billing Preferences and check Receive Billing Alerts. After this, save the preferences:

image

After this, we can click on Manage Billing Alerts to configure the alert in CloudWatch. Click on Billing in the left menu under Alarms, then click the Create Alarm button.

image

Specify the threshold, for example 5 dollars. After this, click Next:

image

To configure the notification, we need to create a new SNS topic. Specify the name and add a list of emails that will receive the notifications. After filling in these fields, click on Create topic, then click Next.

image

Fill in the alarm name and description and click Next.

image

After this, we see a summary page; click to finalize. The alarm is created. At the top of the list, a message appears telling us that some recipients have not confirmed the subscription.

image

Go to the email inbox and confirm by clicking on the link.

image

Now we will receive notifications when the bill exceeds 5 dollars.
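The same alarm could be sketched with the CLI; billing metrics live in us-east-1 under the AWS/Billing namespace, and the SNS topic ARN below is an example value:

aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name billing-over-5-dollars \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 5 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts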

We can create categories to separate costs by service or tags, clicking Cost categories inside the Billing dashboard:

image

Select the options, for example the EC2 service, and click Create Cost Category:

image

We can see our category and explore it:

image

Budget

We can do the same by configuring a budget.

Cloudtrail

CloudTrail is different from CloudWatch. CloudTrail is focused on auditing actions, like who created a bucket or which commands were executed on the CLI.

IAM

The account owner is called the root user. We can create other users and group them by permissions. All the information about IAM can be found here or by selecting IAM from the Services page:

image

Creating groups

To access groups, click on User groups. After this, click on the Create group button.

image

Select a name (without spaces), choose one or more policies, and click on the Create group button.

image

If we open a group and go to the Permissions tab, we can see the group's policies. We can click on any policy to see its details:

image

On the policy page, we can see the details of the policy, including its JSON format:

image

Users

There are two access types for users:

  • Access by console
  • Access by API: access key and secret key.

On the left menu, we can go to the Users page, or go directly by clicking here. Click the Add users button.

On the next page, we can specify the user name (without spaces), the access type, and a password (if console access is selected), and click Next:

image

After this, select a group (you can also copy permissions from another user or create new permissions):

image

Set tags if you want and create the user:

image

On the summary page, we can see a direct link to our "corporation" console, to log in directly:

image

  • note: After creating a user with programmatic access, copy the access key and secret key on the summary page, because it is not possible to get them again after leaving this page. For console access, we can reset the password, but we cannot see it again after leaving the page.
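As a rough CLI sketch of the same flow (group, policy, and user names are examples):

# group with a managed policy attached
aws iam create-group --group-name developers
aws iam attach-group-policy --group-name developers \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# user added to the group
aws iam create-user --user-name sample-user
aws iam add-user-to-group --user-name sample-user --group-name developers
# programmatic access: the output shows the keys only once, so save them
aws iam create-access-key --user-name sample-user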

Password policy

We can apply a password policy. Go to Account settings inside the IAM page and click the Change password policy button:

image

The following options are available for the password policy:

image
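A possible CLI version of such a policy (the concrete values are examples, not recommendations):

aws iam update-account-password-policy \
    --minimum-password-length 12 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols \
    --max-password-age 90 \
    --password-reuse-prevention 5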

AWS CLI

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

Install

We can follow the instructions from the AWS CLI site:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

We can check the version with the command aws --version.

Configure

First of all, we need to create a user with API access. See the user creation section.

After this, we can run command:

aws configure

We need the following information:

  • Access key
  • Secret key
  • Default region - we can type us-east-1
  • Default output format - we can type text, json, or table

After this, we can see the configured profile by running the command aws configure list.
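If we work with more than one account or user, we can keep named profiles ("dev" below is an example name) and select one per command:

aws configure --profile dev
aws s3 ls --profile dev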

Usage

A good practice today is to use a Linux EC2 instance to run AWS CLI commands.

S3

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

AWS offers six different storage classes:

  1. S3 Standard: for general-purpose storage of frequently accessed data;
  2. S3 Intelligent-Tiering: for data with unknown or changing access patterns;
  3. S3 Standard-Infrequent Access (S3 Standard-IA): for long-lived but less frequently accessed data;
  4. S3 One Zone-Infrequent Access (S3 One Zone-IA): for long-lived, less frequently accessed data stored in a single availability zone;
  5. Amazon S3 Glacier (S3 Glacier): for long-term archive and digital preservation;
  6. Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive): for long-term archive and digital preservation.

image

The cost decreases from the top to the bottom.

To see the pricing, go to the S3 pricing page

You can also access to the S3 calculator page

note: Maximum object size: 5 terabytes.

Creating a bucket

The bucket console is global, not specific to a region. So when we enter the S3 console, at the top of the page we can see Global as the selected region.

We can create a bucket clicking on the button:

image

We need to choose a globally unique name, because this name will be used as a DNS name. Here we need to select the region of this specific bucket, and we can copy settings from existing buckets:

image

After creating the bucket, it appears in the list:

image

We can upload something by clicking on Upload inside the bucket details:

image

We can add files and click on Upload:

image

(Before uploading, we can select the storage class in the properties)

image

On the summary, we can see the ARN (Amazon Resource Name) and the object URL, which uses the unique name of the bucket.

image

We can change the storage class of an object, but every change can produce costs, because you are moving objects between storage classes.
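The same create-and-upload flow, as a CLI sketch (the bucket name is an example and must be globally unique; --storage-class selects the class at upload time):

aws s3 mb s3://my-unique-sample-bucket --region us-east-1
aws s3 cp ./photo.jpg s3://my-unique-sample-bucket/ --storage-class STANDARD_IA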

Permissions

By default, buckets do not allow public access. You can see it here:

image

We can change this and uncheck Block all public access. This by itself only allows ACL policies to be applied; it does not configure any access yet.

Go to the uploaded object, and on the Permissions tab we can see the ACL configuration. Click on Edit:

image

On the next page, we can check Everyone read, accept the disclaimer, and save the changes.

image

The object URL should work now!

  • note: To give write permissions, we must use the API or the AWS CLI, as in the sketch below.
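A minimal sketch of granting public read on one object via the CLI (example bucket and key; the bucket must already allow ACLs):

aws s3api put-object-acl \
    --bucket my-unique-sample-bucket \
    --key photo.jpg \
    --acl public-read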

Versioning

We can enable versioning for a specific bucket. Consider that all versions of the same file consume space in the bucket: you pay for all versions, because for S3 there are 10 files, not 1 file with 10 versions. If we delete the file, we still have the 10 versions. We can restore the file by deleting the last version, the one of type Delete marker.
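Enabling versioning from the CLI could look like this (example bucket name); list-object-versions shows every version, including delete markers:

aws s3api put-bucket-versioning \
    --bucket my-unique-sample-bucket \
    --versioning-configuration Status=Enabled
aws s3api list-object-versions --bucket my-unique-sample-bucket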

Lifecycle

We can configure policies for the objects, like expiring old versions (if versioning is active), moving objects between storage classes, and so on.

Inside a bucket, we can click on the Management tab.

image

And we can create a lifecycle rule by clicking on the button:

image

We can configure some actions under Lifecycle rule actions, to delete objects, move them between storage classes, etc.

image
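A minimal lifecycle rule as a CLI sketch, moving objects to Glacier after 90 days (bucket name and rule ID are examples):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-after-90-days",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-unique-sample-bucket \
    --lifecycle-configuration file://lifecycle.json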

Replication

We can create a new bucket in another region to contain replicated data from another bucket.

To enable replication, we create a new replication rule by accessing the Management tab inside the origin bucket.

image

If we have not enabled versioning for the origin bucket, this warning will be shown and we will need to enable it.

image

After this, we can give the rule a name and decide whether to replicate all objects in the bucket or only some elements, filtering by prefix or tag.

image

We choose the destination bucket and mark the option to create a new IAM role:

image

And we choose a storage class, perhaps to save money on the replication bucket:

image

Replication does not happen for existing objects, but we can force it using the CLI.

note: Deletes are not replicated, so if we delete objects in the origin bucket, the same objects are not deleted in the replication bucket.

Using the CLI to copy existing objects

First of all, we can list all buckets with the instruction aws s3 ls.

To copy the files, we can use the command aws s3 cp --recursive s3://<bucket origin> s3://<bucket destiny>:

$ aws s3 cp --recursive s3://iundarigun-sample-bucket s3://iundarigun-sample-bucket-backup

Storage Gateway

We can install software in our on-premises data center to replicate data to S3. For this, we have three types of gateway:

  • File gateway: Pictures, videos, pdfs, etc. NFS
  • Volume gateway: Operations, vmware, etc. ISCSI
  • Tape gateway: Backups. Glacier

If this software is not enough, you can use Snowball. More information here.

Adding a site

We can use S3 to host a static site. For this, we create a new bucket without permission restrictions.

After this, we can configure the bucket for website hosting. Go to Properties:

image

Scroll down the page and click on the Edit button in the Static website hosting section:

image

Mark it as Enabled and specify the index and error documents. Click on Save:

image

After this, we can get the URL in the section view:

image

After this, we can upload the files.
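The website configuration can also be sketched from the CLI (example bucket and document names):

aws s3 website s3://my-site-bucket/ \
    --index-document index.html --error-document error.html
aws s3 cp ./index.html s3://my-site-bucket/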

CloudFront and CDN

CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally.

A CDN replicates data to put objects in edge locations near the requester. It is a kind of cache. We use a CDN for the static elements of a website.

Creating CloudFront

First of all, create a bucket and add a public video. In my case, the video is at this URL: https://iundarigun-cdn-bucket.s3.amazonaws.com/sample.mp4

Then we need to create a CloudFront distribution. If you enter the CloudFront service for the first time, your screen will look like this:

image

If you have played with it before, you can click directly on the Create Distribution button in the Distributions section.

image

For the origin, we choose the bucket and enter a name:

image

We can choose the price class, considering the cost:

image

And we can enable the access logs, choosing a bucket and a log prefix:

image

After creation, this action may take a few minutes:

image

Now we can use the distribution domain name to access the video.
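For reference, a distribution for the bucket could also be created from the CLI with the shorthand below, which leaves every other setting at its default:

aws cloudfront create-distribution \
    --origin-domain-name iundarigun-cdn-bucket.s3.amazonaws.com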

S3 Transfer acceleration

It is the reverse of a CDN: it is used to upload files to S3, using the CloudFront edge architecture to upload locally and replicate to other regions.

To do this, open a bucket and go to Properties:

image

Scroll down the page and click on Edit in the Transfer acceleration section:

image

Click on Enable. We get a new endpoint for uploads:

image
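A CLI sketch of enabling acceleration and uploading through the accelerated endpoint (example bucket name):

aws s3api put-bucket-accelerate-configuration \
    --bucket my-unique-sample-bucket \
    --accelerate-configuration Status=Enabled
aws s3 cp ./big-file.zip s3://my-unique-sample-bucket/ \
    --endpoint-url https://s3-accelerate.amazonaws.com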

EC2

Elastic Compute Cloud (EC2) is a web service that provides secure, resizable compute capacity in the cloud.

There are several pricing plans:

  • On-demand: you pay for compute capacity by the hour or the second, depending on which instances you run.
  • Reserved instances or savings plans: used under contract, and can be up to 75% cheaper than the on-demand price.
  • Spot instances: used to get lower prices than on-demand.
  • Dedicated host: a physical server. Can be used to reuse server licenses.

Understanding kinds of instances

Prefix letters of EC2 instance types:

  • C: Compute optimized
  • G: Graphics. Better video card.
  • D: Dense storage
  • R: Memory optimized
  • M: General purpose
  • I: High-speed storage - databases
  • F: FPGA
  • T: Burstable - web servers
  • P: GPU - machine learning
  • X: Large memory

Volumes

EBS (Elastic Block Store) is a virtual disk. To measure the performance of a volume, Amazon uses IOPS (input/output operations per second). Some kinds of EBS:

  • GP2: general purpose SSD. Good price. 3,000 to 10,000 IOPS
  • Provisioned IOPS SSD (IO1): intensive use - databases. 10,000 to 20,000 IOPS
  • ST1 (HDD): data, logs, backups. No boot (no OS)
  • SC1 (Cold HDD): like ST1, but cheaper, for infrequent access
  • Magnetic (Standard): HDD for infrequent access, but bootable.

Creating an EC2

Step by step, we can create a new instance, for example a Windows Server. We need to go to the EC2 service page and click on the Launch instance button:

image

We can see that there are 7 steps:

image

Choose the AMI (Amazon Machine Image). We can filter by free tier only and select the desired machine:

image

Select the instance type:

image

We can configure some details about the instances, like the number of instances, the network, and so on:

image

We can configure storage (EBS) and add more volumes if necessary:

image

We can also add some tags that we can use later:

image

We can create a new security group and choose some properties. After this we can launch:

image

The review page:

image

Before launching, we need to create a key pair for access; download it and launch the instance: image

After this, the instance is launched: image

On the list of instances, we can select our instance and connect to it:

image

To connect using RDP, we need to generate a password, and for this we need the key file we downloaded previously:

image

After playing, we can delete (terminate) our EC2 instance:

image

And confirm the deletion:

image
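The whole console walkthrough compressed into a CLI sketch; the key pair, security group, and instance IDs are example values (the AMI is the Amazon Linux 2 image used later in these notes):

aws ec2 run-instances \
    --image-id ami-0c2b8ca1dad447f8a \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=sample}]'
# terminate when done playing
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0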

Creating web server linux instance

Create a new instance, selecting Linux and configuring a new security group:

image

After launching, we can connect to it using SSH.

To connect, change the access mode of the .pem file:

$ sudo chmod 400 linuxwebserver.pem

And connect to using ssh:

ssh -i linuxwebserver.pem ec2-user@<ec2-url>

To enable the web server, we can install httpd and add an index.html:

sudo su
yum install -y httpd
nano /var/www/html/index.html
service httpd start

Now, we can access it using our browser.

Playing with volumes

When we create a new EC2 instance, we can do a couple of things with the volumes. For example, we can create a snapshot. Basically, a snapshot is a copy of the volume at the moment of its creation.

We can go to Volumes, select the volume, and click on Create snapshot:

image

We can give a name and finish it:

image

If we go to Snapshots, we can create a volume or an image from this snapshot:

image

If we create a volume, we can change the availability zone, size, or volume type:

image

Note: to move between regions, not only availability zones, we need to copy the snapshot using the Copy option in the menu shown in the previous image.

If we create an image, we will be able to launch new EC2 instances from it:

image

After creation, we can see it in the AMIs section:

image

If we want to launch a new EC2 instance, we can find this new image in the selection list:

image
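The snapshot, cross-region copy, and image steps could be sketched via the CLI (all IDs are example values):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "backup before maintenance"
# copy to another region
aws ec2 copy-snapshot --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 --region us-east-2
# create an AMI directly from a running instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name my-web-server-ami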

Adding monitoring

We can add monitoring to our EC2 instances. We can enable this at creation time or afterwards on the Monitoring tab.

We can go to CloudWatch, and in the All alarms menu, click on the Create alarm button. First of all, we need to select the metric:

image

For EC2, select it:

image

Select Per-instance Metrics:

image

And select your instance and metric. In this case, CPU utilization:

image

Select the threshold:

image

And select the SNS topic:

image

If the instance stays above 80% CPU usage for a minute, we will receive an email.
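A sketch of the same alarm through the CLI (instance ID and topic ARN are example values; a 60-second period assumes detailed monitoring is enabled):

aws cloudwatch put-metric-alarm \
    --alarm-name ec2-high-cpu \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts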

Using roles to configure the AWS CLI inside an EC2 instance

A good practice is to use an EC2 instance to run CLI commands. But instead of configuring credentials with an access key and secret key, it is better to create a role.

For this, we can go to the IAM page and click on the Roles menu, then click the Create role button:

image

Select EC2:

image

Select the desired accesses:

image

And give a name and save:

image

On the instance, we can attach an IAM role without rebooting:

image

And select the created role:

image

Now, connecting to the instance, we can run AWS CLI commands.
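The same role setup as a CLI sketch: EC2 assumes the role through an instance profile (names, the attached policy, and the instance ID are examples):

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name ec2-cli-role \
    --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name ec2-cli-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-instance-profile --instance-profile-name ec2-cli-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name ec2-cli-profile --role-name ec2-cli-role
# attach to a running instance without reboot
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=ec2-cli-profile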

Bootstrap example

We can configure an EC2 instance to execute some tasks when it is created, like updating the OS, creating resources, etc.

We will add some commands to test it:

  1. Update the OS
  2. Install httpd and configure an index.html
  3. Configure the service to start on reboot
  4. Create a bucket
  5. Copy index.html into the bucket

To achieve this, we create a new EC2 instance. The different parts are in Step 3: Instance details. We need to configure the IAM role:

image

And we configure the bootstrap instructions (user data):

image

The instructions in text:

#! /bin/bash
yum update -y
yum install httpd -y
service httpd start
chkconfig httpd on
cd /var/www/html
echo "<html><h1>site online</h1></html>" > index.html
aws s3 mb s3://iundarigun-bootstrap-bucket
aws s3 cp index.html s3://iundarigun-bootstrap-bucket

After this, we have a web server!
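Launching with this bootstrap from the CLI could look like the sketch below, assuming the script was saved as bootstrap.sh and the instance profile from the previous section exists:

aws ec2 run-instances \
    --image-id ami-0c2b8ca1dad447f8a \
    --instance-type t2.micro \
    --iam-instance-profile Name=ec2-cli-profile \
    --user-data file://bootstrap.sh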

EFS - Elastic File System

It is the solution for sharing volumes between EC2 instances. This is not possible with standard EBS volumes.

Load balancer

There are three kinds of load balancer in AWS:

  • Application Load Balancer: HTTP and HTTPS
  • Network Load Balancer: network layer (TCP)
  • Classic Load Balancer (or Elastic LB): a legacy load balancer -> HTTP/HTTPS/TCP

First of all, we need to create a target group:

image

Choose a name for this target group and Instances as the target type:

image

And specify the health checks:

image

After this, we add instances to the target group and click on Create:

image

After this, we create a load balancer:

image

Select the HTTP/HTTPS type:

image

Type a name and select at least two availability zones:

image

This warning appears because we didn't select HTTPS:

image

Select a security group:

image

And select the target group created previously:

image

After this, we can finish and click on the Create button:

image

The load balancer can take a couple of minutes to finish, and after this we can use the public URL:

image
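The same setup as a CLI sketch; VPC, subnet, security group, and instance IDs are example values, and the ARNs returned by the first commands are shown as <...> placeholders:

aws elbv2 create-target-group --name web-targets \
    --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=i-0123456789abcdef0 Id=i-0fedcba9876543210
aws elbv2 create-load-balancer --name web-alb \
    --subnets subnet-aaaa1111 subnet-bbbb2222 \
    --security-groups sg-0123456789abcdef0
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>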

Metadata

Metadata is information that an EC2 instance can give you about itself: public IP, security groups, hostname, and so on. To access this information without logging into the AWS console, we can use curl and a standard URL: curl http://169.254.169.254/latest/meta-data. This returns a list of the available information:

image

If we want a specific piece of information, we can invoke the same URL with the desired key appended at the end:

image
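For example, run from inside the instance (two keys from the list above):

curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/security-groups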

Autoscaling

We can create policies to autoscale EC2 instances, enabling more or less capacity depending on metrics.

First of all, we need to create a launch configuration. Go to the EC2 page and click on the Create launch configuration button:

image

Give a name and select the AMI and instance type:

image

Configure the startup script of the machines:

image

And select the security group:

image

After this we can create the launch configuration:

image

Now, we need to create an Auto scaling group:

image

Give a name and click on Switch to launch configuration:

image

Select the configuration created before and click Next:

image

Select subnets to guarantee the servers will be created in different availability zones:

image

Choose the health check grace period:

image

Select the autoscaling policy and finish the configuration:

image

We can see the instances created by the scaling group:

image

When we max out the server CPUs, the scaling policy adds more machines. And when the CPU average drops below the threshold, the scaling policy decreases the number of instances.
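The same flow as a CLI sketch: launch configuration, auto scaling group, and a target-tracking policy on average CPU (names, AMI, and subnets are example values):

aws autoscaling create-launch-configuration \
    --launch-configuration-name web-launch-config \
    --image-id ami-0c2b8ca1dad447f8a \
    --instance-type t2.micro \
    --user-data file://bootstrap.sh
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-launch-config \
    --min-size 2 --max-size 6 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target-50 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'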

Database

  • Relational databases:

    • Multi availability zone: Disaster recovery
    • Read replica: copy database to read
    • Aurora is an Amazon solution, faster and cheaper than other solutions. It is compatible with MySQL and Postgres.
  • No SQL (DynamoDB):

    • document-oriented or key-value
    • SSD and 3 datacenters
    • configurable for eventually consistent reads (accepting a delay of more than a second to see new data) or strongly consistent reads (less than a second)
  • ElastiCache:

    • Faster, scalability. It is an in-memory cache system.
    • Two types: Memcached (objects) and Redis (Key-values and multi AZ)
  • Redshift:

    • A data warehouse solution for big data
    • Single node or multi-node (up to 128 compute nodes)
    • It is not multi-AZ

Creating Relational Database (RDS)

To create a relational database, we access the RDS page and click on the Create database button:

image

We choose MySQL (we don't choose Aurora because we will create an Aurora database later):

image

We choose the Free tier template, then a name for the database, a username, and a password:

image

We keep the free tier machine selected:

image

We choose No public access in the connectivity options:

image

And we choose a name for the database and create it:

image

If we go to the database details, we can get the database URL. We will use it to connect from an EC2 instance:

image

The important point is the security group. If we leave the default security group on the database, we need to add this group to the EC2 instance.
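A CLI sketch of a similar instance (identifier, credentials, and size are example values):

aws rds create-db-instance \
    --db-instance-identifier sample-mysql \
    --engine mysql \
    --db-instance-class db.t2.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password '<a-strong-password>' \
    --no-publicly-accessible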

Backup

  • Automated: retention from 1 to 35 days, stored in S3, with point-in-time recovery down to seconds. It is free.
  • DB snapshot: manual.

To enable Multi-AZ, we need to enable automated backups. After this, we can click on the Modify button:

image

Select Create a standby instance for a multi-AZ deployment.

image

We get a warning, and we can select a maintenance window to do it or apply it immediately:

image

To create a read replica, we can select the database and click Actions, then the Create read replica menu option:

image

Choose a name and a region and finish:

image

On the database details we can see the two databases:

image
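Both operations can also be sketched from the CLI (identifiers are examples):

# turn on Multi-AZ
aws rds modify-db-instance --db-instance-identifier sample-mysql \
    --multi-az --apply-immediately
# create a read replica
aws rds create-db-instance-read-replica \
    --db-instance-identifier sample-mysql-replica \
    --source-db-instance-identifier sample-mysql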

Aurora

Creating an Aurora database is similar to RDS MySQL. Select Create database, select Aurora and, for example, Postgres compatibility. Select Provisioned too:

image

Specify the database name, username and password, and the machine size:

image

Leave the default security group:

image

Select a name for the database, select the failover priority, and click on Create database:

image

After creation, we can add a read replica from the Actions menu:

image

Select a name and the machine size:

image

Select the priority and click on Create:

image

The result:

image

Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses a way to route end users to Internet applications by translating names into numeric IP addresses.

We can apply rules based on geolocation (forbid or allow based on location), failover, and so on.

  • Simple routing policy: we configure Route 53 with a group of IPs, for example a list of EC2 IPs. Any time someone resolves the DNS name, it translates to a random one of the configured IPs
  • Weighted routing policy: we can assign a percentage to each IP. We create one policy per IP with its percentage. We can also add a health check to discard a route if the endpoint is down (see the sketch after this list)
  • Latency routing policy: we can configure Route 53 to calculate the latency between the requesting client and the server IPs and route accordingly
  • Failover routing policy: we can configure a primary server and a secondary server, using the secondary as a failover
  • Geolocation routing policy: this policy allows serving different content based on geolocation. We need to configure an entry for every location we want to consider, plus an entry for the default location
  • Geoproximity routing policy: we can create a complex decision flow mixing different metrics: geolocation, failover, latency, etc
  • Multivalue answer routing policy: the same as the simple routing policy, with health checks added. For this reason, we need to create an entry for each IP.
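A weighted record set could be sketched with the CLI as follows, splitting traffic 70/30 between two endpoints; the hosted zone ID, domain, and IPs are example values:

cat > weighted.json <<'EOF'
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Weight": 70,
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Weight": 30,
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.20"}]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789ABCDEFGHIJ \
    --change-batch file://weighted.json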

VPC

Amazon Virtual Private Cloud is a service that lets you launch AWS resources in a logically isolated virtual network that you define.

Private IPv4 ranges

  • class A: 10.0.0.0 - 10.255.255.255
  • class B: 172.16.0.0 - 172.31.255.255
  • class C: 192.168.0.0 - 192.168.255.255

All other IPv4 addresses are public.

Creating a VPC

We are going to create a VPC, using a scenario like this:

image

For this, first of all we need to create a new VPC.

image

Select a name and set an IP range (CIDR block):

image

After creation, we can see that a route table and a network ACL were created automatically.

image

After this, we create a subnet:

image

We need to specify the VPC, a name, and an IP range:

image

If we want to connect to the internet, we need to create an internet gateway:

image

Select a name and create:

image

We can see that it is not attached, so we need to attach it to the VPC:

image

Select the VPC:

image

For the route table that the VPC created automatically, we need to allow communication with the whole internet, so we need to edit the routes:

image

And add 0.0.0.0/0 as a destination, with the internet gateway created previously as the target:

image

After this, we need to associate the subnet with this route table:

image

Select the desired subnets:

image

And finally, we need to allow the subnet to assign public IPs automatically when something is created:

image

image

For the second subnet, we want a NAT for internet access, to protect the resources inside the subnet. For this, we create a NAT gateway:

image

Specify the subnet with internet access and select Allocate Elastic IP:

image

After this, we need to create a new route table:

image

Associate it with the VPC:

image

And edit the routes to send all internet requests to the NAT gateway:

image

And edit the subnet associations on the route table to add the private subnet:

image
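The public-subnet half of this scenario, as a CLI sketch (the CIDRs and the resource IDs returned by each command are example values):

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0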

ACL

An ACL is a list of rules to allow or deny traffic (by IP). It works with priorities: the first rule that matches stops the search. The rule number is used for ordering.

Note: take care with the ephemeral ports.

VPC Endpoints

An endpoint is a connection between a VPC and an AWS service. This is a good approach for secure connections without using the public internet.

SQS and SNS

Simple Queue Service is a queue solution from AWS. To create a queue, go to SQS and click on Create queue:

image

We can choose Standard or FIFO. FIFO tries to guarantee ordering within a time window. We give it a name. The visibility timeout is the time a message disappears from the queue while waiting to be consumed:

image

We can configure a dead-letter queue for when the maximum number of receives is exceeded:

image

SNS

It is a publish/subscribe (topic) solution. We can create a topic from the SNS page:

image

We can create a FIFO or Standard topic. If we choose FIFO, only FIFO queues can subscribe:

image

To subscribe an SQS queue to SNS, we should do it from SQS, because if we do it from SNS, we will need to add permissions for SNS to publish to the queue. Doing it from SQS, the permissions are granted automatically:

image

Choose the topic:

image

We can view the subscription details:

image

If we want to receive the original message as-is, we need to enable the Raw message delivery option. If we don't, the message will be wrapped with SNS metadata.

image
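A CLI sketch of the queue, topic, and subscription (example names and ARNs). Note that, unlike the console flow above, subscribing via the CLI does not automatically add the queue policy that lets SNS publish; that has to be configured separately:

aws sqs create-queue --queue-name sample-queue
aws sns create-topic --name sample-topic
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:sample-topic \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-1:123456789012:sample-queue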

Kinesis

It is a streaming data solution. There are three different types:

  • Streams: retains data from 24 hours up to 7 days.
  • Firehose: doesn't retain data.
  • Analytics: statistics over the other two Kinesis types.

Today there is also another service called Amazon MSK (Managed Streaming for Apache Kafka).

API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.

To create a new gateway, we access API Gateway and click Create API:

image

Choose REST API and click Build:

image

We can import an OpenAPI JSON definition:

image

To test, we can click on Test within the method:

image

We can create a new method manually instead of importing a Swagger file:

image

We configure the service, the region, and the other settings:

image

We can add URL parameters:

image

And we can configure the response in Integration Response:

image

To deploy, we can click on Actions, then the Deploy API menu item:

image

Select New Stage and give it a name:

image

Security

To add security to API Gateway, we can use an API key:

image

Select a name:

image

The API Key is generated:

image

To use it, we need to create a usage plan to link it to:

image

Select a name, the throttling and quota settings, and click Next:

image

Add the API and the stage:

image

And finally, select the API key created before:

image

To add this security to an endpoint, we go to the request:

image

And set API Key Required to true:

image

Now, after deploying, we will need to send the x-api-key header with the key value:

image
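An example call after deployment (the URL parts and the key are placeholders):

curl -H "x-api-key: <your-api-key>" \
    https://<api-id>.execute-api.us-east-1.amazonaws.com/<stage>/<resource>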

Lambda Serverless

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.

To create a Lambda function, we need to go to Lambda, the Functions menu, and click on Create function:

image

We have some options: we can give it a name, select a language runtime, and select the permission options:

image

Then we add a trigger by clicking on Add trigger:

image

Select API Gateway, select HTTP API, and security Open:

image

Now we can add a Python example and deploy it:

image

After this, we can use the API Gateway URL to access it.
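The function could also be created from the CLI, as a sketch; it assumes the code is zipped with a handler called lambda_function.lambda_handler and an existing execution role (the ARN is an example):

aws lambda create-function \
    --function-name sample-function \
    --runtime python3.9 \
    --role arn:aws:iam::123456789012:role/lambda-basic-execution \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip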

CloudFormation

CloudFormation gives us an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code. It uses YAML or JSON.

A YAML sample:

---
Resources:
  WebServers: ## Group - A free name
    Type: AWS::EC2::Instance 
    Properties:
      AvailabilityZone: us-east-1a
      ImageId: ami-0c2b8ca1dad447f8a 
      InstanceType: t2.micro

  MyBuckets:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: "iunda-cloudformation-sample-bucket"

We need to go to the CloudFormation page and create a stack:

image

Upload a file:

image

Choose a name:

image

After passing some steps and confirmations, the stack is ready:

image

To remove the stack (and the created resources), click on Delete:

image
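The same stack lifecycle from the CLI (stack and file names are examples):

aws cloudformation create-stack \
    --stack-name sample-stack --template-body file://stack.yaml
aws cloudformation delete-stack --stack-name sample-stack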

Other services

A little note about other services:

  • SWF: Simple Workflow Service is a solution to create workflows that mix automation and human interaction.
  • Elastic Transcoder: a solution to transcode files between formats, for example transforming a full HD video into MP3 audio or into a 640px MP4.
  • Cognito: a web identity federation service. It is an authentication and authorization solution.

Terraform

Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. It codifies cloud APIs into declarative configuration files.

Terraform is multicloud, but the same declaration for AWS does not work for other cloud providers.

Installing CLI

Install Terraform from terraform.io. Terraform uses our AWS CLI configuration.

By convention, we put all Terraform files in one folder. We start like this:

$ terraform init

To preview the changes described by the files, we can use the following instruction:

$ terraform plan

To apply changes, we use this:

$ terraform apply

Terraform files

There are some examples in the terraform folder. By convention, we declare a main.tf with the main declarations, use variables.tf to define variables, and output.tf to display some information.

Declaring provider

provider "aws" {
    region                  = "us-east-1"
}

provider "aws" {
    alias                   = "us-east-2" # Alias for different regions
    region                  = "us-east-2"
}

Declaring variables

# Declaring a map
variable "amis" {
    type = map

    default = {
        "us-east-1" = "ami-0c2b8ca1dad447f8a"
        "us-east-2" = "ami-0443305dabd4be2bc"
    }
}
# Declaring a list
variable "cdirs_ip6" {
    type = list
    default = ["::/0"]
}

# Declaring a string
variable "keyname" {
    type = string
    default = "terraform-aws"
}

To use variables: var.keyname or var.amis["us-east-1"]

Declaring an EC2

# in this Terraform environment, the aws_instance is called "dev"
resource "aws_instance" "dev" { 
    count = 3 # number of instances
    ami = var.amis["us-east-1"]
    instance_type = "t2.micro"
    key_name = var.keyname
    tags = {
        Name = "dev-${count.index}" ## Name of instance
    }
    vpc_security_group_ids = ["${aws_security_group.ssh-access.id}"] # Reference to security group id
}

We can connect to the EC2 instance with ssh -i ~/.ssh/terraform-aws ec2-user@<public IP or DNS>.

Destroy environment

We can destroy a specific element or remove everything:

$ terraform destroy -target aws_instance.dev4
$ terraform destroy

Ssh keys

To use some AWS services, we need to create or configure an SSH key. This key can be imported, so we can generate one and use it for all our services, AWS or not, and across regions.

To generate a key pair:

$ ssh-keygen -f terraform-aws -t rsa

This instruction generates two files, the private key and the public key. We can move the private key to our ~/.ssh folder. After this, we can import the public key into AWS. We go to the Key pairs menu in EC2 and click on the Import key pair action:

image

Give it a name and browse for the .pub file.

image
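The same import from the CLI (the key name matches the files generated above):

aws ec2 import-key-pair --key-name terraform-aws \
    --public-key-material fileb://terraform-aws.pub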

Terraform cloud

Running Terraform locally is not a good idea for a team. We can use Terraform Cloud to manage the state as a team. We can create an account and a workspace.

https://www.terraform.io/cloud

