
Introduction

samhstn.com

Requirements

$ docker --version
Docker version 19.03.12

$ python3 --version
Python 3.8.5

$ aws --version
aws-cli/2.0.40 Python/3.8.5

$ node --version
v14.8.0

$ elixir --version
Erlang/OTP 23 [erts-11.0.3] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe] [dtrace]

Elixir 1.10.4 (compiled with Erlang/OTP 23)

$ mix --version
Mix 1.10.4

$ mix phx.new --version
Phoenix v1.5.4

Local Setup

# clone repository
git clone [email protected]:samhstn/samhstn.git && cd samhstn

# install dependencies and run our tests and checks
MIX_ENV=test mix compile --force
MIX_ENV=test mix dialyzer
MIX_ENV=test mix sobelow --router lib/samhstn_web/router.ex --exit --skip
MIX_ENV=test mix format --check-formatted
mix test

# start the dev server
mix do compile, phx.server

AWS Setup

Check out the infra documentation for guidance.

Pre-commit hook

To configure the pre-commit hook to run on every commit, run:

./pre-commit-hook
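For reference, a minimal sketch of what a pre-commit hook installer could look like (the script contents and paths here are assumptions for illustration, not the repository's actual `./pre-commit-hook` script):

```shell
# Hypothetical installer sketch: write a hook script and symlink it into
# .git/hooks so git runs it before every commit. /tmp/demo-repo stands in
# for a real clone of the repository.
mkdir -p /tmp/demo-repo/.git/hooks
cat > /tmp/demo-repo/pre-commit-hook <<'EOF'
#!/bin/sh
# run the same checks as CI before allowing a commit
MIX_ENV=test mix format --check-formatted && mix test
EOF
chmod +x /tmp/demo-repo/pre-commit-hook
ln -sf ../../pre-commit-hook /tmp/demo-repo/.git/hooks/pre-commit
```

Symlinking (rather than copying) means edits to the tracked script take effect immediately for everyone who has installed the hook.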

Docker

Our app's Docker build has two stages:

  • Building our mix release.
  • Running the binary files from our mix release.
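A hedged sketch of what those two stages could look like as a multi-stage Dockerfile (the image tags, paths and release name are assumptions, not the project's actual Dockerfile):

```dockerfile
# Stage 1: build the mix release (image tag is an assumption).
FROM hexpm/elixir:1.10.4-erlang-23.0.3-alpine-3.12.0 AS build
WORKDIR /app
COPY mix.exs mix.lock ./
RUN mix local.hex --force && mix local.rebar --force && \
    MIX_ENV=prod mix deps.get --only prod
COPY . .
RUN MIX_ENV=prod mix release

# Stage 2: run only the release binaries on a slim base image.
FROM alpine:3.12
RUN apk add --no-cache ncurses-libs openssl
COPY --from=build /app/_build/prod/rel/samhstn /app
CMD ["/app/bin/samhstn", "start"]
```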

This can be achieved by setting up a .env file with the following contents:

export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export SECRET_KEY_BASE=g+Li...Fi+trohKSao4VOv5BWkEXAMPLE
export SAMHSTN_ASSETS_BUCKET=

We can generate these access keys for our docker IAM user in:

IAM Console > Security Credentials > Create access key.
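To fill in SECRET_KEY_BASE, `mix phx.gen.secret` is the usual Phoenix command; as a stand-in sketch that doesn't need a Phoenix checkout, openssl produces a comparable 64-character secret (the /tmp path is only for illustration):

```shell
# Generate a 64-character base64 secret, similar in shape to the output of
# `mix phx.gen.secret`, and write it into a sketch .env file.
SECRET_KEY_BASE=$(openssl rand -base64 48)
printf 'export SECRET_KEY_BASE=%s\n' "$SECRET_KEY_BASE" > /tmp/env-sketch
```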

We can now replicate our application running in production with:

docker-compose up --build

Note: These IAM user credentials should be deleted after running Docker locally.


Issues

Make use of SSM

Currently we pass parameters into our cloudformation template.

This is not a nice way to deal with our keys, and not how parameters should be used with cloudformation templates.

We should instead use ssm and kms to manage our keys following this article.

Reference our codebuild badge dynamically

We currently reference a hardcoded codebuild badge in our README.md.

This will change every time we re-deploy our ./infra/codebuild.yaml template.

We should instead reference a url, something like:

[![Build Status](https://samhstn.com/repo/samhstn/badge)](https://console.aws.amazon.com/codesuite/codebuild/projects/CodeBuild)

We will create an S3 bucket which will redirect to our CodeBuild badge (following this article).

This bucket redirect url will update with every deployment and always point to the correct place.

Configure iam user group which can only access samhstn resources

Currently the aws console is being cluttered by different resources from different projects.

It would be really nice if we created an Admin type user who can view, create, update or delete resources which are related to the samhstn project - and only the samhstn project.

That way we can have a cleaner view of exactly what we are working on at any time and not accidentally change anything not related to what we are working on.

These permissions should be set by referencing aws tagged resources: these users should only see resources tagged with samhstn, and should always include the samhstn tag when interacting with any resource.

This should affect both the Management console access user as well as the programmatic access user.

Remove sass

This is overkill.

There should be no css pre-processing.

Maintaining this is going to be hard enough without adding another package - adding dependencies should be avoided at all costs.

Add Docker base image

We currently work off the aws provided nodejs image.

We are going to be installing some of the same dependencies on every CodeBuild run (Cfn linter for instance).

We can save this installation time and reduce CodeBuild logic if we instead reference a samhstn image which we have control of.

We should deploy this image to ECR and reference it in each of our CodeBuild builds.

Add Dockerfile digest to ssm parameter

Our CodeBuild should update our Dockerfile image on ecr if there are changes to be made.

The nicest way I can think of doing this is:

  • Create an ssm parameter called /Samhstn/DockerfileDigest
  • On every deploy, check whether the md5sum of infra/Dockerfile matches the stored digest.
  • If it matches, don't update our ecr image; otherwise, do.

The command to check if there is a match is:

echo "$DOCKERFILE_DIGEST  infra/Dockerfile" | md5sum -c -
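The comparison can be exercised locally; this sketch uses a throwaway file under /tmp (the real check would compare the /Samhstn/DockerfileDigest ssm value against infra/Dockerfile):

```shell
# md5sum -c reads "<digest>  <file>" lines (two spaces) from stdin and
# verifies each file. A throwaway Dockerfile stands in for infra/Dockerfile.
printf 'FROM amazonlinux:2\n' > /tmp/Dockerfile
DOCKERFILE_DIGEST=$(md5sum /tmp/Dockerfile | awk '{print $1}')
if echo "$DOCKERFILE_DIGEST  /tmp/Dockerfile" | md5sum -c - >/dev/null 2>&1; then
  RESULT=match    # skip pushing a new ecr image
else
  RESULT=changed  # rebuild and push the image
fi
```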

Tidy folder structure and rename files

We should look to separate out some logic (fix duplicate data in the pipeline files), update the naming (especially buckets) and place buildspecs in a buildspecs directory.

The directory structure of ./infra/ should look something like this:

|--- infra/
       |--- README.md
       |--- buildspecs
             (|--- update_stacks.yml - coming later)
              |--- test.yml
              |--- deploy.yml
       |--- cloudfront.yml
       |--- route53.yml
       |--- s3.yml
       |--- codebuild.yml
       |--- master_pipeline.yml

Directory structure

Longer term plan for how the directory structure should look.

|-- .eslint-browser.js
|-- .eslint-node.js
|-- .eslint-lambda.js
|-- .npmrc
|-- .gitignore
|-- README.md
|-- package.json
|-- elm.json
|-- infra
    |-- README.md
    |-- Dockerfile
    |-- s3.yaml
    |-- cloudfront.yaml
    |-- keys.yaml
    |-- master_pipeline.yaml
    |-- route53.yaml
    |-- codebuild.yaml
    |-- ecr_repo.yaml
    |-- buildspecs
        |-- test.yaml
        |-- update_stacks.yaml
        |-- update_buckets.yaml
        |-- update_baseimage.yaml
        |-- deploy.yaml
    |-- lambda
        |-- email.js
|-- dev
    |-- README.md
    |-- index.js
|-- src
    |-- static
        |-- favicon.ico
        |-- logo.ico
    |-- js
        |-- index.js
    |-- elm
        |-- README.md
        |-- Main.elm
    |-- index.html
|-- dist

Update documentation

It would be good to split the current README.md like so:

|- README.md
|- infra/
    |- README.md

so as not to clutter the initial view and to separate out concerns a little.

Fix pipeline naming and badge

I ran the setup from scratch and found that:

  • the badge now references our codebuild via a different url.
  • there was a typo in the Configure our Codepipeline pipeline section in referencing our infra/master_pipeline.yaml file.

SNIMissingWarning

Our codebuild currently shows the following warning:

[Container] 2019/03/27 20:05:17 Running command aws s3 sync static s3://samhstn.com
/usr/local/lib/python2.7/dist-packages/urllib3/util/ssl_.py:369: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning

This may be resolved when we reference our custom docker image (#55) with an upgraded python.

Simpler and better reference to CodeBuild badge

We currently have this file: https://s3.amazonaws.com/codefactory-us-east-1-prod-default-build-badges/unknown.svg checked into our source code as static/repo/samhstn/badge.svg and uploaded to s3. It isn't clear where this file came from, and it clutters our static directory.

We should instead not include this svg in our source code and download it using curl if required.

The buildspec should instead run something like the following:

aws s3 sync static s3://samhstn.com --delete --exclude badge
CODEBUILD_BADGE_URL=$(\
  aws codebuild batch-get-projects \
    --names CodeBuild \
    --query "projects[0].badge.badgeRequestUrl" \
    --output text \
)
if ! aws s3 ls s3://samhstn.com/repo/samhstn/badge.svg || \
  [ "$(\
    aws s3api head-object \
      --bucket samhstn.com \
      --key repo/samhstn/badge.svg \
      --query "Metadata.\"website-redirect-location\"" \
      --output text\
    )" != "$CODEBUILD_BADGE_URL" ]; then

  curl https://s3.amazonaws.com/codefactory-us-east-1-prod-default-build-badges/unknown.svg > ./badge.svg
  aws s3 cp ./badge.svg s3://samhstn.com/repo/samhstn/badge.svg --metadata "Website-Redirect-Location=$CODEBUILD_BADGE_URL"
fi

Fix codebuild badge

Currently, clicking the CodeBuild badge at the top of the README.md redirects to https://console.aws.amazon.com/codesuite/codebuild/projects/CodeBuild.

This project no longer exists due to the name change here: https://github.com/samhstn/samhstn/pull/60/files#diff-e41b13d89f238bc2b9c937098b983ac5R87

We should redirect to: https://console.aws.amazon.com/codesuite/codebuild/projects which doesn't depend on naming.

Cleanup S3 buckets

After deleting a branch, we should also delete the relevant codebuild buckets.

Configure email

It would be good to set up the @samhstn.com email addresses.

We don't want to be paying $4 per user per month for WorkMail.

Instead, it would be best to opt for a cheaper setup using just lambda and s3.

  • Configure lambda function which logs email in cloudwatch
  • Configure lambda function which writes email to s3.
  • Configure lambda function to also send an email notification when an email arrives.

For now, all emails can be read from the command line or through the s3 console. We can worry about a more maintainable way to manage emails later.

Dynamic environments

The changes documented here: https://github.com/samhstn/samhstn/pull/69/files say:

The dev environment should be almost identical to the master environment

Instead of this dev environment, we should strive for an environment which is exactly the same as master, dynamically created for every pull request.

This will require a major rewrite, where each deployment will be the same (including master) and will be architected in the following way:

We will configure the following templates:

  • root-iam (allows cross account access to the root account)
  • project-iam (allows admin access for the project account)
  • acm (configured for all *.samhstn.com and samhstn.com domains; this will be followed by a manual DNS validation of the acm certificate)
  • keys (our personal access tokens and secrets)
  • waf (our firewall, referenced by every cloudfront distribution)
  • dci (contains the following templates):
      • webhook (listens to code changes from Github)
      • codepipeline (facilitates the running of our codebuild)
      • codebuild (creates/updates our branch codepipeline)
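As a rough sketch, the webhook piece could wire Github pushes to a pipeline like this (the resource, pipeline and secret names are assumptions, not the project's actual template):

```yaml
# Hypothetical fragment of a dci webhook template; names are assumptions.
Parameters:
  GithubBranch:
    Type: String
    Default: master

Resources:
  PushWebhook:
    Type: AWS::CodePipeline::Webhook
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        # GithubSecret resolved from secrets manager at deploy time
        SecretToken: '{{resolve:secretsmanager:GithubSecret}}'
      Filters:
        - JsonPath: $.ref
          MatchEquals: !Sub refs/heads/${GithubBranch}
      TargetPipeline: !Sub samhstn-${GithubBranch}
      TargetAction: Source
      TargetPipelineVersion: 1
      RegisterWithThirdParty: true
```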

Each push event will trigger:

  • an update of the above infrastructure if the branch is master
  • create/update of the codebuild template
  • create/update of the codepipeline template
  • a run of our codepipeline, which will:
      • create/update our s3 template
      • create/update our cloudfront template
      • create/update our route53 template as root user

This will configure the url: https://<branch>.samhstn.com

Each pr delete event will trigger:

  • deletion of the route53 template as root user
  • deletion of the cloudfront template
  • emptying of the s3 bucket and deletion of the s3 template
  • deletion of the codepipeline template
  • deletion of the codebuild template

Infra folder structure

|- README.md
|- test.Dockerfile
|- deploy.Dockerfile
|- root
   |- README.md
   |- email.yaml
   |- project-iam.yaml # admin user
   |- root-iam.yaml # cross account access for route53 configuration
   |- route53.yaml # record set to be used for each deployment
|- buildspecs
   |- infra.yaml # only run if branch is `master` - will update our infrastructure
   |- dci.yaml # creates/updates and triggers our push/codepipeline
   |- test.yaml
   |- deploy.yaml
|- base.yaml
|- base
   |- public-bucket-policy.yaml
   |- private-bucket-policy.yaml
   |- acm.yaml
   |- keys.yaml
   |- waf.yaml
   |- dci.yaml
   |- dci
      |- codebuild.yaml
      |- codepipeline.yaml
      |- webhook.yaml
|- push.yaml
|- push
   |- README.md
   |- codebuild.yaml
   |- codepipeline.yaml
   |- s3.yaml
   |- cloudfront.yaml

Add build badge

CI is now running on every branch build (as of #25); we should look to add a badge for this to the README.md.

Add acm to cloudformation

We currently do this through the console, but there is no real reason why this shouldn't instead be a cloudformation template.

Steps to delete templates

We should add steps to be able to delete all templates.

For example, running aws cloudformation delete-stack --stack-name master-pipeline currently fails.

There should be a short series of bash commands which achieves this cleanly.

Lint the yaml files

There are some minor inconsistencies emerging in the infra/*.yml files; adding a yml linter would nip this in the bud.

Make better use of ssm and template parameters

We are currently using ssm excessively and not making use of cloudformation parameters.

We should use ssm for values which won't change and cloudformation parameters for dynamic values.

We could also look to hardcode certain values (such as GithubOwner and GithubRepo).

We should consolidate every key that we are using and decide how it should be managed.

The following is one better way our parameters could be managed:

Hardcoded in template

  • GithubOwner - will be samhstn and unlikely to change.
  • GithubRepo - will be samhstn and unlikely to change.

Cloudformation parameter

  • GithubBranch - should be overridable, for example when deploying a dev branch.
  • Namespace - should be overridable, for example when deploying a dev stack.
  • DomainName - should be overridable when deploying to url dev.samhstn.com.

Ssm parameter

  • AcmCertArn - will rarely be updated, but should be configured once.
  • DockerfileDigest - see #50

Secrets manager parameter

  • GithubPAToken
  • GithubSecret
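A hedged sketch of how a template could consume each kind of parameter (the resource names, role and secret names are assumptions; required properties such as the artifact store are omitted):

```yaml
# Hypothetical fragment; names are assumptions, not the actual templates.
Parameters:
  GithubBranch:                 # cloudformation parameter, overridable per deploy
    Type: String
    Default: master
  AcmCertArn:                   # resolved from ssm at deploy time
    Type: AWS::SSM::Parameter::Value<String>
    Default: /Samhstn/AcmCertArn

Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn   # role defined elsewhere
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
                Version: '1'
              Configuration:
                Owner: samhstn            # GithubOwner hardcoded
                Repo: samhstn             # GithubRepo hardcoded
                Branch: !Ref GithubBranch
                # GithubPAToken resolved from secrets manager
                OAuthToken: '{{resolve:secretsmanager:GithubPAToken}}'
              OutputArtifacts:
                - Name: source
```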
