
amazon-eks-cdk-blue-green-cicd's Introduction

Building CI/CD with Blue/Green and Canary Deployments on EKS using CDK

In this workshop you'll learn to build a CI/CD pipeline (AWS CodePipeline) to develop a web-based application, containerize it, and deploy it on an Amazon EKS cluster. You'll use the blue/green method to deploy the application and review the switchover using Application Load Balancer (ALB) target groups. You will spawn this infrastructure using the AWS Cloud Development Kit (CDK), enabling you to reproduce the environment when needed, in relatively few lines of code.

The hosting infrastructure consists of pods hosted on Blue and Green services on Kubernetes worker nodes, accessed via an Application Load Balancer. The Blue service represents the production environment, accessed using the ALB DNS name with an HTTP query (group=blue), whereas the Green service represents a pre-production/test environment, accessed using a different HTTP query (group=green). The CodePipeline build stage uses CodeBuild to dockerize the application and push the images to Amazon ECR. In subsequent stages, the image is picked up and deployed on the Green service of the EKS cluster. The CodePipeline workflow then blocks at the approval stage, allowing the application on the Green service to be tested. Once the application is confirmed to be working fine, the user can issue an approval at the Approval stage and the application is then deployed onto the Blue service.

The Blue/Green architecture diagrams are provided below:

(figure: Blue/Green deployment architecture diagrams)

The CodePipeline looks like the figures below:

(figure: CodePipeline stages)

The current workshop is based upon this link, and the CDK here is extended further to incorporate CodePipeline and Blue/Green deployment on EKS with an ALB. We will also use weighted target groups to configure a Blue/Green canary deployment method. Note that CodeDeploy does not currently support deploying on EKS, so we instead use CodeBuild to run the commands that deploy the containers on pods and spawn the EKS ingress controller and ingress resource, which takes the form of an ALB. This workshop focuses on a simple method that is nonetheless a typical deployable model for production environments. Note that blue/green deployments can also be achieved using App Mesh, Lambda, or DNS-based canary deployments.

Procedure to follow:

Step 1: Cloud9 and commands to run:

First launch a Cloud9 terminal and prepare it with the following commands:

sudo yum install -y jq
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region
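Note: if your Cloud9 instance enforces IMDSv2, the unauthenticated metadata call above may fail. A token-based variant (a sketch; the 21600-second TTL is an arbitrary choice):

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
export AWS_REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')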

Ensure the Cloud9 instance is assigned an administrator role, and disable temporary credentials via Cloud9 -> AWS Settings -> Credentials -> Disable Temporary Credentials. Now install the kubectl package:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl help
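To confirm the binary is installed and on your PATH, a quick sanity check:

kubectl version --client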

Prepare the CDK prerequisites:

sudo yum install -y npm
npm install -g aws-cdk@1.30.0 --force
npm install -g typescript@latest
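Verify the toolchain before proceeding, a quick sanity check:

cdk --version
tsc --version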

Run git clone on this repository from Cloud9:

git clone https://github.com/aws-samples/amazon-eks-cdk-blue-green-cicd.git amazon-eks-cicd-codebuild-eks-alb-bg

Once cloned, run the below commands:

cd amazon-eks-cicd-codebuild-eks-alb-bg

Note: For this workshop, we are using CDK version 1.30. If you install the latest CDK version with "npm install -g aws-cdk" (without a version specification), you will also need to modify the EKS construct to include a Kubernetes version number (see the "Missing Kubernetes version" issue below).

git init
git add .
git commit -m "Initial Commit"
git status
git log

Now run the CDK steps as below:

cd cdk
cdk init
npm install
npm run build
cdk ls

Ensure the output is CdkStackALBEksBg

cdk synth
cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION
cdk deploy

You may be asked to confirm the creation of IAM roles and authorization changes before the CloudFormation stack is executed; respond with "y".
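If you are scripting this step, CDK can skip the interactive approval prompt; use this flag with care, since it silently accepts all IAM and security-group changes:

cdk deploy --require-approval never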

The infrastructure will take some time to create; please wait until you see the CloudFormation outputs printed on the terminal. Until then, take time to review the CDK code in the following file: cdk/lib/cdk-stack.ts

You may also check and compare the CloudFormation template generated from this CDK stack: cdk/cdk.out/CdkStackALBEksBg.template.json

Step 2: Configure the EKS environment:

Once the CDK stack is deployed successfully, go to the CloudFormation service, select the CdkStackALBEksBg stack, and go to the Outputs section to copy the value of the field "ClusterConfigCommand".

(figure: CloudFormation Outputs tab showing ClusterConfigCommand)

Then paste this command into the Cloud9 terminal to configure the EKS context. Ensure you can see 2 nodes listed in the output of:

kubectl get nodes
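If the nodes are still joining the cluster, you can wait for them to become Ready, a sketch:

kubectl get nodes -o wide
kubectl wait --for=condition=Ready nodes --all --timeout=300s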

Now configure the EKS cluster with the deployment, service, and ingress resource (as an ALB) using the following set of commands:

cd ../flask-docker-app/k8s
ls setup.sh
chmod +x setup.sh
chmod +x setup2.sh
INSTANCE_ROLE=$(aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg | jq .StackResources[].PhysicalResourceId | grep CdkStackALBEksBg-ClusterDefaultCapacityInstanceRol | tr -d '["\r\n]')
CLUSTER_NAME=$(aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg | jq '.StackResources[] | select(.ResourceType=="Custom::AWSCDK-EKS-Cluster").PhysicalResourceId' | tr -d '["\r\n]')
echo "INSTANCE_ROLE = " $INSTANCE_ROLE 
echo "CLUSTER_NAME = " $CLUSTER_NAME

Note: Before proceeding further, confirm that both the variables $INSTANCE_ROLE and $CLUSTER_NAME have values populated. If not, please bring it to the attention of the workshop owner; the IAM role naming convention may have changed with the version. Also, from EKS version 1.16 onwards, the Kubernetes Deployment API version apps/v1beta1 is deprecated in favor of apps/v1. The update has been made in the YAML files; however, if you are using an older version of EKS, you may need to revert this change. If either variable is empty, you can also inspect the stack resources manually, as sketched below.
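A sketch for listing the stack's physical resource IDs and locating the role name by eye:

aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg --query 'StackResources[].PhysicalResourceId' --output text | tr '\t' '\n' | grep -i role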

./setup2.sh $AWS_REGION $INSTANCE_ROLE $CLUSTER_NAME

Step 3: Modify the ALB Security Group:

Modify the security group (ControlPlaneSecurityGroup) of the newly spawned Application Load Balancer to add an inbound rule allowing HTTP port 80 from 0.0.0.0/0. Navigate to Services -> EC2 -> Load Balancers -> select the latest created ALB -> click the Description tab -> scroll down to locate the security group. Edit this security group to add a new rule with the following parameters: HTTP, 80, 0.0.0.0/0.

Additionally, from EKS version 1.17 onwards, you also need to change the security group of the worker-node data plane (InstanceSecurityGroup) by adding an inbound rule allowing HTTP port 80 from the ControlPlaneSecurityGroup (the ALB's security group). The same rules can be added from the CLI, as sketched below.
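A sketch, where the security-group IDs are placeholders you must substitute from your own account:

aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <instance-sg-id> --protocol tcp --port 80 --source-group <alb-sg-id>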

Now check the newly created load balancer and review the listener routing rules: Services -> EC2 -> Load Balancers -> select the latest created ALB -> click the Listeners tab -> View/Edit Rules. You should see the settings shown below.

Check the load balancer target groups and ensure the healthy hosts have registered and the health check is consistently passing, as shown below. You can also verify this from the CLI, as sketched after the figures.

(figure: target group showing healthy registered targets)

Check the healthy hosts count graph to ensure the hosts and containers are stable:

(figure: healthy hosts count graph)
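A sketch for checking target health from the CLI; the target-group ARN placeholder comes from the first command's output:

aws elbv2 describe-target-groups --query 'TargetGroups[].TargetGroupArn' --output text
aws elbv2 describe-target-health --target-group-arn <target-group-arn>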

Step 4: Upload the application to the CodeCommit repo:

Now that the ALB is set up, complete the last part of the configuration by uploading the code to the CodeCommit repository:

cd ../..
pwd   # confirm your current directory is amazon-eks-cicd-codebuild-eks-alb-bg
git add flask-docker-app/k8s/alb-ingress-controller.yaml
git add flask-docker-app/k8s/flaskALBIngress_query.yaml
git add flask-docker-app/k8s/flaskALBIngress_query2.yaml
git add flask-docker-app/k8s/iam-policy.json
git commit -m "Updated files"
git remote add codecommit https://git-codecommit.$AWS_REGION.amazonaws.com/v1/repos/CdkStackALBEksBg-repo
git push -u codecommit master

This will push the last commit we carried out in our preparation section, which in turn will trigger the CodePipeline.
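If the push prompts for credentials or fails to authenticate, the AWS CLI credential helper for CodeCommit usually resolves it (this assumes the Cloud9 role has CodeCommit permissions):

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true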

Review the Infrastructure:

Collect the DNS Name from the Load Balancer and access the homepage:

(figure: application homepage served via the ALB DNS name)

Once the application is pushed to the repository, the CodePipeline is triggered and CodeBuild runs the set of commands to dockerize the application and push the image to the Amazon ECR repository. CodeBuild in turn runs the kubectl commands to create the Blue and Green services on the EKS cluster, if they do not exist. On the first run, it pulls the Flask demo application from Docker Hub and deploys it on both the Blue and Green services. It should say "Your Flask application is now running on a container in Amazon Web Services", as shown below:

(figure: Flask application responses for the Blue and Green services)

Go to Services -> CodePipeline -> Pipelines -> CdkStackALBEksBg-[unique-string]. Review the stages and the CodeBuild projects to understand the implementation. Once the application is deployed on the Green service, access it as mentioned above: http://ALB-DNS-name/?group=green.

It is important to note that the container exposes port 5000, whereas the service exposes port 80 (for blue-service) or 8080 (for green-service), which in turn is mapped to a local port on the EC2 worker node instance. You can exercise both services from the terminal, as sketched below.
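A sketch for fetching the ALB DNS name and querying both services; it assumes the workshop ALB is the only load balancer in this account and region:

ALB_DNS=$(aws elbv2 describe-load-balancers --query 'LoadBalancers[0].DNSName' --output text)
curl -s "http://$ALB_DNS/?group=blue"
curl -s "http://$ALB_DNS/?group=green"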

After testing is completed, go to the Approval stage and click Approve. This triggers CodePipeline to execute the "Swap and Deploy" stage, where it swaps the mapping of target groups to the Blue/Green services.

(figure: Approval stage and Swap and Deploy stage in CodePipeline)

Configuring for Canary Deployments:

Configure your ALB for canary-based deployments using the below commands from your Cloud9 terminal:

cd /home/ec2-user/environment/amazon-eks-cicd-codebuild-eks-alb-bg/flask-docker-app/k8s
kubectl apply -f flaskALBIngress_query2.yaml

(figure: weighted target-group canary configuration)
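To confirm the weighted (canary) rules took effect, inspect the ingress resource; a sketch, with the namespace and ingress name taken from the cleanup section below:

kubectl get ingress -n flask-alb
kubectl describe ingress alb-ingress -n flask-alb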

To bring the ALB config back to the non-canary configuration, run the below commands:

cd /home/ec2-user/environment/amazon-eks-cicd-codebuild-eks-alb-bg/flask-docker-app/k8s
kubectl apply -f flaskALBIngress_query.yaml

Cleanup

(a) Remove the EKS services: First connect to the Kubernetes cluster using the command published as "ConfigCommand" in the CloudFormation outputs. Run "kubectl get svc" to check that you can see the Blue and Green services. Then run the below commands to delete the services:

kubectl delete svc/flask-svc-alb-blue svc/flask-svc-alb-green -n flask-alb
kubectl delete deploy/flask-deploy-alb-blue deploy/flask-deploy-alb-green -n flask-alb
kubectl delete ingress alb-ingress -n flask-alb
kubectl delete deploy alb-ingress-controller -n kube-system
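To confirm nothing is left behind in the application namespace, a quick check:

kubectl get all,ingress -n flask-alb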

(b) Destroy the CDK stack: Ensure you are in the cdk directory, then run the below command:

cdk destroy

(c) Remove the individually created policies: In the IAM console, go to Roles, select the worker node instance role (search for CdkStackALBEksBg-ClusterDefaultCapacityInstanceRol), and remove the inline elb-policy. Then go to Policies, select the "alb-ingress-controller" managed policy created for kubectl, choose Policy Actions, and Delete. The equivalent CLI calls are sketched below.
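A sketch of the equivalent CLI calls, with placeholder names you must substitute:

aws iam delete-role-policy --role-name <worker-node-instance-role-name> --policy-name elb-policy
aws iam list-policies --scope Local --query 'Policies[?PolicyName==`alb-ingress-controller`].Arn' --output text
aws iam delete-policy --policy-arn <alb-ingress-controller-policy-arn>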

Conclusion:

We built a CI/CD pipeline using CDK to containerize and deploy a Python Flask-based application using the Blue/Green deployment method on Amazon EKS. We made a code change and saw it propagate through the CI/CD pipeline and deploy onto the Blue/Green services of EKS. We also configured and tested the B/G canary deployment method.

Hope you enjoyed the workshop!

License

This library is licensed under the MIT-0 License. See the LICENSE file.

amazon-eks-cdk-blue-green-cicd's People

Contributors: amazon-auto, dependabot[bot], matlaver, nvaidya1


amazon-eks-cdk-blue-green-cicd's Issues

Error while setting context after cdk deploy

Greetings,
While using the command to set the context from the CdkStackALBEksBg stack's output, I get the following error:
"W0525 09:45:28.801741 26717 loader.go:221] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?"

Any thoughts would be helpful.
Thanks.

A question about the region of this sample

Hi, thanks for sharing this sample with everyone; it is very nice and useful for testing a CDK-driven B/G deployment on AWS EKS.
May I ask a question about the supported regions?
Is this sample currently recommended only for us-east-2 (Ohio) or us-west-2 (Oregon)?
An error occurs when I use it in ap-east-1, like below:
Remote function error: UnrecognizedClientException: The security token included in the request is invalid.
It caused the CdkStackALBEksBg creation to fail.
Could you give me some advice on this error? Many thanks.

Failed CodePipeline step called 'BuildAndDeploy'

Hi everyone,

Following the workshop with the suggested settings, I end up with a failed CodePipeline step called BuildAndDeploy.

I see the following errors:


time="2021-09-30T05:28:08.981386139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: can't change directory to '/lib/modules': No such file or directory\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981603590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (overlay) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981635759Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2021-09-30T05:28:08.981925786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981967539Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2021-09-30T05:28:09.013623179Z" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay
time="2021-09-30T05:28:09.013906126Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.mobyfailed to start daemon: error initializing graphdriver: driver not supported
[Container] 2021/09/30 05:29:11 Command did not exit successfully /usr/local/bin/entrypoint.sh exit status 1
[Container] 2021/09/30 05:29:11 Phase complete: PRE_BUILD State: FAILED
[Container] 2021/09/30 05:29:11 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: /usr/local/bin/entrypoint.sh. Reason: exit status 1


Full Build log for the CodeBuild project 'CdkStackALBEksBg':


[Container] 2021/09/30 05:28:02 Waiting for agent ping
[Container] 2021/09/30 05:28:04 Waiting for DOWNLOAD_SOURCE
[Container] 2021/09/30 05:28:05 Phase is DOWNLOAD_SOURCE
[Container] 2021/09/30 05:28:05 CODEBUILD_SRC_DIR=/codebuild/output/src627946808/src
[Container] 2021/09/30 05:28:05 YAML location is /codebuild/readonly/buildspec.yml
[Container] 2021/09/30 05:28:05 Processing environment variables
[Container] 2021/09/30 05:28:07 Moving to directory /codebuild/output/src627946808/src
[Container] 2021/09/30 05:28:07 Registering with agent
[Container] 2021/09/30 05:28:07 Phases found in YAML: 3
[Container] 2021/09/30 05:28:07  PRE_BUILD: 3 commands
[Container] 2021/09/30 05:28:07  BUILD: 4 commands
[Container] 2021/09/30 05:28:07  POST_BUILD: 9 commands
[Container] 2021/09/30 05:28:07 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2021/09/30 05:28:07 Phase context status code:  Message: 
[Container] 2021/09/30 05:28:07 Entering phase INSTALL
[Container] 2021/09/30 05:28:07 Phase complete: INSTALL State: SUCCEEDED
[Container] 2021/09/30 05:28:07 Phase context status code:  Message: 
[Container] 2021/09/30 05:28:07 Entering phase PRE_BUILD
[Container] 2021/09/30 05:28:07 Running command env
MAVEN_OPTS=-Dmaven.wagon.httpconnectionManager.maxPerRoute=2
CODEBUILD_LAST_EXIT=0
CODEBUILD_START_TIME=1632979655015
CODEBUILD_BMR_URL=https://CODEBUILD_AGENT:3000
CODEBUILD_SOURCE_VERSION=arn:aws:s3:::cdkstackalbeksbg-mypipelineartifactsbucket727923d-1ic2ebutkesie/CdkStackALBEksBg-MyP/Artifact_S/TJeJoWi
CODEBUILD_AGENT_ENDPOINT=http://127.0.0.1:7831
HOSTNAME=08cd47c841dc
CODEBUILD_BUILD_ID=CdkStackALBEksBg:25446bcb-6adc-4d14-afed-aa689fe451ba
CODEBUILD_KMS_KEY_ID=arn:aws:kms:ca-central-1:127582704763:key/563be6e1-9d74-4fa7-bc41-b06f70c4388e
SHLVL=4
HOME=/home/kubectl
OLDPWD=/codebuild/readonly
CODEBUILD_GOPATH=/codebuild/output/src627946808
CODEBUILD_CI=true
CODEBUILD_RESOLVED_SOURCE_VERSION=1eb8b616af0da18b74a2e531a8b4ca6886d89d4b
CODEBUILD_BUILD_NUMBER=2
CODEBUILD_BUILD_SUCCEEDING=1
CODEBUILD_BUILD_ARN=arn:aws:codebuild:ca-central-1:127582704763:build/CdkStackALBEksBg:25446bcb-6adc-4d14-afed-aa689fe451ba
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/86d2a991-caf1-40e7-a6e7-c3e33c459bff
AWS_EXECUTION_ENV=AWS_ECS_EC2
CODEBUILD_INITIATOR=codepipeline/CdkStackALBEksBg-MyPipelineAED38ECF-5Q0DFYOKGTGH
CLUSTER_NAME=Cluster9EE0221C-fd2a08240a684863b6a40c7ce3a217be
AWS_DEFAULT_REGION=ca-central-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/20b835ea-78af-4d80-9b08-387f67e356a9
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/20b835ea-78af-4d80-9b08-387f67e356a9
CODEBUILD_EXECUTION_ROLE_BUILD=true
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/codebuild/user/bin
CODEBUILD_LOG_PATH=25446bcb-6adc-4d14-afed-aa689fe451ba
KUBECONFIG=/home/kubectl/.kube/kubeconfig
CODEBUILD_BUILD_IMAGE=127582704763.dkr.ecr.ca-central-1.amazonaws.com/aws-cdk/assets:05b5fca160e9c31e9355fe725ddc7f3f5831da37fb288796ac46534af4f123dd
GOPATH=/codebuild/output/src627946808
CODEBUILD_BUILD_URL=https://ca-central-1.console.aws.amazon.com/codebuild/home?region=ca-central-1#/builds/CdkStackALBEksBg:25446bcb-6adc-4d14-afed-aa689fe451ba/view/new
AWS_REGION=ca-central-1
CODEBUILD_SRC_DIR=/codebuild/output/src627946808/src
CODEBUILD_PROJECT_UUID=0b6aeee7-155d-4f2f-b69f-b5b7522e67d9
CODEBUILD_CONTAINER_NAME=default
CODEBUILD_AUTH_TOKEN=07353edb-1cdf-4cb4-bfaa-96c72a29ecd5
PWD=/codebuild/output/src627946808/src
CODEBUILD_FE_REPORT_ENDPOINT=https://codebuild.ca-central-1.amazonaws.com/
ECR_REPO_URI=127582704763.dkr.ecr.ca-central-1.amazonaws.com/cdkstackalbeksbg-ecrrepobb83a592-hkcjhywp2meu

[Container] 2021/09/30 05:28:07 Running command export TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}

[Container] 2021/09/30 05:28:07 Running command /usr/local/bin/entrypoint.sh
found myself in AWS CodeBuild, starting dockerd...
time="2021-09-30T05:28:07.922774683Z" level=info msg="Starting up"
time="2021-09-30T05:28:07.923513675Z" level=warning msg="Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network." host="tcp://127.0.0.1:2375"
time="2021-09-30T05:28:07.923547874Z" level=warning msg="Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there!" host="tcp://127.0.0.1:2375"
time="2021-09-30T05:28:08.924537168Z" level=info msg="libcontainerd: started new containerd process" pid=77
time="2021-09-30T05:28:08.924596210Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-09-30T05:28:08.924607050Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-09-30T05:28:08.924638769Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
time="2021-09-30T05:28:08.924659756Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-09-30T05:28:08.952574160Z" level=info msg="starting containerd" revision=7eba5930496d9bbe375fdf71603e610ad737d2b2 version=v1.4.8
time="2021-09-30T05:28:08.980746919Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2021-09-30T05:28:08.980843604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981386139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: can't change directory to '/lib/modules': No such file or directory\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981409527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981603590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (overlay) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981619643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981635759Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2021-09-30T05:28:08.981643854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981690149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981806733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981925786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-09-30T05:28:08.981938133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2021-09-30T05:28:08.981967539Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2021-09-30T05:28:08.981976666Z" level=info msg="metadata content store policy set" policy=shared
time="2021-09-30T05:28:08.989820736Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2021-09-30T05:28:08.989840690Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2021-09-30T05:28:08.989868713Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.989905552Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.989919601Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.989950927Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.989968253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.989983397Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.989994091Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.990005894Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.990015724Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2021-09-30T05:28:08.990120470Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2021-09-30T05:28:08.990187741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2021-09-30T05:28:08.990474727Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2021-09-30T05:28:08.990500233Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2021-09-30T05:28:08.990530419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990540844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990550744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990561846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990571524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990586619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990610657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990623371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990632795Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2021-09-30T05:28:08.990778354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990793465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990803381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990812884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2021-09-30T05:28:08.990975559Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
time="2021-09-30T05:28:08.991026486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
time="2021-09-30T05:28:08.991074345Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
time="2021-09-30T05:28:08.991091010Z" level=info msg="containerd successfully booted in 0.039353s"
time="2021-09-30T05:28:09.005781998Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-09-30T05:28:09.005806495Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-09-30T05:28:09.005829474Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
time="2021-09-30T05:28:09.005845370Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-09-30T05:28:09.007036285Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-09-30T05:28:09.007047993Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-09-30T05:28:09.007065069Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
time="2021-09-30T05:28:09.007076300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-09-30T05:28:09.007715319Z" level=warning msg="[graphdriver] WARNING: the overlay storage-driver is deprecated, and will be removed in a future release"
time="2021-09-30T05:28:09.013623179Z" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay
time="2021-09-30T05:28:09.013906126Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
time="2021-09-30T05:28:09.013925475Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
failed to start daemon: error initializing graphdriver: driver not supported
Timed out trying to connect to internal docker host.

[Container] 2021/09/30 05:29:11 Command did not exit successfully /usr/local/bin/entrypoint.sh exit status 1
[Container] 2021/09/30 05:29:11 Phase complete: PRE_BUILD State: FAILED
[Container] 2021/09/30 05:29:11 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: /usr/local/bin/entrypoint.sh. Reason: exit status 1

Am I missing something, or does some piece need an update?

Environment:

SuperAdminRole:~/environment/amazon-eks-cicd-codebuild-eks-alb-bg (master) $ cdk version
1.124.0 (build 65761fe)
SuperAdminRole:~/environment/amazon-eks-cicd-codebuild-eks-alb-bg (master) $ aws --version
aws-cli/1.19.112 Python/2.7.18 Linux/4.14.246-187.474.amzn2.x86_64 botocore/1.20.112

Thank you,

CDK deployment failing

Encountered the issue below and got stuck at cdk deploy. Is this a known issue?
ontrolPlaneSecurityGroupD274242C)
22/67 | 4:31:14 AM | CREATE_FAILED | AWS::CloudFormation::Stack | @aws-cdk--aws-eks.ClusterResourceProvider.NestedStack/@aws-cdk--aws-eks.ClusterResourceProvider.NestedStackResource (awscdkawseksClusterResourceProviderNestedStackawscdkawseksClusterResourceProviderNestedStackResource9827C454) Embedded stack arn:aws:cloudformation:us-west-1

Load balancer failed to register healthy targets.

Registered targets under both target groups were showing unhealthy, as requests were timing out.
As a resolution, allow the required ports in the EC2 instance security group from the load balancer SG to register the targets successfully, and you will be able to see the website!

Could not access Jenkins web interface with error "jenkins.model.InvalidBuildsDir"

Hi,

I am following the workshop "Building CICD pipeline with Jenkins on Amazon ECS" (https://third-party-integration-codepipeline-eks-bluegreen.workshop.aws/en/).

During the "2. IaC Launch" step, I got an error when I tried to access the URL of the Jenkins web interface from the CloudFormation output after CloudFormation was deployed. The ELB showed "OutOfService" status, so I changed the EC2 security group to allow access to port 8080 directly. The error message is shown below.

Error:
jenkins.model.InvalidBuildsDir: ${ITEM_ROOTDIR}/builds does not exist and
probably cannot be created
at jenkins.model.Jenkins.checkRawBuildsDir(Jenkins.java:3085)
at jenkins.model.Jenkins.loadConfig(Jenkins.java:3009)
Caused: java.io.IOException
...

I did some research and found that the error comes from the Jenkins directory permissions/owner on the "/data/jobs" path, which is created by extracting an archive from another GitHub package (

"if [[ ! -d /data/jobs/ ]]; then cd /data; curl -LJO https://github.com/aws-samples/amazon-eks-cdk-blue-green-cicd/raw/master/cicd/jenkins-jobs-archive.tar.gz; tar -xzvf jenkins-jobs-archive.tar.gz; cd; ls /data/jobs/; else echo 'SKIPPING DOWNLOAD...'; fi\n"
). By the way, the permission is changed at line 551, which is before the package is extracted ( ). Should the permission be changed after line 553 to fix this?

I manually changed the owner to "1000:1000" in my workshop, and it works properly.

Does the public ECR repository "public.ecr.aws/p8v8e7e5/myartifacts:alpine-3.8" still exist?

I'm getting an error when pulling the image public.ecr.aws/p8v8e7e5/myartifacts:alpine-3.8. Here's the error message I get when running a docker pull command on the repository.

(base) admin:~/environment $ docker pull public.ecr.aws/p8v8e7e5/myartifacts:alpine-3.8
Error response from daemon: repository public.ecr.aws/p8v8e7e5/myartifacts not found: name unknown: The repository with name 'myartifacts' does not exist in the registry with id 'p8v8e7e5'

Correction required in setting environment variable (Step 2)

The command provided to set the INSTANCE_ROLE environment variable needs correction, as it is not retrieving the instance role from the list of CloudFormation-created resources, which leads to failure of the ingress pod.

Currently the command mentioned is
INSTANCE_ROLE=$(aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg | jq .StackResources[].PhysicalResourceId | grep CdkStackALBEksBg-ClusterDefaultCapacityInstanceRol | tr -d '["\r\n]')

The corrected command, pointing to the node-group logical resource:
INSTANCE_ROLE=$(aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg | jq .StackResources[].PhysicalResourceId | grep CdkStackALBEksBg-ClusterNodegroupDefaultCapacityNo | tr -d '["\r\n]')

Missing Kubernetes version

It seems like the eks.Cluster construct requires a Kubernetes version. Adding the version property fixed it for me.
Previous:

const cluster = new eks.Cluster(this, 'Cluster', {
  vpc,
  defaultCapacity: 2,
  mastersRole: clusterAdmin,
  outputClusterName: true,
});

Suggested:

const cluster = new eks.Cluster(this, 'Cluster', {
  vpc,
  version: eks.KubernetesVersion.V1_16,
  defaultCapacity: 2,
  mastersRole: clusterAdmin,
  outputClusterName: true,
});

CDK Deployment Failing at "CdkStackALBEksBg-awscdkawseksClusterResourceProviderNestedStack"

Receiving the following error at the nested stack "CdkStackALBEksBg-awscdkawseksClusterResourceProviderNestedStack"

ProviderframeworkonTimeout0B47CA38 --> Resource handler returned message: "The runtime parameter of nodejs10.x is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs14.x) while creating or updating functions. (Service: Lambda, Status Code: 400, Request ID: 9e50dd6e-ca8c-4138-8507-e984cf46cc33, Extended Request ID: null)" (RequestToken: 526f9b43-d827-cd1f-8941-5f1aacacb555, HandlerErrorCode: InvalidRequest)

I'm not sure where the nested stack code is. Please advise where I can update this Lambda runtime?

Thank You!

Load Balancers are not coming up

I followed this workshop and worked through it; however, I am hitting errors like the ones below.

When I look at the config file, the server URL is something like this: https://B178A9BE7A266F1715BE1B6506E72C5A.gr7.us-west-2.eks.amazonaws.com
I don't know what this "gr7" is.

getting credentials: exec: exit status 255
I0122 22:06:07.360209 7877 helpers.go:221] Connection error: Get https://B178A9BE7A266F1715BE1B6506E72C5A.gr7.us-west-2.eks.amazonaws.com/api?timeout=32s: getting credentials: exec: exit status 255

I am getting this error on kubectl get nodes, and there are also no load balancers spun up in AWS.

CDK deployment failing: repository does not exist in the registry

Hello all,

I've hit an error in the workshop at Step 6: cdk deploy.
The CdkStackALBEksBg stack failed with the following logs:

CdkStackALBEksBg: deploying...
[0%] start: Publishing 26ac61b4195cccf80ff73f332788ad7ffaab36d81ce570340a583a8364901665:current
[10%] success: Published 26ac61b4195cccf80ff73f332788ad7ffaab36d81ce570340a583a8364901665:current
[10%] start: Publishing 3144c5280df58e5cdbad2fdc0464c2dc114b48bdc883d5964c0f7cf329e191fa:current
[20%] success: Published 3144c5280df58e5cdbad2fdc0464c2dc114b48bdc883d5964c0f7cf329e191fa:current
[20%] start: Publishing c691172cdeefa2c91b5a2907f9d81118e47597634943344795f1a844192dd49c:current
[30%] success: Published c691172cdeefa2c91b5a2907f9d81118e47597634943344795f1a844192dd49c:current
[30%] start: Publishing 4129bbca38164ecb28fee8e5b674f0d05e5957b4b8ed97d9c950527b5cc4ce10:current
[40%] success: Published 4129bbca38164ecb28fee8e5b674f0d05e5957b4b8ed97d9c950527b5cc4ce10:current
[40%] start: Publishing e9882ab123687399f934da0d45effe675ecc8ce13b40cb946f3e1d6141fe8d68:current
[50%] success: Published e9882ab123687399f934da0d45effe675ecc8ce13b40cb946f3e1d6141fe8d68:current
[50%] start: Publishing ea17febe6d04c66048f3e8e060c71685c0cb53122abceff44842d27bc0d4a03e:current
[60%] success: Published ea17febe6d04c66048f3e8e060c71685c0cb53122abceff44842d27bc0d4a03e:current
[60%] start: Publishing 540c42bf93221d226e9892dca8f369ca800db0f340fb22015b9723dd093760e1:current
[70%] success: Published 540c42bf93221d226e9892dca8f369ca800db0f340fb22015b9723dd093760e1:current
[70%] start: Publishing 81cf0917f6b2b41f4037af7e72c7c75993aa9f419a182bc27d6e30e2d55bcf94:current
[80%] success: Published 81cf0917f6b2b41f4037af7e72c7c75993aa9f419a182bc27d6e30e2d55bcf94:current
[80%] start: Publishing 587eec5f7189189557b02f86a2f8dd746bb63d0b11a488ee97d629d2eeb72336:current
[90%] success: Published 587eec5f7189189557b02f86a2f8dd746bb63d0b11a488ee97d629d2eeb72336:current
[90%] start: Publishing f49a00e629119a89acb65cce47bbf091d7dab3ec26220bca4923d04c4d3f4cd2:current
Sending build context to Docker daemon   5.12kB
Step 1/10 : FROM public.ecr.aws/p8v8e7e5/myartifacts:alpine-jan2021
repository public.ecr.aws/p8v8e7e5/myartifacts not found: name unknown: The repository with name 'myartifacts' does not exist in the registry with id 'p8v8e7e5'
[100%] fail: docker build --tag cdkasset-f49a00e629119a89acb65cce47bbf091d7dab3ec26220bca4923d04c4d3f4cd2 . exited with error code 1: repository public.ecr.aws/p8v8e7e5/myartifacts not found: name unknown: The repository with name 'myartifacts' does not exist in the registry with id 'p8v8e7e5'

Environment:

cdk version - 1.134.0 (build dd5e12d)
aws version - aws-cli/1.19.112 Python/2.7.18 Linux/4.14.301-224.520.amzn2.x86_64 botocore/1.20.112
npm version - 6.14.17

Could you suggest whether I missed something, or has the public repository been deleted recently?
Thank you.

Documentation updates needed

While working through the workshop, I identified the following issues that need to be updated in the documentation:

  • Step 2(b) in "Setting up Cloud9 IDE": New EC2 console introduced a necessary change. Attaching an IAM role moved to Actions > Security > Modify IAM role in the new EC2 console. Needs updated instructions and a new screenshot.
  • Step 1(c) in "Prepare and Download packages": Formatting error in cdk command. Use "cdk --version", not "cdk -version".
  • Step 1(d) in "Prepare and Download packages": There is no untar operation, but the documentation says to ignore errors in the untar command.
  • Step 1(e) in "Prepare and Download packages": There is no CodeCommit operation, so the CodeCommit access via http link is either not needed or is premature.
  • Step 6 in "CDK Launch": The step says that you deploy the CDKToolkit to trigger the CloudFormation stack to execute. This isn't strictly true. The CDKToolkit stack is deployed when you run "cdk bootstrap", and it simply deploys an S3 bucket to hold assets related to cdk execution. The "cdk deploy" command actually launches 4 new CloudFormation stacks (starting with "CdkStackALBEksBg"), and it doesn't touch the CDKToolkit stack at all.
  • Step 1(a) in "Access EKS": The line that reads "Now under Load Balance -> Select Instances Tab" should be changed to "Now under Load Balancing -> Target groups, Select the Monitoring Tab".
