clowdhaus / eks-reference-architecture
Reference EKS architectures using https://github.com/terraform-aws-modules/terraform-aws-eks
License: Apache License 2.0
The Terraform code fails while the null_resource is being executed. The problem is the shell process substitution / expansion in the line below:
kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
│ Error: local-exec provisioner error
│
│ with null_resource.remove_default_coredns_deployment,
│ on helm-provisioner.tf line 16, in resource "null_resource" "remove_default_coredns_deployment":
│ 16: provisioner "local-exec" {
│
│ Error running command 'kubectl --namespace kube-system delete deployment
│ coredns --kubeconfig <(echo $KUBECONFIG | base64 -d)
│ ': exit status 1. Output: error: CreateFile /proc/self/fd/63: The system
│ cannot find the path specified.
It would be nice if this command could be modified to avoid process substitution.
BR,
Sandy
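A portable workaround (a sketch, not the repository's fix) is to decode the kubeconfig into a temporary file rather than relying on bash process substitution, which depends on `/proc/self/fd` and is what produces the `CreateFile /proc/self/fd/63` error above:

```shell
#!/usr/bin/env bash
# Portable workaround: write the decoded kubeconfig to a temp file instead of
# using <(...) process substitution, which relies on /proc/self/fd.

# Fallback sample value so the sketch is self-contained; in the Terraform
# provisioner, KUBECONFIG is injected via the `environment` block.
KUBECONFIG="${KUBECONFIG:-$(printf 'apiVersion: v1\nkind: Config\n' | base64)}"

KUBECONFIG_FILE="$(mktemp)"
trap 'rm -f "$KUBECONFIG_FILE"' EXIT

echo "$KUBECONFIG" | base64 --decode > "$KUBECONFIG_FILE"

# Guarded so the sketch runs even where kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  kubectl --namespace kube-system delete deployment coredns \
    --kubeconfig "$KUBECONFIG_FILE"
fi
```

The temp file plus `trap ... EXIT` gives the same "no kubeconfig left on disk" property as process substitution, but works in any POSIX-ish shell environment.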
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
.github/workflows/pre-commit.yml
actions/checkout v4
clowdhaus/terraform-composite-actions v1.9.0
actions/checkout v4
clowdhaus/terraform-min-max v1.3.1
clowdhaus/terraform-composite-actions v1.9.0
clowdhaus/terraform-composite-actions v1.9.0
actions/checkout v4
clowdhaus/terraform-min-max v1.3.1
clowdhaus/terraform-composite-actions v1.9.0
cluster-autoscaler/add-ons.tf
aws-ia/eks-blueprints-addons/aws ~> 1.16
cluster-autoscaler/eks.tf
terraform-aws-modules/eks/aws ~> 20.5
cluster-autoscaler/main.tf
aws >= 5.40
helm >= 2.7
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
cluster-autoscaler/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
does-not-work/eks-mng-ssm-param/eks.tf
terraform-aws-modules/eks/aws ~> 20.0
does-not-work/eks-mng-ssm-param/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
does-not-work/eks-mng-ssm-param/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
eks-managed-node-group/eks_al2.tf
terraform-aws-modules/eks/aws ~> 20.0
eks-managed-node-group/eks_bottlerocket.tf
terraform-aws-modules/eks/aws ~> 20.0
eks-managed-node-group/eks_default.tf
terraform-aws-modules/eks/aws ~> 20.0
eks-managed-node-group/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
eks-managed-node-group/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
eks-mng-gpu/eks.tf
nvidia-device-plugin 0.15.0
terraform-aws-modules/eks/aws ~> 20.0
eks-mng-gpu/main.tf
aws >= 5.40
helm >= 2.7
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
eks-mng-gpu/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
ephemeral-vol-test/eks.tf
terraform-aws-modules/iam/aws ~> 5.20
terraform-aws-modules/eks/aws ~> 20.0
ephemeral-vol-test/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
ephemeral-vol-test/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
inferentia/add-ons.tf
aws-ia/eks-blueprints-addons/aws ~> 1.16
inferentia/eks.tf
terraform-aws-modules/eks/aws ~> 20.4
inferentia/main.tf
aws >= 5.40
helm >= 2.7
http >= 3.3
kubectl >= 2.0
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
inferentia/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
ipv4-prefix-delegation/eks.tf
terraform-aws-modules/eks/aws ~> 20.0
ipv4-prefix-delegation/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
ipv4-prefix-delegation/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
ipvs/eks.tf
terraform-aws-modules/eks/aws ~> 20.0
ipvs/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
ipvs/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
karpenter-gpu/add-ons.tf
aws-ia/eks-blueprints-addons/aws ~> 1.0
karpenter-gpu/eks.tf
terraform-aws-modules/iam/aws ~> 5.20
terraform-aws-modules/eks/aws ~> 19.15
karpenter-gpu/main.tf
aws >= 5.40
helm >= 2.6
kubectl >= 2.0.0
kubernetes >= 2.20
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
karpenter-gpu/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
karpenter/alb_controller.tf
aws-load-balancer-controller 1.8.1
terraform-aws-modules/iam/aws ~> 5.20
karpenter/eks.tf
terraform-aws-modules/eks/aws ~> 20.0
karpenter/karpenter.tf
karpenter 0.37.0
terraform-aws-modules/eks/aws ~> 20.0
karpenter/main.tf
aws >= 5.40
helm ~> 2.6
kubectl >= 2.0
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
karpenter/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
private/eks.tf
terraform-aws-modules/kms/aws ~> 3.0
terraform-aws-modules/eks/aws ~> 20.0
private/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
private/vpc.tf
terraform-aws-modules/security-group/aws ~> 5.0
terraform-aws-modules/vpc/aws ~> 5.0
terraform-aws-modules/vpc/aws ~> 5.0
self-managed-node-group/eks_default.tf
terraform-aws-modules/eks/aws ~> 20.0
self-managed-node-group/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
self-managed-node-group/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
serverless/eks.tf
terraform-aws-modules/eks/aws ~> 20.0
serverless/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
serverless/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
weaviate/add-ons.tf
aws-ia/eks-blueprints-addons/aws ~> 1.0
weaviate/eks.tf
terraform-aws-modules/iam/aws ~> 5.20
terraform-aws-modules/eks/aws ~> 20.0
weaviate/main.tf
aws >= 5.40
helm ~> 2.10
kubernetes >= 2.20
null >= 3.0
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
weaviate/sagemaker.tf
terraform-aws-modules/s3-bucket/aws ~> 4.0
terraform-aws-modules/security-group/aws ~> 5.0
weaviate/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
terraform-aws-modules/vpc/aws ~> 5.0
windows/eks.tf
terraform-aws-modules/eks/aws ~> 20.0
windows/main.tf
aws >= 5.40
hashicorp/terraform >= 1.3.2
clowdhaus/tags/aws ~> 1.0
windows/vpc.tf
terraform-aws-modules/vpc/aws ~> 5.0
Hi @bryantbiggs, I stumbled across your private cluster sample not long ago and was able to use it as a reference point to great effect.
Thank you for your effort in sharing knowledge in the AWS/Terraform field.
And keep going!
Describe the bug
The documentation for the eks-managed-node-group example is incomplete and/or incorrect.
To Reproduce
Visit:
a) https://github.com/clowdhaus/eks-reference-architecture/blob/main/README.md
b) https://github.com/clowdhaus/eks-reference-architecture/blob/main/eks-managed-node-group/README.md
Expected behavior
a) should mention the example
b) should describe the EKS managed node group example, but instead talks about Karpenter (it looks like a copy/paste where the modification was never finished)
Thanks for the hint regarding configuring the vpc-cni before the nodes for proper prefix delegation:
If you find that your nodes are not being created with the correct number of max pods (i.e. - for m5.large, if you are seeing a max pods of 29 instead of 110), most likely the vpc-cni was not configured before the EC2 instances.
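For reference, this ordering can be expressed in the terraform-aws-modules/eks module by configuring the vpc-cni addon before compute is created. A minimal sketch (attribute names assume a recent v20.x version of the module, and the placeholder node group is illustrative only):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # ... cluster_name, cluster_version, vpc_id, subnet_ids, etc.

  cluster_addons = {
    vpc-cni = {
      # Configure the CNI before any EC2 instances are created; otherwise
      # nodes boot with the default max-pods (e.g. 29 on m5.large).
      before_compute = true
      configuration_values = jsonencode({
        env = {
          ENABLE_PREFIX_DELEGATION = "true"
          WARM_PREFIX_TARGET       = "1"
        }
      })
    }
  }

  eks_managed_node_groups = {
    default = {
      # With prefix delegation enabled before nodes launch, an m5.large
      # should report 110 max pods instead of 29.
      instance_types = ["m5.large"]
    }
  }
}
```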
Describe the bug
I followed this example and I am stuck with the following status:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m59s fargate-scheduler Your AWS account is currently blocked and thus cannot launch any Fargate pods
To Reproduce
Steps to reproduce the behavior:
Use the serverless example tutorial code:
terraform {
  required_version = "~> 1.2.4"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.21.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.12.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.5" # "~> 2.6.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0" # "~> 3.1.0"
    }
  }
}
provider "aws" {
  profile = "syncifyEKS-terraform-admin"
  region  = local.region

  default_tags {
    tags = {
      Environment = "Staging"
      Owner       = "BT-Compliance"
      Terraform   = "True"
    }
  }
}
#
# Housekeeping
#
locals {
  project_name    = "syncify-dev"
  cluster_name    = "${local.project_name}-eks-cluster"
  cluster_version = "1.22"
  region          = "us-west-1"
}
/*
The following 2 data resources are used to get around the fact that we have to wait
for the EKS cluster to be initialised before we can attempt to authenticate.
*/
data "aws_eks_cluster" "default" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "default" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.default.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.default.token
  }
}
#############################################################################################
#############################################################################################
# Create EKS Cluster
#############################################################################################
#############################################################################################
# Create VPC for EKS Cluster
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.2"

  name = local.cluster_name
  cidr = "10.0.0.0/16"

  azs             = ["${local.region}a", "${local.region}b", "${local.region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]     #, "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"] #, "10.0.103.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${local.cluster_name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${local.cluster_name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${local.cluster_name}-default" }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.26.3"

  cluster_name    = local.cluster_name
  cluster_version = local.cluster_version

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_addons = {
    kube-proxy = {
      addon_version     = data.aws_eks_addon_version.this["kube-proxy"].version
      resolve_conflicts = "OVERWRITE"
    }
    vpc-cni = {
      addon_version     = data.aws_eks_addon_version.this["vpc-cni"].version
      resolve_conflicts = "OVERWRITE"
    }
  }

  # manage_aws_auth_configmap = true

  fargate_profiles = {
    default = {
      name = "default"
      selectors = [
        { namespace = "default" }
      ]
    }
    kube_system = {
      name = "kube-system"
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  }
}
data "aws_eks_addon_version" "this" {
  for_each = toset(["coredns", "kube-proxy", "vpc-cni"])

  addon_name         = each.value
  kubernetes_version = module.eks.cluster_version
  most_recent        = true
}
################################################################################
# Modify EKS CoreDNS Deployment
################################################################################
data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

locals {
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "terraform"
    clusters = [{
      name = module.eks.cluster_id
      cluster = {
        certificate-authority-data = module.eks.cluster_certificate_authority_data
        server                     = module.eks.cluster_endpoint
      }
    }]
    contexts = [{
      name = "terraform"
      context = {
        cluster = module.eks.cluster_id
        user    = "terraform"
      }
    }]
    users = [{
      name = "terraform"
      user = {
        token = data.aws_eks_cluster_auth.this.token
      }
    }]
  })
}
# Separate resource so that this is only ever executed once
resource "null_resource" "remove_default_coredns_deployment" {
  triggers = {}

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(local.kubeconfig)
    }

    # We are removing the deployment provided by the EKS service and replacing it
    # through the self-managed CoreDNS Helm addon. However, we are maintaining the
    # existing kube-dns service and annotating it for Helm to assume control
    command = <<-EOT
      kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
    EOT
  }
}
resource "null_resource" "modify_kube_dns" {
  triggers = {}

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = base64encode(local.kubeconfig)
    }

    # We are maintaining the existing kube-dns service and annotating it for Helm to assume control
    command = <<-EOT
      echo "Setting implicit dependency on ${module.eks.fargate_profiles["kube_system"].fargate_profile_pod_execution_role_arn}"
      kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-name=coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
      kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-namespace=kube-system --kubeconfig <(echo $KUBECONFIG | base64 --decode)
      kubectl --namespace kube-system label --overwrite service kube-dns app.kubernetes.io/managed-by=Helm --kubeconfig <(echo $KUBECONFIG | base64 --decode)
    EOT
  }

  depends_on = [
    null_resource.remove_default_coredns_deployment
  ]
}
################################################################################
# CoreDNS Helm Chart (self-managed)
################################################################################
resource "helm_release" "coredns" {
  name             = "coredns"
  namespace        = "kube-system"
  create_namespace = false
  description      = "CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services"
  chart            = "coredns"
  version          = "1.19.4"
  repository       = "https://coredns.github.io/helm"
  force_update     = true
  recreate_pods    = true

  # For EKS image repositories https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
  values = [
    <<-EOT
      image:
        repository: 602401143452.dkr.ecr.us-west-1.amazonaws.com/eks/coredns
        tag: ${data.aws_eks_addon_version.this["coredns"].version}
      deployment:
        name: coredns
        annotations:
          eks.amazonaws.com/compute-type: fargate
      service:
        name: kube-dns
        annotations:
          eks.amazonaws.com/compute-type: fargate
      podAnnotations:
        eks.amazonaws.com/compute-type: fargate
    EOT
  ]

  depends_on = [
    null_resource.modify_kube_dns
  ]
}
Expected behavior
The coredns pods should have been scheduled.
Any help would be greatly appreciated.