
terraform-aws-minikube's Introduction

AWS Minikube Terraform module

AWS Minikube is a single-node Kubernetes deployment in AWS. It creates an EC2 host and deploys a Kubernetes cluster on it using the kubeadm tool. It provides full integration with AWS: it is able to handle ELB load balancers, EBS disks, Route53 domains, etc.

Updates

  • 29.4.2024 Update to Kubernetes 1.30.0
  • 31.3.2024 Update to Kubernetes 1.29.3 + Ingress and External DNS add-on updates
  • 18.2.2024 Update to Kubernetes 1.29.2 + Ingress add-on update
  • 30.12.2023 Update to Kubernetes 1.29.0
  • 26.11.2023 Update to Kubernetes 1.28.4
  • 12.11.2023 Update to Kubernetes 1.28.3 + Update some add-ons
  • 15.10.2023 Update to Kubernetes 1.28.2 + Update some add-ons
  • 16.4.2023 Update to Kubernetes 1.27.1 + Use external AWS Cloud Provider
  • 1.4.2023 Update to Kubernetes 1.26.3 + update add-ons (Ingress-NGINX Controller, External DNS, Metrics Server, AWS EBS CSI Driver)
  • 4.3.2023 Update to Kubernetes 1.26.2 + update add-ons (Ingress-NGINX Controller)
  • 22.1.2023 Update to Kubernetes 1.26.1 + update add-ons (External DNS)
  • 10.12.2022 Update to Kubernetes 1.26.0 + update add-ons (AWS EBS CSI Driver, Metrics server)
  • 13.11.2022 Update to Kubernetes 1.25.4 + update add-ons
  • 2.10.2022 Update to Kubernetes 1.25.2 + update add-ons
  • 26.8.2022 Update to Kubernetes 1.25.0 + Calico upgrade

Prerequisites and dependencies

  • AWS Minikube deploys into an existing VPC / public subnet. If you don't have a VPC / subnet yet, you can use this configuration or this module to create one (a minimal sketch is also shown after this list).
    • The VPC / subnet should be properly linked with an Internet Gateway (IGW) and should have DNS and DHCP enabled.
    • A hosted DNS zone must be configured in Route53 (if the zone is private, you have to use the IP address to copy the kubeconfig and access the cluster).
  • Apart from Terraform, there are no other dependencies for deploying AWS Minikube. Kubeadm is used only on the EC2 host and doesn't have to be installed locally.
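
If you need to create the VPC and subnet yourself, a minimal sketch could look like the following (the resource names and CIDR ranges are illustrative assumptions, not part of this module):

# Minimal VPC/subnet sketch -- names and CIDR ranges are illustrative assumptions
resource "aws_vpc" "minikube" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true   # DNS must be enabled for the cluster
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "minikube" {
  vpc_id = aws_vpc.minikube.id
}

resource "aws_subnet" "minikube" {
  vpc_id                  = aws_vpc.minikube.id
  cidr_block              = "10.0.0.0/24"
  map_public_ip_on_launch = true   # public subnet
}

resource "aws_route_table" "minikube" {
  vpc_id = aws_vpc.minikube.id

  # Default route through the IGW makes the subnet public
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.minikube.id
  }
}

resource "aws_route_table_association" "minikube" {
  subnet_id      = aws_subnet.minikube.id
  route_table_id = aws_route_table.minikube.id
}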

Including the module

Although it can be run on its own, the main value is that it can be included in another Terraform configuration.

module "minikube" {
  source = "github.com/scholzj/terraform-aws-minikube"

  aws_region          = "eu-central-1"
  cluster_name        = "my-minikube"
  aws_instance_type   = "t2.medium"
  ssh_public_key      = "~/.ssh/id_rsa.pub"
  aws_subnet_id       = "subnet-8a3517f8"
  ami_image_id        = "ami-b81dbfc5"
  hosted_zone         = "my-domain.com"
  hosted_zone_private = false

  tags = {
    Application = "Minikube"
  }

  addons = [
    "https://raw.githubusercontent.com/scholzj/terraform-aws-minikube/master/addons/storage-class.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-minikube/master/addons/heapster.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-minikube/master/addons/dashboard.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-minikube/master/addons/external-dns.yaml"
  ]
}

An example of how to include this can be found in the examples dir.

Using custom AMI Image

AWS Minikube is built and tested on CentOS 7, but it also gives you the possibility to use your own AMI image. A custom AMI image should be based on an RPM distribution and should be similar to CentOS 7. When the ami_image_id variable is not specified, the latest available CentOS 7 image will be used (see the lookup sketch below).
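
If you want to pin the image explicitly rather than rely on the default lookup, a data source like the following can find a CentOS 7 AMI (a sketch; the owner and product-code filter values are assumptions based on the public AWS Marketplace CentOS 7 listing):

# Sketch: look up the latest CentOS 7 AMI on the AWS Marketplace.
# The product-code value is an assumption based on the public CentOS 7 listing.
data "aws_ami" "centos7" {
  most_recent = true
  owners      = ["aws-marketplace"]

  filter {
    name   = "product-code"
    values = ["aw0evgkw8e5c1q413zgy5pjce"]
  }
}

# Pass it to the module:
#   ami_image_id = data.aws_ami.centos7.id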

Add-ons

Currently, the following add-ons are supported:

  • Kubernetes dashboard
  • Heapster for resource monitoring
  • Storage class and CSI driver for automatic provisioning of persistent volumes
  • External DNS (Replaces Route53 mapper)
  • Ingress

The add-ons will be installed automatically based on the Terraform variables.

Custom add-ons

Custom add-ons can be added if needed. For every URL in the addons list, the initialization script will automatically call kubectl apply -f <Addon URL> to deploy it. Minikube uses RBAC, so the custom add-ons have to be RBAC ready, as in the sketch below.
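
For example, appending your own manifest URL to the addons list inside the module block (the URL below is a placeholder for your own hosted manifest):

  addons = [
    "https://raw.githubusercontent.com/scholzj/terraform-aws-minikube/master/addons/storage-class.yaml",
    # A custom add-on -- the URL is a placeholder; the manifest must define
    # any ServiceAccount / ClusterRoleBinding objects it needs (RBAC ready):
    "https://example.com/my-custom-addon.yaml"
  ]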

Tagging

If you need to tag resources created by your Kubernetes cluster (EBS volumes, ELB load balancers, etc.), check this AWS Lambda function, which can do the tagging.

Kubernetes version

The intent for this module is to use it for development and testing against the latest version of Kubernetes. As such, the primary goal for this module is to ensure it works with whatever is the latest version of Kubernetes supported by Minikube. This includes provisioning the cluster as well as setting up networking and any of the supported add-ons. This module might, but is not guaranteed to, also work with other versions of Kubernetes. At your own discretion, you can use the kubernetes_version variable to specify a different version of Kubernetes for the Minikube cluster.
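
For example (a sketch; the version value is only illustrative and must be one that kubeadm can install):

module "minikube" {
  source = "github.com/scholzj/terraform-aws-minikube"

  # ... other variables as in the example above ...

  # Illustrative value -- pin only versions that kubeadm supports
  kubernetes_version = "1.30.0"
}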

terraform-aws-minikube's People

Contributors

mludvig, scholzj, st3v


terraform-aws-minikube's Issues

ssh_public_key is not working with .pem file

Hi

Thanks for sharing the module.
I am using a .pem file on my Windows desktop for the parameter below, but it is not working.

ssh_public_key = "~/my-minikube.pem"

Is this the right approach for using this module, or is there another way? Can you please share the details?

Thank you
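
One likely explanation, not confirmed in this thread: a .pem file normally holds the private key, while ssh_public_key expects the matching public key. Assuming a standard RSA key, the public half can be derived like this:

# Derive the public key from the private .pem (assumes an RSA key)
ssh-keygen -y -f ~/my-minikube.pem > ~/my-minikube.pub

# then point the module at the public key instead:
#   ssh_public_key = "~/my-minikube.pub"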

Existing aws_key_pair cannot be used

While the option ssh_public_key works in many cases, sometimes it makes sense for the key to come from data.aws_key_pair, resource.aws_key_pair, or be hard-coded by name.

Can an option be added (key_name) that would allow users to have their own aws_key_pair outside of this module instead of using the internal resource "aws_key_pair" "minikube_keypair"?
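
A sketch of what the requested option might look like from the caller's side (key_name is the proposed variable and does not exist in the module today):

# Hypothetical usage of the proposed option -- key_name is not implemented yet
data "aws_key_pair" "existing" {
  key_name = "my-existing-key"   # placeholder name
}

module "minikube" {
  source = "github.com/scholzj/terraform-aws-minikube"

  # ... other variables ...

  key_name = data.aws_key_pair.existing.key_name   # proposed option
}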

Kubernetes doesn't completely start up after reboot

Hi, thanks for providing this template! It works great right after deployment but k8s seems to be half-broken after a simple instance reboot. I tried a couple of deployments and it's perfectly reproducible.

After deployment (before reboot)

There are 20 containers running:

root@ip-172-30-2-247 ~ # docker ps
CONTAINER ID     IMAGE                            COMMAND                  CREATED          STATUS           PORTS        NAMES
72772b50ff39     eb516548c180                     "/coredns -conf /e..."   4 minutes ago    Up 4 minutes                  k8s_coredns_coredns-fb8b8dccf-46ttk_kube-system_495e0b66-87f1-11e9-af26-0a1ee8c58078_0
74071e694ec0     eb516548c180                     "/coredns -conf /e..."   4 minutes ago    Up 4 minutes                  k8s_coredns_coredns-fb8b8dccf-b9mhj_kube-system_4963984e-87f1-11e9-af26-0a1ee8c58078_0
3ae3ff9e9c73     k8s.gcr.io/pause:3.1             "/pause"                 4 minutes ago    Up 4 minutes                  k8s_POD_coredns-fb8b8dccf-b9mhj_kube-system_4963984e-87f1-11e9-af26-0a1ee8c58078_43
df8b0eef7ec3     k8s.gcr.io/pause:3.1             "/pause"                 4 minutes ago    Up 4 minutes                  k8s_POD_coredns-fb8b8dccf-46ttk_kube-system_495e0b66-87f1-11e9-af26-0a1ee8c58078_41
623c4cc86d76     b4d7c4247c3a                     "start_runit"            4 minutes ago    Up 4 minutes                  k8s_calico-node_calico-node-8rckj_kube-system_4958e121-87f1-11e9-af26-0a1ee8c58078_2
8106f5622be7     0bd1f99c7034                     "/usr/bin/kube-con..."   4 minutes ago    Up 4 minutes                  k8s_calico-kube-controllers_calico-kube-controllers-8649d847c4-29xss_kube-system_495de6cf-87f1-11e9-af26-0a1ee8c58078_0
7d70242f82fa     quay.io/coreos/etcd@sha256:...   "/usr/local/bin/et..."   4 minutes ago    Up 4 minutes                  k8s_calico-etcd_calico-etcd-qzbks_kube-system_55247f87-87f1-11e9-af26-0a1ee8c58078_0
49475b87dfe6     k8s.gcr.io/pause:3.1             "/pause"                 4 minutes ago    Up 4 minutes                  k8s_POD_calico-etcd-qzbks_kube-system_55247f87-87f1-11e9-af26-0a1ee8c58078_0
ceb5a5d082bb     k8s.gcr.io/pause:3.1             "/pause"                 4 minutes ago    Up 4 minutes                  k8s_POD_calico-kube-controllers-8649d847c4-29xss_kube-system_495de6cf-87f1-11e9-af26-0a1ee8c58078_0
9529052099c3     20a2d7035165                     "/usr/local/bin/ku..."   5 minutes ago    Up 5 minutes                  k8s_kube-proxy_kube-proxy-fx548_kube-system_4958d277-87f1-11e9-af26-0a1ee8c58078_0
aea199144e04     k8s.gcr.io/pause:3.1             "/pause"                 5 minutes ago    Up 5 minutes                  k8s_POD_calico-node-8rckj_kube-system_4958e121-87f1-11e9-af26-0a1ee8c58078_0
e9f5fe880890     k8s.gcr.io/pause:3.1             "/pause"                 5 minutes ago    Up 5 minutes                  k8s_POD_kube-proxy-fx548_kube-system_4958d277-87f1-11e9-af26-0a1ee8c58078_0
a40f97c206e8     2c4adeb21b4f                     "etcd --advertise-..."   5 minutes ago    Up 5 minutes                  k8s_etcd_etcd-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_cd3d6cd87a522a8d47f9f84a29a21085_0
f07f1230fcbe     8931473d5bdb                     "kube-scheduler --..."   5 minutes ago    Up 5 minutes                  k8s_kube-scheduler_kube-scheduler-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
a8bb045ae863     cfaa4ad74c37                     "kube-apiserver --..."   5 minutes ago    Up 5 minutes                  k8s_kube-apiserver_kube-apiserver-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_5d623fb4138e843edbe51bb363cb7fdc_0
8557fa2cac2c     efb3887b411d                     "kube-controller-m..."   5 minutes ago    Up 5 minutes                  k8s_kube-controller-manager_kube-controller-manager-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_4d4b59c11383339b1dbc6957db2b1aac_0
cec3e482bb98     k8s.gcr.io/pause:3.1             "/pause"                 5 minutes ago    Up 5 minutes                  k8s_POD_kube-controller-manager-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_4d4b59c11383339b1dbc6957db2b1aac_0
25a629730365     k8s.gcr.io/pause:3.1             "/pause"                 5 minutes ago    Up 5 minutes                  k8s_POD_kube-scheduler-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
25618cb3d3b5     k8s.gcr.io/pause:3.1             "/pause"                 5 minutes ago    Up 5 minutes                  k8s_POD_etcd-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_cd3d6cd87a522a8d47f9f84a29a21085_0
a45912a2e9b0     k8s.gcr.io/pause:3.1             "/pause"                 5 minutes ago    Up 5 minutes                  k8s_POD_kube-apiserver-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_5d623fb4138e843edbe51bb363cb7fdc_

And kubectl works:

root@ip-172-30-2-247 ~ # kubectl --kubeconfig /etc/kubernetes/admin.conf get all --all-namespaces
NAMESPACE     NAME                                                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-etcd-qzbks                                                         1/1     Running   0          2m30s
kube-system   pod/calico-kube-controllers-8649d847c4-29xss                                  1/1     Running   1          2m50s
kube-system   pod/calico-node-8rckj                                                         1/1     Running   2          2m50s
kube-system   pod/coredns-fb8b8dccf-46ttk                                                   1/1     Running   0          2m50s
kube-system   pod/coredns-fb8b8dccf-b9mhj                                                   1/1     Running   0          2m50s
kube-system   pod/etcd-ip-172-30-2-247.ap-southeast-2.compute.internal                      1/1     Running   0          117s
kube-system   pod/kube-apiserver-ip-172-30-2-247.ap-southeast-2.compute.internal            1/1     Running   0          101s
kube-system   pod/kube-controller-manager-ip-172-30-2-247.ap-southeast-2.compute.internal   1/1     Running   0          2m1s
kube-system   pod/kube-proxy-fx548                                                          1/1     Running   0          2m50s
kube-system   pod/kube-scheduler-ip-172-30-2-247.ap-southeast-2.compute.internal            1/1     Running   0          106s

NAMESPACE     NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP                  2m57s
kube-system   service/calico-etcd   ClusterIP   10.96.232.136   <none>        6666/TCP                 2m55s
kube-system   service/kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   2m56s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system   daemonset.apps/calico-etcd   1         1         1       1            1           <none>                        2m55s
kube-system   daemonset.apps/calico-node   1         1         1       1            1           beta.kubernetes.io/os=linux   2m55s
kube-system   daemonset.apps/kube-proxy    1         1         1       1            1           <none>                        2m55s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           2m55s
kube-system   deployment.apps/coredns                   2/2     2            2           2m56s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-8649d847c4   1         1         1       2m50s
kube-system   replicaset.apps/coredns-fb8b8dccf                    2         2         2       2m50s

After reboot

Only 6 containers come up:

root@ip-172-30-2-247 ~ # docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
ae9eadd98658        cfaa4ad74c37           "kube-apiserver --..."   19 seconds ago      Up 19 seconds                           k8s_kube-apiserver_kube-apiserver-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_5d623fb4138e843edbe51bb363cb7fdc_7
5fa7ba84ad9e        efb3887b411d           "kube-controller-m..."   13 minutes ago      Up 13 minutes                           k8s_kube-controller-manager_kube-controller-manager-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_4d4b59c11383339b1dbc6957db2b1aac_1
ef7970cc7a31        8931473d5bdb           "kube-scheduler --..."   13 minutes ago      Up 13 minutes                           k8s_kube-scheduler_kube-scheduler-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_f44110a0ca540009109bfc32a7eb0baa_1
517eb8dc2228        k8s.gcr.io/pause:3.1   "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD_kube-controller-manager-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_4d4b59c11383339b1dbc6957db2b1aac_1
e321d41d616f        k8s.gcr.io/pause:3.1   "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD_kube-apiserver-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_5d623fb4138e843edbe51bb363cb7fdc_1
76d753980f61        k8s.gcr.io/pause:3.1   "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD_kube-scheduler-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_f44110a0ca540009109bfc32a7eb0baa_1
009531cebfa0        k8s.gcr.io/pause:3.1   "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD_etcd-ip-172-30-2-247.ap-southeast-2.compute.internal_kube-system_cd3d6cd87a522a8d47f9f84a29a21085_1

And kubectl doesn't work:

root@ip-172-30-2-247 ~ # kubectl --kubeconfig /etc/kubernetes/admin.conf get all
The connection to the server 172.30.2.247:6443 was refused - did you specify the right host or port?

I gave it more than enough time to come up but still no go. It looks like the issue is with the kube-apiserver that keeps starting and failing over and over again.

Unfortunately I'm not a big kubernetes expert so I don't know where to look to fix it.

Any chance you could have a look at it?

Thanks!

Michael
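
Generic debugging steps that could narrow this down (not from the original thread; the container ID is a placeholder):

# Inspect kubelet errors after the reboot
journalctl -u kubelet --no-pager | tail -n 50

# Find the crash-looping kube-apiserver container and read its logs
docker ps -a | grep kube-apiserver
docker logs <container-id>   # placeholder: ID from the previous command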

Requires a EULA Agreement

Running this module I get the following:

Error: creating EC2 Instance: OptInRequired: In order to use this AWS Marketplace product you need to accept terms and subscribe. To do so please visit https://aws.amazon.com/marketplace/pp?sku=cvugziknvmxgqna9noibqnnsy
│ 	status code: 401, request id: 6784aa53-790a-470c-b3a8-28ebd23a433c
│
│   with module.minikube.aws_instance.minikube,
│   on .terraform/modules/minikube/main.tf line 155, in resource "aws_instance" "minikube":
│  155: resource "aws_instance" "minikube" {

Is there any alternative supported image that does not require agreeing to a EULA?
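
One workaround (an assumption, not confirmed by the maintainer) is to accept the subscription once in the AWS Marketplace console, or to point ami_image_id at a CentOS-7-compatible image you already have access to:

# Sketch: bypass the Marketplace default by supplying your own image
module "minikube" {
  source = "github.com/scholzj/terraform-aws-minikube"

  # ... other variables ...

  ami_image_id = "ami-xxxxxxxxxxxxxxxxx"   # placeholder for your own AMI
}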

After reboot of instance pods are stuck in terminating

Thanks so much for your work on this module, it is in active use at our startup in the creation of ephemeral development environments.

We have our instances set up to restart on a schedule. I notice that after they start back up, when we then go to destroy the instance via Terraform, the pods are stuck in terminating.

It appears that containerd is running into this issue (containerd/containerd#3667), which is specific to CentOS 7. The resolution is to enable /proc/sys/fs/may_detach_mounts on startup.

The code I am using to do this is the following:

init-aws-minikube.sh

########################################
########################################
# Set may detach mounts on startup for
# clean termination
########################################
########################################
cat <<EOF | tee /tmp/enable-may-detach-mounts.sh
#!/bin/bash
sudo /bin/sh -c 'echo 1 > /proc/sys/fs/may_detach_mounts'
EOF

chmod +x /tmp/enable-may-detach-mounts.sh

cat <<EOF | tee /etc/systemd/system/detachmounts.service
[Unit]
Description=Set detach mounts to 1
After=network.target
[Service]
Type=simple
ExecStart=/tmp/enable-may-detach-mounts.sh
TimeoutStartSec=0
[Install]
WantedBy=default.target
EOF

systemctl daemon-reload
systemctl enable detachmounts.service
systemctl start detachmounts.service

I'd be happy to contribute a PR to add this if it looks like this is an adequate solution.

Thanks!
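
A simpler variant of the same fix (an alternative sketch, not what the reporter tested) persists the flag via sysctl.d so no extra systemd unit is needed:

# Alternative sketch: set fs.may_detach_mounts (a CentOS 7 / RHEL 7 kernel
# sysctl) via sysctl.d instead of a custom service
echo "fs.may_detach_mounts = 1" > /etc/sysctl.d/99-may-detach-mounts.conf
sysctl --system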
