This module can deploy a k8s cluster using kops as the underlying management tool. It can perform rolling updates on a cluster; however, not all parameters should be updated in this manner, in which case a classic blue/green switch should be performed instead (this module uses Terraform workspaces, which facilitates that workflow).

This is experimental, but I am fairly comfortable with the behavior of the cluster creation (`terraform apply -target=module.kops_cluster`) and destruction (`terraform destroy -target=module.kops_cluster`) processes. Cluster updates, however, are very buggy, and there are a few exceptions I have not quite fully understood, so for now limit your infrastructure updates to upgrades only (e.g. t2.micro -> t2.medium) and to only the variables I've tagged `#UPDATEABLE` in the terraform.tfvars file.

You can also produce a `(cluster_name).yaml` file by setting `deploy_cluster = "false"` and `dry_run = "true"`. Change `dry_run = "false"` to generate Terraform files for the cluster without applying them. A tmp folder is created to house all the `local_file` resources created during execution of a module. The update process was adopted from kops via Terraform.

To get this working you'll need to create a file for the tiller RBAC (`{path_root}/tiller-rbac/rbac-config.yaml`), which should be used to define the cluster role binding for tiller. It is purposely not defined in the prerequisites. Creating YAML or Terraform files generates resources in anticipation of the creation of the cluster, so run the destroy command to delete all cluster resources once done.
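The tiller `rbac-config.yaml` is not provided by this repo, but a minimal sketch follows the standard Helm v2 pattern shown below. Binding tiller to `cluster-admin` is the common default for testing; tighten the role to match your own policy before using this in anything real.

```yaml
# {path_root}/tiller-rbac/rbac-config.yaml (sketch, adjust to your policy)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```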
- Domain Name: This is k8s, so you'll need your own domain name.
- Domain Certificate: Create a domain certificate for the domain name you'll use for the cluster. Do this ahead of time, as it takes some time to validate. Request a Public Certificate
- Terraform S3 Bucket: Create an S3 bucket to house your Terraform state files. Remote state backend
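A minimal sketch of the S3 remote state backend configuration; the bucket name, key, and region below are placeholders, not values from this repo:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # the bucket created above
    key    = "kops/terraform.tfstate"
    region = "us-east-1"
  }
}
```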
- Kubectl
- Helm
- Kops
- Terraform
- certs: Used to create private and public keys.
- kops: Heart of all kops-related actions.
- networking: Currently only a VPC abstraction.
- s3: Bucket to hold the kops state.
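A rough sketch of how a root module might wire these together; the module names other than `kops_cluster` (which the targeted apply/destroy commands reference) and all variable names are assumptions for illustration, not the repo's actual code:

```hcl
# Hypothetical root-module composition (names are illustrative)
module "networking" {
  source = "./networking"
}

module "kops_state" {
  source = "./s3"
}

module "kops_cluster" {
  source         = "./kops"
  vpc_id         = module.networking.vpc_id
  state_bucket   = module.kops_state.bucket_name
  deploy_cluster = var.deploy_cluster
  dry_run        = var.dry_run
}
```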
All charts are baseline configs; nothing special.
- consul
- cluster-autoscaler
- envoy
- vault
- fluentd_elasticsearch
- k8s_dashboard
- metrics_server
- rook
Initialize Terraform
make init
Create Kops IAM resources
make create_user
Create YAML for Cluster
make create_yaml
Create Terraform For Cluster
make create_terraform
Create Cluster
make create_cluster (module=name)
Create Utilities Services
make create_utilities (module=name)
Update Cluster
make update_cluster (module=kops)
Plan Cluster
make plan_cluster (module=name)
Plan Kops IAM
make plan_user
Plan Utilities
make plan_utilities (module=name)
Delete Cluster
make destroy_cluster (module=name)
Delete Kops IAM
make destroy_user
Delete Utilities
make destroy_utilities (module=name)
Log in to a cluster instance
Please note: the SSH user depends on the image; CoreOS (user `core`) is used here.
ssh core@{api or bastion}.{terraform.workspace}.{kops_cluster_name} -i {project.root}/keys/{keypair_name}.pem
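The placeholders above expand as sketched below; the workspace, cluster name, and keypair values here are made-up examples, not values from this repo:

```shell
# Hypothetical expansion of the login command's placeholders
workspace="blue"                    # terraform.workspace
kops_cluster_name="k8s.example.com" # your cluster's domain
keypair_name="mykeypair"

host="bastion.${workspace}.${kops_cluster_name}"  # or api.… for the masters
echo "ssh core@${host} -i keys/${keypair_name}.pem"
```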
- DONE: BUG: Prevent cluster destroy from running when a command other than destroy is requested. Seems to be fixed; will continue to observe behavior.
- DONE: Refactor kops_actions module; every tpl should be its own module. :)
- Integrate autoscaler and istio into the cluster creation process
- DONE: Redesign the cluster creation process (yaml -> terraform -> Helm -> Istio???)
- Implement aws-iam-authenticator
- Complete networking abstraction from auto-generated Terraform code
- DONE: Enable addon for cluster-autoscaler; enabled for worker nodes.
- Test updating remaining parameters
- Determine how to include non-kops-related k8s config options (e.g. maxSize or minSize) in the cluster creation process. Should this occur as an update immediately after cluster creation??? Solved for kubelet flags; will add for others as needed.
- DONE: Added Makefile :)
- DONE: Introduce additional k8s utility services (envoy, vault)
- Further parameterize helm install utilities
- Resolve any associated bugs with updating the cluster
- DONE: R/v pipelines for unnecessary coding for kops-cluster
- Document k8s utilities
- Persistent volume integration
- DONE: Move Consul to Helm provider
- Deploy a LM Collector within the cluster
- Create a Lambda function to back up and snapshot the etcd volumes for recovery
- Create a DR procedure
- Install end-user apps and figure out DNS switching for blue/green deployments
- DONE: Added Rook, so you need to add Ceph and set up proper distributed storage
- Determine an approach for standardizing liveness and readiness probes
- DONE: Calico deep dive. Played around with setting up network policies; nothing like the real world to test your skills :)
- https://github.com/bitnami-labs/kubewatch
- Research issue kubernetes/kops#834 - directly relates to ACM+ELB cluster creation. ISSUE resolved on kops master branch.
- kubernetes/kops#5414 - related pull request
- https://github.com/ramitsurana/awesome-kubernetes
- hashicorp/terraform#18026
- hashicorp/terraform#13549
- Interesting EKS vs KOPS chat: kubernetes/kops#5001
- kubernetes/kops#5757 - relates to ACM+ELB cluster creation with Terraform. ISSUE resolved on kops master branch.