
kubernetes-on-openstack's Introduction

This repository is in maintenance mode and will be archived in the near future

We moved this repository into maintenance mode for the following reasons:

  • This was an MVP for an easy Kubernetes installation based on community tools like kubeadm and terraform
  • The OpenStack integration is still brittle (e.g. SecurityGroups are not created in a reliable fashion)
  • Critical features like updates are missing
  • The setup is very opinionated (but simple)
  • There are other tools to bootstrap a Kubernetes cluster on OpenStack (even if these tools are often more complex)

Kubernetes on OpenStack

For an alternative, take a look at kube-spray, kubeone, or one of the other Kubernetes bootstrapping tools.

TL;DR: This repository deploys an opinionated Kubernetes cluster on OpenStack with kubeadm and terraform.

Using the module

After cloning or downloading the repository, follow these steps to get your cluster up and running.

Customize settings

Take a look at the example provided in the example folder. It contains three files: main.tf, provider.tf, and variables.tf. In main.tf, customize settings like master_data_volume_size or node_data_volume_size to your needs; note that you may have to stay below quotas set by your OpenStack admin. Pick an instance flavor that has at least two vCPUs, otherwise kubeadm will fail during its pre-flight checks.

We assume example to be your working directory for all following commands.

Install keystone auth plugin

The Kubernetes cluster will use Keystone authentication (over a WebHook). For more details, look at the official docs, or just use the quick start:

VERSION=1.13.1
OS=$(uname | tr '[:upper:]' '[:lower:]')
curl -sLO "https://github.com/kubernetes/cloud-provider-openstack/releases/download/${VERSION}/cloud-provider-openstack-${VERSION}-${OS}-amd64.tar.gz"
tar xfz "cloud-provider-openstack-${VERSION}-${OS}-amd64.tar.gz"
rm "cloud-provider-openstack-${VERSION}-${OS}-amd64.tar.gz"

mkdir -p "$(pwd)/bin"
cp "${OS}-amd64/client-keystone-auth" "$(pwd)/bin/"
rm -rf "${OS}-amd64"
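For reference, the cloud-provider-openstack docs wire this binary into kubectl through an exec credential plugin entry in the kubeconfig. A minimal sketch of such a user entry (the user name openstackuser and the relative command path are illustrative, not what this module generates):

```yaml
# Hypothetical kubeconfig user entry; adjust command path and user name
# to match your setup.
users:
- name: openstackuser
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: ./bin/client-keystone-auth
```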

Reference the module correctly

As long as you keep the example folder inside the module repository, the reference source = "../" in main.tf works. For a cleaner setup, you can also extract the example folder and put it somewhere else; just make sure you change the source setting accordingly. You can also reference the GitHub repository itself like so:

   module "my_cluster" {
     source = "git::https://github.com/inovex/kubernetes-on-openstack.git?ref=v1.0.0"

     # ...
   }

If you do it that way, make sure to run

terraform get --update

before running any other terraform commands.

Set your credentials

There are multiple ways to authenticate with your OpenStack provider, each with its own pros and cons. If you want to know more, check out this blog post about OpenStack credential handling for terraform. You can choose any of them, as long as the terraform variables auth_url, username and password are set explicitly as terraform variables. This is required because they are passed down to the OpenStack Cloud Controller running inside the provisioned Kubernetes cluster. In a team setup, those should be dedicated service account credentials. The easiest way to get started is to create a terraform.tfvars file in the example folder. If you start working in a team setup, you might want to check out the method using clouds-public.yaml, clouds.yaml and secure.yaml files described in the aforementioned blog post.
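As a sketch, a minimal terraform.tfvars in the example folder could look like the following. Only auth_url, username and password are the variables named above; all values are placeholders:

```hcl
# Sketch of example/terraform.tfvars -- values are placeholders.
# Prefer dedicated service account credentials in a team setup.
auth_url = "https://keystone.example.com:5000/v3"
username = "k8s-service-account"
password = "change-me"
```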

Execute terraform

Initialize the folder and run plan:

terraform init
terraform plan

Now you can create the cluster by running

terraform apply

It takes some time for the nodes to be fully configured. After running terraform apply, there will be a kubeconfig file configured for the newly created cluster. The --insecure-skip-tls-verify=true flag in there is needed because we use the auto-generated certificates of kubeadm. There are possible workarounds to remove the flag (e.g. fetch the CA from the Kubernetes master, see below). Keep in mind: by default, all users in the (OpenStack) project will have cluster-admin rights. You can access the cluster via

kubectl --kubeconfig kubeconfig get nodes

It is also possible to set the KUBECONFIG environment variable to reference the location of the kubeconfig file created by terraform, or to copy its contents to your .kube settings. Keep in mind, however, that the kubeconfig changes often because of Floating IPs.
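For example, instead of passing --kubeconfig on every call, the variable can be set for the current shell only (a sketch; the path assumes example is still your working directory):

```shell
# Point kubectl at the terraform-generated kubeconfig for this shell only,
# so the frequently changing file is never merged into ~/.kube/config.
export KUBECONFIG="$(pwd)/kubeconfig"
```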

Test the OpenStack integration

To create a simple deployment, run

kubectl --kubeconfig kubeconfig create deployment nginx --image=nginx
kubectl --kubeconfig kubeconfig expose deployment nginx --port=80

Access nodes

In the current setup the master node can be reached by

ssh ubuntu@<ip>

and can also be used as jumphost to access the worker nodes:

ssh -J ubuntu@<ip> ubuntu@node-0
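The jump-host setup can also be made persistent in ~/.ssh/config (a sketch; the k8s-master alias is illustrative, and <ip> stands for the master's Floating IP as above):

```
Host k8s-master
    HostName <ip>
    User ubuntu

# Worker node names (node-0, node-1, ...) are resolved from the jump host.
Host node-*
    User ubuntu
    ProxyJump k8s-master
```

With this in place, ssh node-0 is equivalent to the -J invocation above.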

Fetch cluster CA

To avoid using insecure-skip-tls-verify=true, you can fetch the cluster CA:

export MASTER_IP=""
# ${CLUSTER_NAME} must match the name provided in the terraform.tfvars
export CLUSTER_NAME=""
export CLUSTER_CA=$(curl -sk "https://${MASTER_IP}:6443/api/v1/namespaces/kube-public/configmaps/cluster-info" | jq -r '.data.kubeconfig' | grep -o 'certificate-authority-data:.*' | awk '{print $2}')

kubectl --kubeconfig ./kubeconfig config set clusters.${CLUSTER_NAME}.certificate-authority-data ${CLUSTER_CA}
kubectl --kubeconfig ./kubeconfig config set clusters.${CLUSTER_NAME}.insecure-skip-tls-verify false

unset CLUSTER_CA
unset MASTER_IP
unset CLUSTER_NAME

Automatically deployed components

Shared environments

Currently blocked

In order to create a shared Kubernetes cluster for multiple users, we can use application credentials:

openstack --os-cloud <cloud> --os-project-id=<project-id> application credential create --restricted kubernetes

More docs will follow when the feature is merged.

Notes

If you want to use containerd version 1.2.2, you will probably face this containerd issue when using images from quay.io.

kubernetes-on-openstack's People

Contributors

dschmidtdev, endyman, hikhvar, johscheuer, jscheuermann, simonkienzler


kubernetes-on-openstack's Issues

Incompatible with other OpenStack providers

The current setup is not compatible with other openstack providers because the public network name and the image visibility are hardcoded. The latter even requires a user to have the privileges to create publicly visible images.

Don't use latest version of Image

Currently we use the latest available version of the VM image. This causes a complete recreation of the cluster if the terraform script is run after a new version of this base image has been released. This will ultimately cause downtime in production clusters, since the cluster will be empty after recreation. Furthermore, the nodes are not updated in a rolling fashion.
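One way to avoid the implicit latest lookup would be to pin the image to an exact name via a data source (a sketch; the image name is hypothetical and this is not part of the module today):

```hcl
# Sketch: pin the VM base image to an exact name instead of "latest",
# so a new image release does not force cluster recreation.
data "openstack_images_image_v2" "base" {
  name        = "k8s-base-1.13.2" # hypothetical exact image name
  most_recent = false
}
```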

clouds.yaml not usable for specifying auth_url

With the given example setup, it is not possible for me to use clouds.yaml to authenticate against OpenStack using terraform:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage.

Error: Error refreshing state: 1 error(s) occurred:
* provider.openstack: One of 'auth_url' or 'cloud' must be specified

However, sourcing the OpenStack RC file works fine for me, as described here:
https://github.com/inovex/kubernetes-on-openstack/blob/master/example/Readme.md

kube-apiserver does not respond when bootstrapping cluster

(Two screenshots attached, taken 2019-04-27.)

netstat on the master instance shows that kube-apiserver is listening, but on tcp6 (not sure if this can be an issue); cloud-init seems to complete successfully on the master instance.
Curling kube-apiserver from the nodes does not work either.

Setup fails due to version constraint with k8s-cni

Related to kubernetes/kubernetes#75683

When installing kubeadm and kubelet, the cloud-init script fails because of a version constraint with kubernetes-cni as of today:

   kubeadm : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed
   kubelet : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed

A solution would be to explicitly pin the kubernetes-cni version to be used. I will provide a PR for this.

Master Secgroups need two terraform applies to be set

I am experiencing the issue that after a terraform apply, the security group of the master node is set to the default secgroup instead of the two specified secgroups.
On a second terraform apply, terraform recognizes this and sets the correct secgroups.

Used versions:

Terraform v0.11.11
+ provider.local v1.1.0
+ provider.openstack v1.16.0
+ provider.random v2.0.0
+ provider.template v2.1.0

Image (architecture/docs)

We should create a minimal diagram of how the setup works and which resources are created (not a terraform graph).

LoadBalancer type Service does not seem to work properly

When exposing a Pod externally with a Service of type LoadBalancer, curling the External-IP fails with curl: (52) Empty reply from server, while curling the internal Cluster-IP of the Service works.

kubectl shows that the Service exists and an ExternalIP is associated:

NAME           TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
nginxservice   LoadBalancer   10.96.239.133   x.x.x.x   80:30512/TCP   86s

To rule out errors in my resource configs, I tried this example from the official OpenStack docs:

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
  type: LoadBalancer

When checking the automatically provisioned load balancer, I found out that the load balancer members were in operating status "ERROR", probably due to failing health monitoring checks.
After deleting the health monitor, load balancer members reach operating status "ONLINE", but the External-IP of the service still cannot be curled successfully.

+---------------------+--------------------------------------------------------------------------------------+
| Field               | Value                                                                                |
+---------------------+--------------------------------------------------------------------------------------+
| admin_state_up      | True                                                                                 |
| created_at          | 2019-05-24T09:24:15.139226                                                           |
| description         | Kubernetes external service ab267c9947e0511e9a6f5fa163e13d14 from cluster kubernetes |
| flavor_id           |                                                                                      |
| id                  | 6896bb42-b40e-4d49-be99-f3484ed1daed                                                 |
| listeners           | fe8b074c-d08a-464f-b19f-4e00d6729b1c                                                 |
| name                | ab267c9947e0511e9a6f5fa163e13d14                                                     |
| operating_status    | ERROR                                                                                |
| pools               | c8779882-bb3a-4b29-bb6e-e611e6f1420b                                                 |
| project_id          | b71e73698acd4e64bf72962e8fcd711f                                                     |
| provider            | amphora                                                                              |
| provisioning_status | ACTIVE                                                                               |
| updated_at          | 2019-05-24T09:25:33.693805                                                           |
| vip_address         | 172.16.0.9                                                                           |
| vip_network_id      | a0bb965c-25e9-4bd2-a748-404d0fb8ee3c                                                 |
| vip_port_id         | e732daf2-5502-4cc5-9dbb-3aba3d5e2c1e                                                 |
| vip_qos_policy_id   | None                                                                                 |
| vip_subnet_id       | 8c97a9e5-fa11-4b6a-8364-529e7387c3b5                                                 |
+---------------------+--------------------------------------------------------------------------------------+

Kubernetes Cluster Setup:

module "example-cluster" {
  source = "git::https://github.com/inovex/kubernetes-on-openstack.git"

  auth_url         = "${var.auth_url}"
  cluster_name     = "example-cluster"
  username         = "${var.username}"
  password         = "${var.password}"
  domain_name      = "${data.openstack_identity_auth_scope_v3.scope.project_domain_name}"
  user_domain_name = "${data.openstack_identity_auth_scope_v3.scope.user_domain_name}"
  tenant_name      = "${data.openstack_identity_auth_scope_v3.scope.project_name}"
  project_id       = "${data.openstack_identity_auth_scope_v3.scope.project_id}"

  kubernetes_version        = "1.13.2"
  kubernetes_cni_version    = "0.6.0"
  containerd_version        = "1.2.4"
  cluster_network_router_id = "${openstack_networking_router_v2.router.id}"
  node_count                = "1"
  flavor                    = "c4.large"
  master_data_volume_size   = "10"
  node_data_volume_size     = "10"
  public_network_name       = "public"
}
