
helm-charts's Introduction

kube-vip

High Availability and Load-Balancing


Overview

Kubernetes Virtual IP and Load-Balancer for both control plane and Kubernetes services

The idea behind kube-vip is to provide a small, self-contained, highly available option for all environments, especially:

  • Bare-Metal
  • Edge (arm / Raspberry PI)
  • Virtualisation
  • Pretty much anywhere else :)

NOTE: All documentation of both usage and architecture is now available at https://kube-vip.io.

Features

kube-vip was originally created to provide an HA solution for the Kubernetes control plane; over time it has evolved to incorporate that same functionality into Kubernetes Services of type LoadBalancer.

  • VIP addresses can be either IPv4 or IPv6
  • Control Plane with ARP (Layer 2) or BGP (Layer 3)
  • Control Plane using either leader election or Raft
  • Control Plane HA with kubeadm (static Pods)
  • Control Plane HA with K3s and others (DaemonSets)
  • Service LoadBalancer using leader election for ARP (Layer 2)
  • Service LoadBalancer using multiple nodes with BGP
  • Service LoadBalancer address pools per namespace or global
  • Service LoadBalancer addresses via DHCP on the existing network
  • Service LoadBalancer address exposure to the gateway via UPnP
  • ... manifest generation, vendor API integrations and many more...
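
As an illustrative sketch of how these modes map onto the helm chart's values (the key names are taken from the env block that appears in the chart values quoted in the issues further down this page; this is not a complete installation):

    env:
      cp_enable: "true"          # enable the control-plane VIP
      svc_enable: "true"         # enable Service type LoadBalancer handling
      vip_arp: "true"            # ARP (Layer 2) mode
      vip_leaderelection: "true" # use leader election for the VIP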

Why?

The purpose of kube-vip is to simplify the building of HA Kubernetes clusters, which at this time can involve several components and configurations that all need to be managed separately. This was blogged about in detail by thebsdbox here -> https://thebsdbox.co.uk/2020/01/02/Designing-Building-HA-bare-metal-Kubernetes-cluster/#Networking-load-balancing.

Alternative HA Options

kube-vip provides a floating (virtual) IP address for your Kubernetes cluster as well as load-balancing of incoming traffic across the control-plane replicas. At the current time, replicating this functionality requires a minimum of two pieces of tooling:

VIP:

  • Keepalived
  • UCARP
  • Hardware Load-balancer (functionality differs per vendor)

LoadBalancing:

  • HAProxy
  • Nginx
  • Hardware Load-balancer (functionality differs per vendor)

All of these require a separate layer of configuration and, in some infrastructures, multiple teams to implement. Also, when considering the software components, they may require packaging into containers, and if they're pre-packaged then security and transparency may be an issue. Finally, in edge environments we may have limited room for hardware (no hardware load-balancer), or packaged solutions for the correct architectures (e.g. ARM) might not exist. Luckily, with kube-vip being written in Go, it's small(ish) and easy to build for multiple architectures, with the added security benefit of being the only thing needed in the container.

Troubleshooting and Feedback

Please raise issues on the GitHub repository and, as mentioned above, check the documentation at https://kube-vip.io.

Contributing

Thanks for taking the time to join our community and start contributing! We welcome pull requests. Feel free to dig through the issues and jump in.

⚠️ This project has issues compiling on macOS; please compile it on a Linux distribution.


helm-charts's People

Contributors

davidspek, dgiebert, elmariofredo, geraldwuhoo, interdependence, jkremser, lazyfrosch, lilhermit, marthydavid, maxaudron, olivier-jobert, pysen, sea-you, sergeiwaigant, thebsdbox, yaocw2020


helm-charts's Issues

No way to set routerID to podIP using kube-vip Chart

According to the docs it is recommended to use the podIP as bgp_routerid:

- name: bgp_routerid
  valueFrom:
    fieldRef:
      fieldPath: status.podIP

Unfortunately, due to the way env values are templated, it doesn't seem possible to set a valueFrom key when deploying kube-vip using helm.
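
For what it's worth, the chart's values.yaml in later versions exposes an envValueFrom map (visible in the values dump quoted in another issue on this page). Assuming it templates each entry into a valueFrom block (a hypothetical schema; check the chart's templates before relying on it), the desired setting might look like:

    envValueFrom:
      bgp_routerid:
        fieldRef:
          fieldPath: status.podIP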

Allow configuring command-line args

In a scenario with multiple kube-vip instances running, we have to customize the Prometheus HTTP listener with the --prometheusHTTPServer parameter. The Helm chart doesn't allow specifying extraArgs for the application.

Sample YAML configuration:

    spec:
      containers:
      - args:
        - manager
        - --prometheusHTTPServer=0.0.0.0:2113
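
A hypothetical values schema for this could look like the following (extraArgs does not exist in the chart at the time of writing; the name and shape here are assumptions), with the template appending the entries to the container args after manager:

    extraArgs:
      - --prometheusHTTPServer=0.0.0.0:2113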

Missing documentation, helm chart doesn't work

Is there any documentation for this Helm chart? I tried running it as-is on a bare-metal k3s cluster, but it does not work at all with the given defaults. (Is there any CI running for new releases?) The options in values.yaml are also vague as to what values they accept and what they do.

Cloud provider helm deployment needs updating - affinity missing? No HA

In attempting to deploy the helm chart for kube-vip-cloud-provider, I noticed podAntiAffinity wasn't picked up from the values file when deploying an HA setup across multiple nodes. This ended up with all pods deployed to the same node, defeating the HA setup.

The chart's Deployment template does not seem to pick up the affinity section from the values file.

This differs from what is deployed by the plain manifest file.

Expectation: if podAntiAffinity is added to the values file, I expect it to be respected, deploying one pod per node.

Here is the deployment description when attempting to run a 3-replica deployment with helm:

kubectl describe deployment -n kube-system kube-vip-cloud-provider
Name:                   kube-vip-cloud-provider
Namespace:              kube-system
CreationTimestamp:      Wed, 04 Oct 2023 01:21:34 +0000
Labels:                 app.kubernetes.io/managed-by=Helm
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: kube-vip-cloud-provider
                        meta.helm.sh/release-namespace: kube-system
Selector:               app.kubernetes.io/instance=kube-vip-cloud-provider,app.kubernetes.io/name=kube-vip-cloud-provider
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=kube-vip-cloud-provider
                    app.kubernetes.io/name=kube-vip-cloud-provider
  Service Account:  kube-vip-cloud-provider
  Containers:
   kube-vip-cloud-provider:
    Image:      kubevip/kube-vip-cloud-provider:v0.0.7
    Port:       <none>
    Host Port:  <none>
    Command:
      /kube-vip-cloud-provider
      --leader-elect-resource-name=kube-vip-cloud-controller
    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:        50m
      memory:     64Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-vip-cloud-provider-754c6674b (3/3 replicas created)
Events:          <none>

Here is the pod description showing tolerations but no affinity:

kubectl describe pod -n kube-system kube-vip-cloud-provider-754c6674b-tphwm
Name:             kube-vip-cloud-provider-754c6674b-tphwm
Namespace:        kube-system
Priority:         0
Service Account:  kube-vip-cloud-provider
Node:             server-01/x.x.x.x
Start Time:       Wed, 04 Oct 2023 01:21:35 +0000
Labels:           app.kubernetes.io/instance=kube-vip-cloud-provider
                  app.kubernetes.io/name=kube-vip-cloud-provider
                  pod-template-hash=754c6674b
Annotations:      <none>
Status:           Running
IP:               10.42.0.148
IPs:
  IP:           10.42.0.148
Controlled By:  ReplicaSet/kube-vip-cloud-provider-754c6674b
Containers:
  kube-vip-cloud-provider:
    Container ID:  containerd://f273286093b6e2d50a5eb662b6ea92a2909fb4eced4c80053fcce37be9f4d057
    Image:         kubevip/kube-vip-cloud-provider:v0.0.7
    Image ID:      docker.io/kubevip/kube-vip-cloud-provider@sha256:07bc28af895dc8bc04489bf79a92f74d4b0e325c863b6711cc0655b4ca0fd19b
    Port:          <none>
    Host Port:     <none>
    Command:
      /kube-vip-cloud-provider
      --leader-elect-resource-name=kube-vip-cloud-controller
    State:          Running
      Started:      Wed, 04 Oct 2023 01:21:46 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:        50m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6fgkp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-6fgkp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
Events:                      <none>

What works without helm

This setup works when using a manifest file instead of helm:

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                component: kube-vip-cloud-provider
            topologyKey: kubernetes.io/hostname
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/master
                  operator: Exists
            - matchExpressions:
                - key: node-role.kubernetes.io/control-plane
                  operator: Exists
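
Once the chart template actually consumes the value, the equivalent block under the chart's affinity value would presumably be the following (hypothetical: the label selector reuses the app.kubernetes.io labels from the describe output above, since the chart-rendered pods do not carry the component label used by the plain manifest):

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/name: kube-vip-cloud-provider
            topologyKey: kubernetes.io/hostname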

A LoadBalancer service was failing on port 80 only

Before I switched to MetalLB (which doesn't solve all my problems either), I was trying kube-vip.
nginx-ingress was getting the hard-coded IP address, and I was able to connect to the exposed IP on port 443 for Argo CD; however, it randomly entered a failing state on port 80.
A MAC address change correlated with that. I could freely access the service on all the NodePort ports on all hosts, and forwarding to the ingress also worked fine. ipvsadm -ln showed no listener changes. I can hardly imagine what the reason could be. Here are my configuration and the looped curl output.

apiVersion: v2
name: kube-vip
type: application
version: 0.4.4
appVersion: "0.4.1"
dependencies:
- name: kube-vip
  version: 0.4.4
  repository: https://kube-vip.github.io/helm-charts
kube-vip:
  affinity: {}
  config:
    address: "192.168.113.113"
  env:
    cp_enable: "false"
    lb_enable: "false"
    lb_port: "6443"
    svc_enable: "true"
    vip_arp: "true"
    vip_cidr: "192.168.113.0/24"
    vip_interface: ""
    vip_leaderelection: "true"
  envFrom: []
  envValueFrom: {}
  fullnameOverride: ""
  image:
    pullPolicy: IfNotPresent
    repository: ghcr.io/kube-vip/kube-vip
    tag: v0.5.11 #I also tried v0.6.2
  imagePullSecrets: []
  nameOverride: ""
  nodeSelector: {}
  podAnnotations: {}
  podSecurityContext: {}
  resources: {}
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
      - NET_RAW
  serviceAccount:
    annotations: {}
    create: true
    name: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1021 ms: Couldn't connect to server
HTTP/1.1 308 Permanent Redirect
Date: Tue, 05 Sep 2023 10:26:08 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
Location: https://example.com/argo

HTTP/2 301 
date: Tue, 05 Sep 2023 10:26:09 GMT
content-type: text/html; charset=utf-8
location: /argo/
strict-transport-security: max-age=15724800; includeSubDomains

HTTP/2 200 
date: Tue, 05 Sep 2023 10:26:09 GMT
content-type: text/html; charset=utf-8
content-length: 788
accept-ranges: bytes
content-security-policy: frame-ancestors 'self';
vary: Accept-Encoding
x-frame-options: sameorigin
x-xss-protection: 1
strict-transport-security: max-age=15724800; includeSubDomains

curl: (7) Failed to connect to 192.168.113.200 port 80 after 13 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 14 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 7 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1007 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1010 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1015 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1010 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1010 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1009 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1009 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1016 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1009 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1011 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1009 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1008 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1011 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1010 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 8 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1008 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1011 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1008 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1012 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1018 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1014 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1008 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1010 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1009 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1015 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1009 ms: Couldn't connect to server
curl: (7) Failed to connect to 192.168.113.200 port 80 after 1012 ms: Couldn't connect to server
HTTP/1.1 308 Permanent Redirect
Date: Tue, 05 Sep 2023 10:26:42 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
Location: https://example.com/argo

HTTP/2 301 
date: Tue, 05 Sep 2023 10:26:42 GMT
content-type: text/html; charset=utf-8
location: /argo/
strict-transport-security: max-age=15724800; includeSubDomains

HTTP/2 200 
date: Tue, 05 Sep 2023 10:26:42 GMT
content-type: text/html; charset=utf-8
content-length: 788
accept-ranges: bytes
content-security-policy: frame-ancestors 'self';
vary: Accept-Encoding
x-frame-options: sameorigin
x-xss-protection: 1
strict-transport-security: max-age=15724800; includeSubDomains

Kube-vip for Control plane only - A lot of restarts

Hello.

I'm on K8s 1.29 (an RKE2 cluster), running kube-vip for control-plane HA only, and I'm seeing a lot of pod restarts; each one translates into a loss of access to the API, which is kind of annoying.

The deployment is rather simple. Values are:

image:
  tag: v0.7.1
env:
  vip_interface: bond0
  vip_leaderelection: true
  svc_enable: false
  cp_enable: true
  vip_leaseduration: 5
  vip_renewdeadline: 3
  vip_retryperiod: 1
[...]

DaemonSet is running, obviously, on control nodes only. The pods look like:

kube-vip-g7d2z 1/1 Running 9 (2m26s ago) 66m
kube-vip-h5vqt 1/1 Running 6 (10m ago) 66m
kube-vip-nkhpr 1/1 Running 5 (2m13s ago) 66m

Logs show:

level=fatal msg="lost leadership, restarting kube-vip"

Any clue?
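
One hypothesis worth testing (an assumption, not a confirmed fix): the lease timings above are aggressive, so a short API-server latency spike is enough for the leader to miss its renew deadline, at which point kube-vip deliberately exits with the "lost leadership" message. Relaxing the timings towards the defaults used by Kubernetes' own leader election would look like:

    env:
      vip_leaseduration: 15  # illustrative values; tune for your environment
      vip_renewdeadline: 10
      vip_retryperiod: 2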

helm repo returning 404

https://kube-vip.io/helm-charts now returns a 404

Helm chart: failed to pull chart: looks like "https://kube-vip.io/helm-charts" is not a valid chart repository or cannot be reached: failed to fetch https://kube-vip.io/helm-charts/index.yaml : 404 Not Found
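
Note that the chart dependency quoted in another issue on this page pulls from https://kube-vip.github.io/helm-charts, which appears to be where the charts live now; as a Chart.yaml dependency that would be:

    dependencies:
      - name: kube-vip
        version: 0.4.4  # the version shown in the example above; pick the current release
        repository: https://kube-vip.github.io/helm-charts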

Versions not updated

The version of kube-vip-cloud-provider is v0.0.10, but the helm chart deploys v0.0.4.
The version of kube-vip is v0.8.1, but the helm chart deploys v0.8.0.
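
As a workaround until the chart defaults are bumped, the deployed image can be pinned through the image values the chart already exposes (shown in the values dump earlier on this page):

    image:
      repository: ghcr.io/kube-vip/kube-vip
      tag: v0.8.1  # the upstream version noted above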
