
kube-static-egress-ip's Introduction

kube-static-egress-ip

A Kubernetes CRD to manage static egress IP addresses for workloads

Note: This project is in alpha stage. We are actively working on improving the functionality and incorporating user feedback. Please see the roadmap. You are welcome to try it out and provide feedback.

Overview

Problem Statement

Kubernetes Ingress and Services provide a good solution for exposing services in the cluster to external clients. With these constructs, you have fine-grained control over which workloads (sets of pods) are exposed, how they are exposed, and who can access them. But what about managing traffic in the reverse direction? How can workloads running in the cluster securely access services outside the cluster? Through egress network policies we have basic control over which pods can access which services. Beyond that, however, Kubernetes does not prescribe how egress traffic is handled, and CNI network plug-ins provide varying functionality for handling egress traffic from pods.

One common solution offered across CNIs is to masquerade egress traffic from pods running on a node, using the node's IP as the source IP for outbound traffic. As pod IPs are not necessarily routable from outside the cluster, this gives pods a way to communicate with services outside the cluster. Most production-grade on-premises or cloud deployments restrict access to services (i.e., whitelist traffic) so that only trusted entities can reach them. This poses a challenge, from a security perspective, for workloads running in the Kubernetes cluster, as there is no predictable egress IP used for their outbound traffic. It is also highly desirable to have fine-grained control over which IP addresses are used for outbound traffic from a workload (set of pods) running on the Kubernetes cluster, as not all workloads in a cluster may be allowed to access an external service.

Solution

kube-static-egress-ip provides a solution with which a cluster operator can define an egress rule for a set of pods whose outbound traffic to a specified destination is always SNAT'ed with a configured static egress IP. kube-static-egress-ip provides this functionality in a Kubernetes-native way using custom resources.

For example, below is a sample definition of a staticegressip custom resource defined by kube-static-egress-ip. In this example, all outbound traffic from the pods belonging to the service frontend to the destination IP 4.2.2.2 will be SNAT'ed to use 100.137.146.100 as the source IP, so all traffic from the selected pods to 4.2.2.2 appears to come from 100.137.146.100.

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: eip
spec:
  rules:
  - egressip: 100.137.146.100
    service-name: frontend
    cidr: 4.2.2.2/32

Getting Started

How it works

kube-static-egress-ip runs as a daemonset on the cluster. Each node takes the role of either a director or a gateway. Director nodes redirect traffic from the pods that need a static egress IP to one of the nodes in the cluster acting as the Gateway. The Gateway node is set up to SNAT the traffic from those pods so that the configured static egress IP is used as the source IP. Return traffic is sent back to the director node running the pod. The following steps describe the life of a packet originating from a pod that needs a static egress IP (a rough sketch of the underlying rules follows the list).

  1. Pod 2 sends traffic to a destination.
  2. The director node (set up by kube-static-egress-ip to redirect) redirects the packets to the gateway node if pod 2 is sending traffic to the specified destination.
  3. The node acting as gateway receives the traffic, performs SNAT (with the configured egress IP), and sends the packet out to the destination.
  4. The gateway node receives the response packet from the destination.
  5. The gateway node performs DNAT (to the pod IP) and forwards the packet to the director node.
  6. The director node forwards the traffic to the pod.
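
In concrete terms, the director and gateway are wired up with standard ipset/iptables/iproute2 primitives. The sketch below is hand-written and illustrative only: the controller generates its own set, chain, and table names (some of which are visible in the issue reports further down this page), and the mark-based rule is an assumption about the mechanics, not a copy of the actual implementation.

# On the director: collect the selected pods' IPs in an ipset and steer their
# traffic for the destination CIDR into a custom routing table that points at
# the gateway node.
ipset create EGRESS-IP-EXAMPLE hash:ip
ipset add EGRESS-IP-EXAMPLE 10.244.1.10        # a pod IP from the service endpoints
iptables -t mangle -A PREROUTING -m set --match-set EGRESS-IP-EXAMPLE src \
  -d 4.2.2.2/32 -j MARK --set-mark 1000
ip rule add fwmark 1000 table kube-static-egress-ip
ip route add 4.2.2.2/32 via <gateway-node-ip> table kube-static-egress-ip

# On the gateway: SNAT traffic from those pods to the configured egress IP.
iptables -t nat -A STATIC-EGRESS-NAT-CHAIN -m set --match-set EGRESS-IP-EXAMPLE src \
  -d 4.2.2.2/32 -j SNAT --to-source 100.137.146.100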

Please see the design details to understand how the egress traffic from the pods is sent across the cluster to achieve static egress IP functionality.

Installation

kube-static-egress-ip is pretty easy to get started with.

Install the staticegressip Custom Resource Definition (CRD) as follows:

kubectl apply -f https://raw.githubusercontent.com/nirmata/kube-static-egress-ip/master/config/static-egressip-crd.yaml

Create the RBAC permissions needed to run the controllers:

kubectl apply -f https://raw.githubusercontent.com/nirmata/kube-static-egress-ip/master/config/static-egressip-rbac.yaml

Next, install the deployment for the static-egressip-gateway-manager, which automatically selects nodes to act as the gateway for a StaticEgressIP custom resource:

kubectl apply -f https://raw.githubusercontent.com/nirmata/kube-static-egress-ip/master/config/static-egressip-gateway-manager.yaml

You should see the pods running for the deployment created for static-egressip-gateway-manager:

kubectl get pods -o wide -n kube-system -l name=static-egressip-gateway-manager
NAME                                              READY     STATUS    RESTARTS   AGE       IP             NODE            NOMINATED NODE   READINESS GATES
static-egressip-gateway-manager-d665565cb-hwrts   1/1       Running   0          25m       10.244.2.208   falnnel-node2   <none>           <none>
static-egressip-gateway-manager-d665565cb-qtnms   1/1       Running   0          25m       10.244.1.187   flannel-node1   <none>           <none>
static-egressip-gateway-manager-d665565cb-xwdgr   1/1       Running   0          25m       10.244.1.186   flannel-node1   <none>           <none>

Finally, install the daemonset that runs the static-egressip-controller on each node and configures the node to act as a director or gateway for a StaticEgressIP custom resource:

kubectl apply -f https://raw.githubusercontent.com/nirmata/kube-static-egress-ip/master/config/static-egressip-controller.yaml

You should see the pods running on each node of the cluster. For example:

kubectl get pods -o wide -n kube-system -l k8s-app=static-egressip-controller
NAME                               READY     STATUS    RESTARTS   AGE       IP              NODE             NOMINATED NODE   READINESS GATES
static-egressip-controller-jbgf5   1/1       Running   0          20m       192.168.1.201   flannel-node1    <none>           <none>
static-egressip-controller-k4w59   1/1       Running   0          20m       192.168.1.200   flannel-master   <none>           <none>
static-egressip-controller-lhn5l   1/1       Running   0          20m       192.168.1.202   falnnel-node2    <none>           <none>

At this point you are all set to deploy staticegressip objects and see things in action.

staticegressip resources

You can then create a staticegressip resource object like any other Kubernetes resource object. For example:

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: eip
spec:
  rules:
  - egressip: 100.137.146.100
    service-name: frontend
    cidr: 4.2.2.2/32

The spec consists of one or more rules. Each rule defines the following:

  • service-name: the Kubernetes service whose selected pods are the traffic source
  • cidr: the destination address (CIDR) for the egress traffic from the selected pods
  • egressip: the IP address to which traffic should be SNAT'ed, hence providing a static egress IP

Please modify the provided example manifest as per your setup to try it out.
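
For example, assuming you saved the manifest above as staticegressip.yaml (and that the CRD's plural resource name is staticegressips, as suggested by the API group), you can apply and inspect it like any other resource:

kubectl apply -f staticegressip.yaml
kubectl get staticegressips
kubectl describe staticegressip eip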

Goals

  • a generic solution that works across popular CNIs like Flannel, Weave, Calico, etc.
  • a scalable solution where the role of Gateway can be spread across more than one node
  • a solution that is highly available and resilient to node failures
  • fine-grained controls in the staticegressip resource to choose the set of pods by namespace, service, or general label selectors
  • no compromise on egress network policy enforcement
  • automatic selection of the node Gateway role, via leader election, requiring no manual involvement

Status

Here is the current status of the project:

  • supports CNIs that support direct routing of pod traffic to other nodes; Flannel in host-gw backend mode, Calico, and kube-router can be used
  • the operator has to manually choose a node to act as Gateway by annotating the node
  • only a single node acts as a gateway
  • no high availability: if the node acting as Gateway dies, the functionality no longer works
  • egress IPs specified in the staticegressip are expected to be routable to the node acting as Gateway in the cluster
  • supports selection of pods by the provided Kubernetes service name

Roadmap

  • support CNIs that provide an overlay solution, like Flannel with the vxlan backend, Weave, etc.
  • support leader election among the nodes so that the operator does not have to choose and configure a node as Gateway
  • support more than a single node acting as Gateway
  • when a node acting as Gateway dies, reconfigure so that a new node takes over the Gateway functionality

kube-static-egress-ip's People

Contributors

jimbugwadia, lyyao09, murali-reddy, nicolaischmid, prateekgogia


kube-static-egress-ip's Issues

Update of the StaticEgressIP object does not work properly

Various updates of the StaticEgressIP object do not work properly.

Remove the StaticEgressIP object

Set up a staticegressip, then delete it with:

kubectl delete staticegressip egressip-alpine

The egress-ip function is removed; outgoing connections from the pod are again NAT'ed to the node IP.

On the directors everything seems OK: the ipset and the rule in the mangle table on the directors are correctly removed.

But on the gateway some settings are not cleaned up:

The SNAT rule is not removed:

Chain STATIC-EGRESS-NAT-CHAIN (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    1    60 SNAT       all  --  *      *       0.0.0.0/0            192.168.2.0/24       match-set EGRESS-IP-QPAZYHZ2OUEYTPUQ src to:15.0.0.1

Also, the ipset is not removed, but all entries are flushed:

# ipset list EGRESS-IP-QPAZYHZ2OUEYTPUQ
Name: EGRESS-IP-QPAZYHZ2OUEYTPUQ
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536 timeout 0
Size in memory: 88
References: 2
Number of entries: 0

Update the egressip

The egressip in an existing StaticEgressIP object is updated from
"15.0.0.1" to "15.0.0.13".

On the directors nothing should be altered, and it isn't.

On the gateway the old SNAT rule is not removed, which I guess is the same issue as described above for removal of the object. The new SNAT rule is fortunately inserted before the old one, so it seems to take precedence and the egress-ip is SNAT'ed correctly.

Chain STATIC-EGRESS-NAT-CHAIN (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    1    60 SNAT       all  --  *      *       0.0.0.0/0            192.168.2.0/24       match-set EGRESS-IP-QPAZYHZ2OUEYTPUQ src to:15.0.0.13
    0     0 SNAT       all  --  *      *       0.0.0.0/0            192.168.2.0/24       match-set EGRESS-IP-QPAZYHZ2OUEYTPUQ src to:15.0.0.1

Update the cidr

The cidr in the StaticEgressIP object is updated from "192.168.2.0/24" to "111.0.0.0/24".

Connections to the new cidr are correctly SNAT'ed to the egress-ip, but connections to the old cidr are still (incorrectly) SNAT'ed as well.

On the directors the new cidr is added but the old one is not removed:

# ip ro show table kube-static-egress-ip
111.0.0.0/24 via 192.168.1.3 dev eth1 
192.168.2.0/24 via 192.168.1.3 dev eth1 

On the gateway the old SNAT rule remains:

    1    60 SNAT       all  --  *      *       0.0.0.0/0            111.0.0.0/24         match-set EGRESS-IP-QPAZYHZ2OUEYTPUQ src to:15.0.0.1
    2   120 SNAT       all  --  *      *       0.0.0.0/0            192.168.2.0/24       match-set EGRESS-IP-QPAZYHZ2OUEYTPUQ src to:15.0.0.1
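
Until this is fixed in the controller, a manual clean-up on the affected nodes is possible (the rule number and set name below are the ones from this report; list your own first):

# on the gateway: list and delete the stale SNAT rule, then drop the leftover ipset
iptables -t nat -L STATIC-EGRESS-NAT-CHAIN -n --line-numbers
iptables -t nat -D STATIC-EGRESS-NAT-CHAIN 2
ipset destroy EGRESS-IP-QPAZYHZ2OUEYTPUQ      # only once nothing references it anymore

# on the directors: remove the stale route from the custom table
ip route del 192.168.2.0/24 table kube-static-egress-ip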

Pod without incoming traffic

Hi, I would like to ask: if my pods/deployments only call external services and do not need incoming traffic, how do I configure kube-static-egress-ip?

Thank you in advance!

Merging Kube-static-egress-ip with MetalLB Loadbalancer

Hello,

Can anyone suggest how I can configure my service so that the LoadBalancer's IP and the egress IP are the same for a given service?

Any pointers on how we can achieve that, with or without coding, will be helpful.

I am good with Golang and basic networking, but haven't gone too deep into Kubernetes/overlay networking.

Thanks in Advance

Static Egress IP per namespace

Would be useful to have a static egress IP per namespace to further tighten access to external resources. Perhaps even an egress IP per <services A, B, C> tuple.

How to assign a static IP on GKE to the egress

I am wondering how the static IP is assigned and can be used on GKE.
The readme does not explain this, and it looks like you can just specify an IP.
Can someone tell me how this works?

Allow rules to specify egress cidr to allow egress from multiple nodes.

Right now a director is required because a single static egress IP is desired; the downside is that the director itself is a point of failure. What if there is a situation where I would rather have traffic go out from whatever node the source pod is on, and am less picky about the specific IP?

In my case, each kube node has multiple interfaces available. All egress traffic by default will go out as the public IP address calico is managing, but particular services I would like to use a secondary range.

What if the StaticEgressIP crd allowed a cidr instead of a single IP?

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: eip
spec:
  rules:
  - egresscidr: 100.137.146.0/24
    service-name: frontend
    cidr: 4.2.2.2/32

The routing rules would then be a bit simpler:

  • If the local node has an interface with a matching IP, send traffic from the frontend pods to the destination cidr on that interface.
  • If not, forward to the director as before.

I feel like such a solution could be a fairly painless way to relax the requirement of a single director for most traffic in some environments.

Does this seem like a possibility?

SNAT doesn't take effect

Hi, the messages to the specified destination can be routed to the gateway. However, the SNAT doesn't take effect, since Calico ensures its chain is always the first rule in POSTROUTING, even though I manually moved STATIC-EGRESS-NAT-CHAIN before MASQUERADE.

$ sudo iptables -L STATIC-EGRESS-NAT-CHAIN -t nat
Chain STATIC-EGRESS-NAT-CHAIN (1 references)
target     prot opt source               destination
SNAT       all  --  anywhere             10.124.200.68        match-set EGRESS-IP-A6RUBTJVWO4N6RIK src to:10.41.82.253
$ sudo iptables -L POSTROUTING -t nat --line-number
Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    cali-POSTROUTING  all  --  anywhere             anywhere             /* cali:O3lYWMrLQYEMJtB5 */
2    STATIC-EGRESS-BYPASS-CNI  all  --  anywhere             anywhere
3    STATIC-EGRESS-NAT-CHAIN  all  --  anywhere             anywhere
4    KUBE-POSTROUTING  all  --  anywhere             anywhere             /* kubernetes postrouting rules */
5    MASQUERADE  all  --  bovis-z1020-172-17-0-0.extern.sw.ericsson.se/16  anywhere
$  sudo iptables -L cali-POSTROUTING -t nat --line-number
Chain cali-POSTROUTING (1 references)
num  target     prot opt source               destination
1    cali-fip-snat  all  --  anywhere             anywhere             /* cali:Z-c7XtVd2Bq7s_hA */
2    cali-nat-outgoing  all  --  anywhere             anywhere             /* cali:nYKhEzDlr11Jccal */
3    MASQUERADE  all  --  anywhere             anywhere             /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
$  sudo iptables -L cali-fip-snat -t nat --line-number
Chain cali-fip-snat (1 references)
num  target     prot opt source               destination
$  sudo iptables -L cali-nat-outgoing -t nat --line-number
Chain cali-nat-outgoing (1 references)
num  target     prot opt source               destination
1    MASQUERADE  all  --  anywhere             anywhere             /* cali:Dw4T8UWPnCLxRJiI */ match-set cali40masq-ipam-pools src ! match-set cali40all-ipam-pools dst

By the way, the ipipEnabled is true in calico's configuration.
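
A workaround that is sometimes used with Calico (an assumption on my part, not something kube-static-egress-ip documents) is to make the destination a member of Calico's IP pools: the MASQUERADE rule in cali-nat-outgoing above only matches destinations that are not in cali40all-ipam-pools, so a disabled pool covering the destination exempts it from masquerading and lets STATIC-EGRESS-NAT-CHAIN do its SNAT later in POSTROUTING. Apply it with calicoctl (or kubectl if the Calico API server is installed):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: no-masq-10-124-200-68
spec:
  cidr: 10.124.200.68/32
  blockSize: 32
  disabled: true   # never used for IPAM, only exempts the CIDR from nat-outgoing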

egress ip doesn't work

Hello,

I'm new to iptables-level details for troubleshooting. Here is what I tried to set up and test:

VM1 - run nc -lvk 5500
VM2 - telnet <VM1_IP_ADDRESS> 5500 <=== This works, as it's directly VM to VM over a reachable IP address

Now, set up the static egress IP:

    - egressip: <VM2_IP_ADDRESS>
    service-name: frontend
    cidr: <VM1_IP_ADDRESS>/32

After applying the rule, I tried telnet from a pod matched by the service:
kubectl exec -ti <POD_NAME> -- telnet <VM1_IP_ADDRESS> 5500 <= This is expected to work. But it doesn't.

Operating System: RHEL 7.8
Kubernetes: 1.14

Please advise on how to troubleshoot this.
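
A few places to look, based on how the controller wires things up in other reports on this page (names will differ in your cluster):

# on the node running the pod (director): is the pod IP in the egress ipset,
# and is there a route to the gateway in the custom table?
ipset list
ip rule
ip route show table kube-static-egress-ip
iptables -t mangle -L PREROUTING -v -n

# on the gateway node: is the SNAT rule present, and do its packet counters move?
iptables -t nat -L STATIC-EGRESS-NAT-CHAIN -v -n

# on VM1: confirm which address the connection appears to come from
tcpdump -ni any port 5500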

static-egressip-controller is getting CrashLoopBackOff

kubectl get pods -o wide -n kube-system -l k8s-app=static-egressip-controller
NAME                               READY   STATUS             RESTARTS      AGE   IP              NODE                                NOMINATED NODE   READINESS GATES
static-egressip-controller-rh6hd   0/1     CrashLoopBackOff   2 (20s ago)   38s   10.194.205.15   preprod-pool-1-468a8c918adf48b4b7   <none>           <none>
static-egressip-controller-sm5sg   0/1     CrashLoopBackOff   2 (21s ago)   38s   10.75.12.9      preprod-pool-1-78acfc47c1734c329a   <none>           <none>
kubectl logs -p static-egressip-controller-sm5sg -n kube-system
I0401 09:08:40.272551       1 main.go:45] Running Nirmata static egress ip controller version: 
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1a0 pc=0xed62aa]

goroutine 1 [running]:
github.com/nirmata/kube-static-egress-ip/pkg/utils.GetNodeIP(0x0, 0xc42034a120, 0x0, 0x0, 0x0, 0x136c1e0)
        /workspace/golang/src/github.com/nirmata/kube-static-egress-ip/pkg/utils/node.go:47 +0x3a
github.com/nirmata/kube-static-egress-ip/pkg/controller.NewEgressIPController(0x13a7dc0, 0xc42034a120, 0x137bda0, 0xc4203fd340, 0x13759e0, 0xc42042c600, 0x136fce0, 0xc42042c690, 0x12)
        /workspace/golang/src/github.com/nirmata/kube-static-egress-ip/pkg/controller/controller.go:81 +0x63
main.main()
        /workspace/golang/src/github.com/nirmata/kube-static-egress-ip/cmd/static-egressip-controller/main.go:68 +0x30b

High level design for static egress configuration

Hi

I validated and documented the second approach I was talking about for running NAT gateway as a POD, here

https://docs.google.com/document/d/1usUZQ6q3o9n23IH7iOF6OSa4BvaZidnBTj5tj-o-8Ks/edit#heading=h.2q2h912eh0x2

I validated the approach by running a POD in a Kubernetes cluster and configuring IPTABLES rules to do DNAT and SNAT inside this POD. A NAT pod will have a one-to-one mapping to a backend service to keep it simple to start with, later a single NAT gateway may forward traffic to multiple backend services. Now any traffic coming in to this NAT gateway gets forwarded to the backend application with source IP as a VIP IP configured as secondary IP on this pods interface (eth0:0).

Please provide feedback and we can discuss this in our next meeting.

weave net CNI support

Hi, first of all, I wanna say thanks for a nice attempt for solving this K8s egress problem, if I understand right, this solution doesn't support weave net CNI for now? I tried to test it on my own on-premise K8s cluster with Weave CNI but it looks like it does not work properly.

Failed to get endpoints object for service due to endpoints not found

Following through the instructions on the main README.md, I end up with the following error when trying to apply a new egress rule.

System notes:
k8s_kubespray_version = "v2.10.3"
k8s_version = "v1.14.3"
Networking - Calico

Steps to reproduce:

Follow the docs in the readme through to the point of customising the staticegressip resource. Everything comes up as expected (controllers and gateways are running), until the static egressip resource is applied.

egress object definition:

egress-spec.yaml

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test
spec:
  rules:
  - egressip: 192.168.71.183
    service-name: busybox2-busybox
    cidr: 192.168.71.204/32

I was running a busybox helm install to test the egress rules:

helm install --name busybox-mon jfelten/busybox --namespace kube-system

but I have tried adding egress against existing services and get the same error. I have also tried using the endpoint address and the internal cluster DNS name, even though the docs specify a service name, just to check: same error.

I1118 10:35:36.496798       1 main.go:45] Running Nirmata static egress ip controller version: 
I1118 10:35:36.498962       1 controller.go:85] Setting up event handlers to handle add/delete/update events to StaticEgressIP resources
I1118 10:35:36.498995       1 controller.go:107] Starting StaticEgressIP controller
I1118 10:35:36.507200       1 controller.go:117] Configured node to act as a egress traffic gateway
I1118 10:35:36.514791       1 controller.go:329] Adding StaticEgressIP: default/test
I1118 10:35:36.515533       1 director.go:87] Node has been setup for static egress IP director functionality successfully.
I1118 10:35:36.515549       1 controller.go:127] Configured node to act as a egress traffic director
I1118 10:35:36.515553       1 controller.go:130] Waiting for informer caches to sync
I1118 10:35:36.615732       1 controller.go:135] Starting workers
I1118 10:35:36.615766       1 controller.go:141] Started workers
I1118 10:35:36.615818       1 controller.go:233] Processing update to StaticEgressIP: default/test
E1118 10:35:36.632545       1 controller.go:274] Failed to get endpoints object for service busybox2-busybox due to endpoints "busybox2-busybox" not found
E1118 10:35:36.632667       1 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/workspace/golang/src/github.com/nirmata/kube-static-egress-ip/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72

Services in kubectl get services --namespace kube-system

busybox2-busybox       ClusterIP   10.233.27.118   <none>        80/TCP                   2d18h
coredns                ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   19d
kubernetes-dashboard   ClusterIP   10.233.21.101   <none>        443/TCP                  19d
tiller-deploy          ClusterIP   10.233.60.188   <none>        44134/TCP                19d

kubectl get endpoints --namespace kube-system

busybox2-busybox          10.233.117.48:80                                              2d19h
coredns                   10.233.64.29:53,10.233.88.10:53,10.233.64.29:53 + 3 more...   19d
kindred-ibis-busybox      <none>                                                        2d19h
kube-controller-manager   <none>                                                        19d
kube-scheduler            <none>                                                        19d
kubernetes-dashboard      10.233.117.11:8443                                            19d
tiller-deploy             10.233.112.1:44134                                            19d
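
One detail that stands out in the logs: the StaticEgressIP was created as default/test, while the busybox2-busybox service and its endpoints live in kube-system. Assuming the controller resolves service-name within the StaticEgressIP's own namespace (which the "endpoints not found" error suggests), creating the resource in the same namespace as the service may avoid the error:

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test
  namespace: kube-system   # same namespace as the busybox2-busybox service
spec:
  rules:
  - egressip: 192.168.71.183
    service-name: busybox2-busybox
    cidr: 192.168.71.204/32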

No ready nodes

I applied all the standard configuration and the standard example with a 0.0.0.0/0 CIDR, and one of the gateway pods is constantly logging:

2020/01/20 11:17:25 Failed to allocate a Gateway node for static egress IP custom resource: test due to: Failed to allocate gatewway as there are no ready nodes

Am I missing something? The development docs said something about an annotation, but that doesn't seem to work either.
I appreciate any help.

Help with the initial testing

I'm very new to this routing concept.

I have set up all the components as per the instructions.

I deployed a service and gave it the static egress configuration below:

Spec:
Rules:
Cidr: 29.90.189.150
Egressip: 11.17.2.208
Service - Name: web

How do I test the traffic now? How can I confirm that the traffic is coming from 11.17.2.208?
Any help on this is appreciated.
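
One way to confirm the source address, assuming you can run commands on the destination host (29.90.189.150 here): capture or listen there and watch which address the connections arrive from.

# on the destination host
tcpdump -ni any host 11.17.2.208     # should show packets once the egress IP is in use
nc -lvk 8080                         # nc prints the peer address of each incoming connection

# from a pod selected by the "web" service (if the image has wget or curl)
kubectl exec -it <pod-name> -- wget -qO- http://29.90.189.150:8080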

define CR and build CRD controller

  • define custom resource to represent the static egress IP for the set of pods (selected by labels, namespace etc)
  • use the code generator to generate listers, informers and clientset
  • build CRD controller to watch for add/delete/list events of the custom resource

Unable to redirect the traffic in my k8s cluster

I am following the installation but I am not able to route the pod traffic from my VM (VM 1) to another gateway VM (VM 2). I am using Calico CNI and there is only 1 master node (VM 2) and 1 worker node (VM1).

I have a ClusterIP service; the actual pod is living on VM1, and I want that service's traffic routed to VM2 as the gateway. VM1 and VM2 are in the same k8s cluster.

There are two problems I observed:

  1. The additional route in the kube-static-egress-ip table is never used. I have to add the exact same route in the default routing table to make the traffic route to the gateway VM.

  2. (After I manually fixed #1) When the pod traffic leaves VM1, Calico uses the node IP address. But the ipset that you configure only includes the pod IP, and therefore the traffic doesn't hit the SNAT rule that you configure, so it is never SNAT'ed. If I manually add the node IP to the ipset, everything works.

My question is: during your setup, how do you make the pod traffic keep the pod IP when it leaves the VM?

Also, the custom routing table doesn't work for me. What is the intention of using the custom routing table?
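
For reference, a custom routing table only takes effect if an ip rule selects it for the matched traffic, and the SNAT rule only matches what is in the ipset. On the director you can inspect both (table and set naming as seen elsewhere on this page):

ip rule                                     # should contain a rule pointing at the custom table
ip route show table kube-static-egress-ip   # should contain the destination CIDR via the gateway
ipset list                                  # should contain the pod IPs of the service endpoints
iptables -t mangle -L PREROUTING -v -n      # the rule matching the ipset, with moving counters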

segfault from static-egressip-controller

Managed to load everything but seeing segfaults from static-egressip-controller. No hints as to what is happening. Any ideas?

Running on k8s 1.14.3 (edited the CRDs to use the older beta API version) with Flannel.

$ kubectl get pods -o wide -n kube-system -l name=static-egressip-gateway-manager
NAME                                               READY   STATUS             RESTARTS   AGE   IP             NODE                    NOMINATED NODE   READINESS GATES
static-egressip-gateway-manager-56b76ccbbf-59cr4   0/1     CrashLoopBackOff   1          8s    10.233.69.36   n2   <none>           <none>
static-egressip-gateway-manager-56b76ccbbf-k6mpc   0/1     CrashLoopBackOff   1          8s    10.233.68.22   n3   <none>           <none>
static-egressip-gateway-manager-56b76ccbbf-rxxpp   0/1     CrashLoopBackOff   1          8s    10.233.67.34   n1   <none>           <none>

$ kubectl -n kube-system logs static-egressip-gateway-manager-56b76ccbbf-59cr4
I0609 23:15:52.898442       1 main.go:45] Running Nirmata static egress ip controller version:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1a0 pc=0xed62aa]

goroutine 1 [running]:
github.com/nirmata/kube-static-egress-ip/pkg/utils.GetNodeIP(0x0, 0xc420317b00, 0x0, 0x0, 0x0, 0x136c1e0)
        /workspace/golang/src/github.com/nirmata/kube-static-egress-ip/pkg/utils/node.go:47 +0x3a
github.com/nirmata/kube-static-egress-ip/pkg/controller.NewEgressIPController(0x13a7dc0, 0xc420317b00, 0x137bda0, 0xc4204152d0, 0x13759e0, 0xc420444600, 0x136fce0, 0xc420444690, 0x12)
        /workspace/golang/src/github.com/nirmata/kube-static-egress-ip/pkg/controller/controller.go:81 +0x63
main.main()
        /workspace/golang/src/github.com/nirmata/kube-static-egress-ip/cmd/static-egressip-controller/main.go:68 +0x30b

Fails to SNAT to the given static EIP (AWS)

I have a simple one-node EKS cluster, where I have deployed kube-static-egress-ip to SNAT traffic from pods within the cluster to an EIP that I had generated. My CRD looks like this:

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test
spec:
  rules:
  - egressip: <EIP that i generated in the same subnet as the node>
    service-name: http-svc <My service which is fronting the pod>

With this setup, I am unable to egress with the IP provided. In the static-egressip-controller logs, I can see that SNAT failed with the following error:

E1014 14:09:56.802847       1 controller.go:373] Failed to add egress IP 192.168.10.139 for the staticegressip shared-nat-customer1/test on the gateway due to failed to find interface
I1014 14:09:56.802861       1 controller.go:216] Successfully synced 'shared-nat-customer1/test'
I1014 14:10:26.761241       1 controller.go:396] Updating StaticEgressIP: shared-nat-customer1/test
I1014 14:10:26.766465       1 controller.go:250] Processing update to StaticEgressIP: shared-nat-customer1/test
I1014 14:10:26.804443       1 gateway.go:87] Created ipset name: EGRESS-IP-3V5VGT4JGNTLSRYL
I1014 14:10:26.805732       1 gateway.go:96] Added ips [192.168.10.144 192.168.11.236] to the ipset name: EGRESS-IP-3V5VGT4JGNTLSRYL
E1014 14:10:26.806772       1 controller.go:369] Failed to setup rules to send egress traffic on gateway%!(EXTRA string=Failed to verify rule exists in STATIC-EGRESS-FORWARD-CHAIN chain of filter tablerunning [/sbin/iptables -t filter -C STATIC-EGRESS-FORWARD-CHAIN -m set --set EGRESS-IP-3V5VGT4JGNTLSRYL src -d  -j ACCEPT --wait]: exit status 2: --set option deprecated, please use --match-set
iptables v1.6.2: host/network `' not found
Try `iptables -h' or 'iptables --help' for more information.
)
E1014 14:10:26.807080       1 controller.go:373] Failed to add egress IP <EIP> for the staticegressip shared-nat-customer1/test on the gateway due to failed to find interface

What am I missing here?

My EKS K8s version: 1.20
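
One thing visible in the logs: the rule has no cidr, so the generated iptables command ends up with an empty destination (-d followed by nothing, hence "host/network `' not found"). Adding a destination CIDR to the rule is likely part of the fix; the "failed to find interface" error additionally suggests the gateway expects the egress IP to be present on (or usable by) one of its interfaces. A hedged, corrected version of the resource:

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test
spec:
  rules:
  - egressip: <EIP>               # must be usable on the gateway node's interface
    service-name: http-svc
    cidr: <destination-cidr>      # external destination to reach via the EIP; this was missing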

No interface found error

When we try it, the traffic comes from the pod IP and not from the egress IP, with the error "no interface found".

unable to ping destination IP

I am trying to apply nirmata/kube-static-egress-ip to my cluster on bare metal,
but I can't access (ping) the destination IP (172.30.1.103).

My environment is below:

  • CNI: weave
$ kubectl get node -o wide
NAME      STATUS    ROLES     AGE       VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION CONTAINER-RUNTIME
dev      Ready     master    190d      v1.11.2   172.30.1.100   <none>  Ubuntu 18.04.1 LTS   4.15.0-33-generic   docker://18.6.1
node01    Ready     <none>    190d      v1.11.2   172.30.1.102   <none>  Ubuntu 18.04.1 LTS   4.15.0-46-generic   docker://18.6.1
apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test
spec:
  rules:
  - egressip: 172.30.1.102
    service-name: nginx-with-curl
    cidr: 172.30.1.103/24
$ kubectl get pod -nkube-system -o wide
NAME                          READY     STATUS    RESTARTS   AGE       IP             NODE      NOMINATED NODE
coredns-78fcdf6894-d9nc7      1/1       Running   19         190d      10.32.0.47     dev       <none>
coredns-78fcdf6894-fgq55      1/1       Running   19         190d      10.32.0.45     dev       <none>
egressip-controller-njdmb     1/1       Running   7          23h       172.30.1.102   node01    <none>
egressip-controller-vnmjq     1/1       Running   4          23h       172.30.1.100   dev       <none>
etcd-dev                      1/1       Running   19         190d      172.30.1.100   dev       <none>
kube-apiserver-dev            1/1       Running   19         190d      172.30.1.100   dev       <none>
kube-controller-manager-dev   1/1       Running   19         190d      172.30.1.100   dev       <none>
kube-proxy-wdkgw              1/1       Running   20         190d      172.30.1.100   dev       <none>
kube-proxy-xfft6              1/1       Running   14         190d      172.30.1.102   node01    <none>
kube-scheduler-dev            1/1       Running   20         190d      172.30.1.100   dev       <none>
weave-net-8q82g               2/2       Running   41         190d      172.30.1.102   node01    <none>
weave-net-92nxn               2/2       Running   59         190d      172.30.1.100   dev       <none>

egressip from configMap

Hello everyone,
I am configuring my K8s cluster to use a static egress IP via your resource. It's quite a bummer, though, that the egress IP is a literal value in the StaticEgressIP, since it would make more sense to be able to set it up in a ConfigMap for cluster-wide changes when needed.
Do you think you can implement this sooner or later? Or am I missing something?
Thanks.

Daemonset tolerations for node taints

A toleration must be added to the file config/static-egressip-controller.yaml
for the pod to be scheduled to all nodes:

tolerations:
- effect: NoSchedule
  operator: Exists

Design question about Gateway role

I understand the need to route the outbound traffic that needs to be SNAT'ed to a dedicated node in some situations.
However, I wonder if you plan to have a mode where this outbound traffic would be SNAT'ed directly by the originating node? In other words, could this extra step (the Gateway one) be optional?

No clean-up when the egressip service is removed

When the service referenced by service-name: in the StaticEgressIP is removed, nothing is cleaned up. The egress-ip function remains fully operational.

Here is the manifest used:

apiVersion: v1
kind: Service
metadata:
  name: egressip-alpine
spec:
  selector:
    app: alpine
  ports:
  - name: probe
    port: 5001
---
apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: egressip-alpine
spec:
  rules:
  - egressip: 15.0.0.13
    service-name: egressip-alpine
    cidr: 192.168.2.0/24

Then the service is removed with:

kubectl delete svc egressip-alpine

Design assumption isn't passing NFS test

I have a requirement to provide a static egress IP address for a use case where we are using an NFS server with ACLs, but your model assumes that the container IP address is already assigned before the destination static IP is provided. If container startup is dependent on the NFS filesystem, the container cannot start, and thus there is no container IP...

I am wondering if perhaps my use case is just not part of this alpha, but I can see this pattern in our Kubernetes deployments using NFS without the ability to maintain worker node VM IP stickiness.

Thoughts?

Support for a not equal in cidr

Hi,
Is there a way we can provide a CIDR so that all traffic from a pod that is not cluster-internal is directed to the egress director? Will 0.0.0.0/0 work like this?

Thank you

Which kernel parameters may egress depend on?

@murali-reddy, I have been using egress for a while and it's working well.

However, when using it recently on a newly installed CentOS 7.6 operating system, we discovered that the egress traffic from the director to the gateway was discarded by the gateway (this occurs probabilistically when installing a new operating system). I guarantee that the iptables rules and policy routing are configured correctly.

The strange thing is that the location of the traffic loss is between the filter FORWARD and mangle POSTROUTING chains of iptables (as can be seen in the iptables trace log). I don't know what happened, and there is no corresponding debugging method.
Aug 4 16:01:56 node41 kernel: TRACE: filter:FORWARD:policy:10 IN=eth0 OUT=eth0 MAC=0c:da:41:1d:ca:e6:0c:da:41:1d:63:4c:08:00 SRC=177.177.254.67 DST=100.100.2.14 LEN=84 TOS=0x00 PREC=0x00 TTL=62 ID=12251 DF PROTO=ICMP TYPE=8 CODE=0 ID=181 SEQ=1674

At present, I suspect that it may be related to the kernel parameters of the operating system. I don't know which kernel parameters are needed (other than rp_filter), or whether there is any debugging method.
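
For what it's worth, a few generic kernel-side checks on the gateway while reproducing (these are standard sysctls that affect forwarded traffic, not a list the project documents):

sysctl net.ipv4.ip_forward                     # must be 1 on the gateway
sysctl -a 2>/dev/null | grep '\.rp_filter'     # 0=off, 1=strict, 2=loose, per interface
sysctl -w net.ipv4.conf.all.log_martians=1     # log packets dropped as martians to the kernel log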

Failed to setup routes

logs

I0723 19:18:46.507067       1 director.go:102] Created ipset name: EGRESS-IP-XFVL3XZHQWBKAPWE
I0723 19:18:46.507676       1 director.go:111] Added ips [192.168.7.10] to the ipset name: EGRESS-IP-XFVL3XZHQWBKAPWE
I0723 19:18:46.508735       1 director.go:127] iptables rule in mangle table PREROUTING chain to match src to ipset
E0723 19:18:46.522160       1 controller.go:286] Failed to setup routes to send the egress traffic to gateway due to Failed to add route in custom route table due to: exit status 2
I0723 19:18:46.522179       1 controller.go:199] Successfully synced 'resequip/test'

manifest

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test
spec:
  rules:
  - egressip: 10.35.12.65
    service-name: backend
    cidr: 10.35.12.17/32

service

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"backend"},"name":"backend","namespace":"resequip"},"spec":{"ports":[{"name":"http","port":80}],"selector":{"app":"backend"}}}
    metallb.universe.tf/address-pool: oebs
  creationTimestamp: "2019-07-12T01:43:53Z"
  labels:
    app: backend
  name: backend
  namespace: resequip
  resourceVersion: "249156023"
  selfLink: /api/v1/namespaces/resequip/services/backend
  uid: 81891851-a446-11e9-96c8-0050562c0156
spec:
  clusterIP: 192.168.241.54
  externalTrafficPolicy: Local
  healthCheckNodePort: 32592
  loadBalancerIP: 10.35.12.65
  ports:
  - name: http
    nodePort: 30342
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: backend
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.35.12.65
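
To narrow down the "exit status 2", it can help to run the equivalent command by hand on the director, assuming the custom table is the kube-static-egress-ip table seen in other reports here:

grep kube-static-egress-ip /etc/iproute2/rt_tables        # a named table must be registered here
ip route replace 10.35.12.17/32 via <gateway-node-ip> table kube-static-egress-ip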

Support for ipv6-only and later dual-stack

IPv6-only has been supported in k8s since 1.9, but is still in "alpha" (due to lack of e2e testing); however, the work on dual-stack is under way and IPv6 should be supported.

Support for IPv6 is not so hard in golang; most network functions are already dual-stack. For kube-static-egress-ip I think it may be possible to be dual-stack compliant right from the start. The dual-stack KEP proposes different services for IPv4 and IPv6, and kube-static-egress-ip can probably support both depending on the address format. I don't think the CRD has to be altered.

For calls to ip rule/route a -6 parameter must be added, but otherwise the calls are similar. ip6tables must be used instead of iptables, but otherwise the calls are similar. Ipset is a bit of a problem because different sets must be created that differ in name, e.g. an ipv6: prefix, and note that ipset names must be under 32 characters. The IPv6 counterparts are sketched below.
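
A rough sketch of those IPv6 counterparts (standard iproute2/netfilter tooling; the mark, table, chain, set names, and addresses are illustrative, mirroring the IPv4 examples earlier on this page):

ip -6 rule add fwmark 1000 table kube-static-egress-ip
ip -6 route add 2001:db8:ffff::/64 via <gateway-node-ipv6> table kube-static-egress-ip
ip6tables -t nat -A STATIC-EGRESS-NAT-CHAIN -d 2001:db8:ffff::/64 -j SNAT --to-source <egress-ipv6>
ipset create EGRESS-IP6-EXAMPLE hash:ip family inet6    # separate set, name kept under 32 chars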

I/O timeout on manager

I have this error on the manager:

reflector.go:135] github.com/nirmata/kube-static-egress-ip/pkg/client/informers/externalversions/factory.go:117: Failed to list *v1alpha1.StaticEgressIP: Get https://10.96.0.1:443/apis/staticegressips.nirmata.io/v1alpha1/staticegressips?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

leaderelection.go:306] error retrieving resource lock kube-system/static-egress-ip-configmap: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/static-egress-ip-configmap: dial tcp 10.96.0.1:443: i/o timeout

And the egress does not work.

The 10.96.0.1:443 refers to the kubernetes service in the default namespace:
default kubernetes ClusterIP 10.96.0.1 443/TCP 54d

I followed the installation from https://github.com/nirmata/kube-static-egress-ip

`kube-static-egress-ip` results in rp_filter issues in case of multiple physical networks

kube-static-egress-ip directs the traffic from the director node to the gateway node using policy-based routing. For example, if a staticegressip custom resource is created as below:

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: eip
spec:
  rules:
  - egressip: 100.137.146.100
    service-name: frontend
    cidr: 4.2.2.2/32

then traffic from the pods that need a static egress IP, destined to 4.2.2.2, should be steered to the gateway node. While this works fine for nodes with a single network interface, it will fail if the node has multiple interfaces.

For example, if a node has two interfaces eth0 and eth1, the default route to 4.2.2.2 is via eth0, and the director sends the traffic to the gateway node via eth1, then RPF (reverse path filtering) drops the packets.

We can disable RPF, but that is not desirable. kube-static-egress-ip should work through RPF issues using policy-based routing.

make kube-static-egress-ip work with overlay network CNI's

kube-static-egress-ip works with the assumption that the director node can forward traffic to the gateway nodes. CNIs can be broadly considered to fall into two categories: those that work using overlay networks, and those using direct routing.

Flannel (with the VXLAN backend), Weave, etc. use VXLAN to overlay pod-to-pod traffic on the underlay traffic, so the underlay network never sees the pod-to-pod or pod-to-node traffic.

The current implementation of kube-static-egress-ip works only with CNIs that support direct routing of the traffic from pods to a node. For example Calico, kube-router, and Flannel (host-gw backend) allow pod traffic to be sent as-is to a different node.

This issue is a placeholder to enhance kube-static-egress-ip to support static egress IP functionality even for CNIs that use overlay networking. Basically, kube-static-egress-ip should be able to steer traffic from the director node to the gateway node using an overlay network.

Traffic is not properly routed after configuring StaticEgressIP

I would like to use the static egress functionality.

CNI: calico

I installed the CRD, RBAC, gateway-manager and controller just like the readme described.

Test env, 2 ubuntu replicas along a headless service for discovery:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment-deployment
spec:
  selector:
    matchLabels:
      app: test-deployment
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
      - name: test-deployment
        image: ubuntu:bionic
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  clusterIP: None
  selector:
    app: test-deployment
  ports:
  - port: 80
    targetPort: 80

Afterwards, I configured the following StaticEgressIP:

apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: test-egress
spec:
  rules:
  - egressip: 51.15.136.12
    service-name: frontend
    cidr: 151.115.41.82/32

When the StaticEgressIP resource is in place, traffic no longer reaches the target machine; running traceroute shows:

root@test-deployment-deployment-7469d4b659-lqxcr:/# traceroute 151.115.41.82
traceroute to 151.115.41.82 (151.115.41.82), 30 hops max, 60 byte packets
 1  10.64.24.117 (10.64.24.117)  0.084 ms  0.112 ms  0.051 ms
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *

Without it, traceroute successfully reaches the target machine:

root@test-deployment-deployment-7469d4b659-lqxcr:/# traceroute 151.115.41.82
traceroute to 151.115.41.82 (151.115.41.82), 30 hops max, 60 byte packets
 1  10.64.24.117 (10.64.24.117)  0.143 ms  0.025 ms  0.022 ms
 2  10.64.24.116 (10.64.24.116)  0.699 ms  0.606 ms  0.561 ms
 3  10.66.0.1 (10.66.0.1)  1.004 ms  0.974 ms  0.949 ms
 4  * * *
 5  10.194.0.8 (10.194.0.8)  0.863 ms 10.194.0.10 (10.194.0.10)  0.918 ms 10.194.0.12 (10.194.0.12)  0.889 ms
 6  212.47.225.212 (212.47.225.212)  1.182 ms 212.47.225.242 (212.47.225.242)  0.995 ms 212.47.225.196 (212.47.225.196)  0.862 ms
 7  51.158.8.177 (51.158.8.177)  0.925 ms 51.158.8.181 (51.158.8.181)  1.260 ms 51.158.8.177 (51.158.8.177)  1.130 ms
 8  be4751.rcr21.b022890-0.par04.atlas.cogentco.com (149.6.164.41)  1.374 ms  1.363 ms be4752.rcr21.b039311-0.par04.atlas.cogentco.com (149.6.165.65)  1.331 ms
 9  * be3739.ccr31.par04.atlas.cogentco.com (154.54.60.185)  2.036 ms  2.006 ms
10  be2102.ccr41.par01.atlas.cogentco.com (154.54.61.17)  2.022 ms be3184.ccr42.par01.atlas.cogentco.com (154.54.38.157)  1.941 ms be2103.ccr42.par01.atlas.cogentco.com (154.54.61.21)  2.154 ms
11  be12266.ccr42.ams03.atlas.cogentco.com (154.54.56.173)  13.727 ms  13.694 ms  13.710 ms
12  be2815.ccr41.ham01.atlas.cogentco.com (154.54.38.206)  20.503 ms be2816.ccr42.ham01.atlas.cogentco.com (154.54.38.210)  20.788 ms be2815.ccr41.ham01.atlas.cogentco.com (154.54.38.206)  20.467 ms
13  be2483.ccr21.waw01.atlas.cogentco.com (130.117.51.61)  32.825 ms  32.705 ms  33.101 ms
14  be2486.rcr21.b016833-0.waw01.atlas.cogentco.com (154.54.37.42)  32.946 ms  33.318 ms  34.252 ms
15  be174.waw1dc1-net-bb02.scaleway.com (149.14.232.242)  34.141 ms  34.108 ms be174.waw1dc1-net-bb01.scaleway.com (149.14.232.234)  34.077 ms
16  151.115.2.9 (151.115.2.9)  33.449 ms 151.115.2.3 (151.115.2.3)  33.371 ms  33.488 ms
17  * * *
18  * * *
19  * * *
20  * * *
21  82-41-115-151.instances.scw.cloud (151.115.41.82)  33.627 ms  33.707 ms  33.599 ms

My kube-system looks like this:

NAME                                               READY   STATUS    RESTARTS   AGE   IP               NODE                                             NOMINATED NODE   READINESS GATES
calico-kube-controllers-7d7d7cdc47-8tkzx           1/1     Running   0          15h   100.64.186.0     scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
calico-node-jsl7j                                  1/1     Running   0          15h   10.70.118.71     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
calico-node-llqbb                                  1/1     Running   0          15h   10.73.152.13     scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
calico-node-s6ns8                                  1/1     Running   0          15h   10.64.24.117     scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
coredns-565d4499db-5ztj2                           1/1     Running   0          15h   100.64.185.192   scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
csi-node-5ch57                                     2/2     Running   0          15h   10.64.24.117     scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
csi-node-df82j                                     2/2     Running   0          15h   10.70.118.71     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
csi-node-msgn4                                     2/2     Running   0          15h   10.73.152.13     scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
konnectivity-agent-477pp                           1/1     Running   0          15h   10.64.24.117     scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
konnectivity-agent-9dxx5                           1/1     Running   0          15h   10.73.152.13     scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
konnectivity-agent-c85d5                           1/1     Running   0          15h   10.70.118.71     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
kube-proxy-cwz69                                   1/1     Running   0          15h   10.64.24.117     scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
kube-proxy-dzlxm                                   1/1     Running   0          15h   10.70.118.71     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
kube-proxy-gfxmg                                   1/1     Running   0          15h   10.73.152.13     scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
metrics-server-c6ffb4c7c-dhwgc                     1/1     Running   0          15h   100.64.185.193   scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
node-problem-detector-l276k                        1/1     Running   0          15h   100.64.185.195   scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
node-problem-detector-mwt74                        1/1     Running   0          15h   100.65.226.1     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
node-problem-detector-sfpb9                        1/1     Running   0          15h   100.64.46.193    scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
static-egressip-controller-59k2v                   1/1     Running   0          14h   10.64.24.117     scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
static-egressip-controller-7f7md                   1/1     Running   0          14h   10.70.118.71     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
static-egressip-controller-cgrcl                   1/1     Running   0          14h   10.73.152.13     scw-k8s-musing-lamport-default-8a4fd2fcd9a54fe   <none>           <none>
static-egressip-gateway-manager-56d44c7959-5f5cl   1/1     Running   0          14h   100.64.46.199    scw-k8s-musing-lamport-default-649f6dc7bc2c43d   <none>           <none>
static-egressip-gateway-manager-56d44c7959-fwx92   1/1     Running   0          14h   100.65.226.3     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>
static-egressip-gateway-manager-56d44c7959-glhpq   1/1     Running   0          14h   100.65.226.4     scw-k8s-musing-lamport-default-994c6503cacc4bc   <none>           <none>

Some logs from the controller:

...
I0331 10:02:38.819257       1 director.go:114] Created ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:02:38.823511       1 director.go:123] Added ips [100.64.46.198 100.65.226.2] to the ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:02:38.825738       1 director.go:139] iptables rule in mangle table PREROUTING chain to match src to ipset
I0331 10:02:38.835630       1 director.go:188] added routing entry in custom routing table to forward destinationIP to egressGateway
I0331 10:02:38.836271       1 controller.go:216] Successfully synced 'default/test-egress'
I0331 10:03:08.796713       1 controller.go:396] Updating StaticEgressIP: default/test-egress
I0331 10:03:08.801994       1 controller.go:250] Processing update to StaticEgressIP: default/test-egress
I0331 10:03:08.838108       1 director.go:114] Created ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:03:08.841882       1 director.go:123] Added ips [100.64.46.198 100.65.226.2] to the ipset name: EGRESS-IP-4WD4DQOP5IBSOYWC
I0331 10:03:08.845503       1 director.go:139] iptables rule in mangle table PREROUTING chain to match src to ipset
I0331 10:03:08.856632       1 director.go:188] added routing entry in custom routing table to forward destinationIP to egressGateway
I0331 10:03:08.856673       1 controller.go:216] Successfully synced 'default/test-egress'

Logs from the selected gateway-manager:

...
2021/03/31 10:04:42 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
2021/03/31 10:04:47 Current gateway node: scw-k8s-musing-lamport-default-994c6503cacc4bc is ready so keeping same node as gateway
2021/03/31 10:04:47 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
2021/03/31 10:04:52 Current gateway node: scw-k8s-musing-lamport-default-994c6503cacc4bc is ready so keeping same node as gateway
2021/03/31 10:04:52 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress
2021/03/31 10:04:57 Current gateway node: scw-k8s-musing-lamport-default-994c6503cacc4bc is ready so keeping same node as gateway
2021/03/31 10:04:57 Gateway: dabdf368-d079-4f50-a9e6-47e4a324d2c2 is choosen for static egress ip test-egress

Is something wrong with my configuration?

From the last part of the readme "operator has to manually choose a node to act of Gateway by annotating the node". Which annotation should be used on which node? Also what gateway Ip should be

I tried doing this without any success (traffic is still routed through 10.64.24.117):

kubectl annotate --overwrite node scw-k8s-musing-lamport-default-994c6503cacc4bc "nirmata.io/staticegressips-gateway=10.70.118.71"

Failed to add egress IP due to failed to find interface

Hi,

I followed this installation: https://github.com/nirmata/kube-static-egress-ip#installation and I get this error after applying my deployment (given at the end of this post).

I deploy only one static-egressip-controller pod (selected via nodeSelector) on the node with IP 10.205.14.166.

Here is the error:

I0219 16:55:17.086137       1 controller.go:233] Processing update to StaticEgressIP: default/egressip
I0219 16:55:17.092408       1 gateway.go:86] Created ipset name: EGRESS-IP-2XT2FC5FTMR7KU3B
I0219 16:55:17.095040       1 gateway.go:95] Added ips [10.233.71.11] to the ipset name: EGRESS-IP-2XT2FC5FTMR7KU3B
I0219 16:55:17.097205       1 gateway.go:108] Added rules in filter table FORWARD chain to permit traffic
E0219 16:55:17.099137       1 controller.go:314] Failed to add egress IP 10.205.14.166 for the staticegressip default/egressip on the gateway due to failed to find interface
I0219 16:55:17.099162       1 controller.go:199] Successfully synced 'default/egressip'

Can anyone help me? Am I missing something?

Thanks!!!

My deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: busybox
  name: busybox
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      nodeSelector:
        egress-busybox: "true"
      containers:
      - name: busybox
        image: busybox:latest
        args:
        - /bin/sh
        - -c
        - while (true); do date; wget <third_app_ip>; sleep 1; done;
---
apiVersion: v1
kind: Service
metadata:
  name: busybox
  namespace: default
spec:
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: busybox
---
apiVersion: staticegressips.nirmata.io/v1alpha1
kind: StaticEgressIP
metadata:
  name: egressip
  namespace: default
spec:
  rules:
  - egressip: 10.205.14.166
    service-name: busybox
    cidr: <third_app_ip>/32

Use overlay network to transfer traffic from director node to gateway node

We hit several roadblocks in trying to find a solution that works both with CNIs that do direct routing and CNIs that use overlay networks. Finding a solution that works across subnets was also challenging without using overlay/tunneling.

It seems a reasonable, CNI-agnostic solution is to use an overlay network to direct traffic from the director node to the gateway node, and the same overlay network to send the return traffic back to the director node. We get two advantages:

  • a solution that is agnostic of the CNI used
  • a solution that works across subnets or zones

The proposal is to revamp the project with an overlay network solution. The choice of overlay (VXLAN/IP-in-IP etc.) is yet to be decided. I will update this issue as progress is made and share the decisions. A rough sketch of the idea follows.
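
As a rough illustration of the direction (not a decided design), a point-to-point VXLAN tunnel between the director and the gateway could be created with plain iproute2, and the static egress route pointed through it; device names, VNI, and addresses below are made up for the example:

# on the director
ip link add egress-vxlan type vxlan id 4242 local <director-node-ip> remote <gateway-node-ip> dstport 4789
ip addr add 10.255.0.1/30 dev egress-vxlan
ip link set egress-vxlan up
ip route replace 4.2.2.2/32 via 10.255.0.2 table kube-static-egress-ip

# on the gateway: mirror the device with the peer address 10.255.0.2/30, then SNAT as today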
