Comments (13)

rchenzheng commented on June 15, 2024

Any news on fixing this?

danderson commented on June 15, 2024

Good question! The initial prototype used GoBGP, but I kept running into issues and headaches with it. The API surface is completely undocumented (both the Go API and the gRPC API), and it's completely un-idiomatic Go (frankly, it feels like Python that was ported word for word into Go). So when I (frequently) encountered strange behaviors, it was a chore to debug.

Aside from that, GoBGP also implements way more than MetalLB needs. It wants to be a real router, implementing the full BGP convergence algorithm and all that. That behavior is actively harmful to what MetalLB wants to do (which is just push a couple of routes and ignore everything the peer sends). It's possible to make it work by segmenting things into VRFs, adding policies to drop all inbound routes, ... But at that point I'm writing significant amounts of code, against an undocumented library that implements 10x what we need, to trick it into not doing 99% of what it tries to do by default. That's a lot of additional complexity just to serialize and transmit an update message.
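
For a sense of scale, here is roughly what "serialize and transmit an update message" amounts to on the wire: a minimal Go sketch of a single-prefix eBGP UPDATE (RFC 4271 layout, 2-byte ASN, IPv4 only, no MP-BGP; an iBGP session would also need LOCAL_PREF). This is an illustration, not MetalLB's actual code:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"net"
)

// buildUpdate serializes a minimal BGP UPDATE announcing prefix/32 with the
// given next hop and a one-hop AS_PATH (RFC 4271 wire format, 2-byte ASN).
func buildUpdate(asn uint16, nextHop, prefix net.IP) []byte {
	var attrs bytes.Buffer
	attrs.Write([]byte{0x40, 1, 1, 0})    // ORIGIN: well-known transitive, value 0 = IGP
	attrs.Write([]byte{0x40, 2, 4, 2, 1}) // AS_PATH: one AS_SEQUENCE holding one ASN...
	binary.Write(&attrs, binary.BigEndian, asn)
	attrs.Write([]byte{0x40, 3, 4}) // NEXT_HOP: 4-byte IPv4 address
	attrs.Write(nextHop.To4())

	var body bytes.Buffer
	binary.Write(&body, binary.BigEndian, uint16(0)) // no withdrawn routes
	binary.Write(&body, binary.BigEndian, uint16(attrs.Len()))
	body.Write(attrs.Bytes())
	body.Write(append([]byte{32}, prefix.To4()...)) // NLRI: one /32 prefix

	// 19-byte header: 16-byte all-ones marker, 2-byte length, type 2 = UPDATE.
	msg := bytes.Repeat([]byte{0xff}, 16)
	msg = binary.BigEndian.AppendUint16(msg, uint16(19+body.Len()))
	msg = append(msg, 2)
	return append(msg, body.Bytes()...)
}

func main() {
	fmt.Printf("% x\n", buildUpdate(64512, net.ParseIP("10.0.0.2"), net.ParseIP("192.0.2.10")))
}
```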

With that said, there are obviously downsides as well, and the big one is that OSRG has the money to buy vendor gear and do interop testing against their quirky BGP stacks, and I don't.

What version of MetalLB did you use for your PoC? Yesterday I updated the manifests to point to a version that should resolve the main source of unhappy peers (not advertising MP-BGP extensions), so I'm curious if this makes JunOS happy as well. If it doesn't, any chance you could file a bug with a pcap of the BGP exchange?

danderson commented on June 15, 2024

You're correct about the additional behavior of the Cluster policy. Courtesy of kube-proxy, it will do "perfect" even distribution across all service pods, at the cost of rewriting the source IP when it NATs.

As for the coupling of speaker and node... I agree, it's a tradeoff. I picked the one I picked mostly arbitrarily, based on past experiences at Google where this behavior was beneficial. But other options are just as valid, as illustrated in #148 for example - in that bug, the person effectively wants the BGP state to reflect what k8s thinks at all times, regardless of the health of individual speaker pods. It's a different set of tradeoffs, in that some failure modes become better and others become worse.

The good news is, once we get add-path, we suddenly have more choices as to which behavior we want... although I worry about offering too many configuration options; I already have a hard time testing all the combinations sufficiently :)

halfa commented on June 15, 2024

Open question: why are you re-implementing a BGP library specifically for MetalLB instead of using something like GoBGP? I've done a PoC running it against JunOS routers and encountered issues with the capabilities advertisement.

pdf commented on June 15, 2024

Correct me if I'm wrong, but without add-path support, aren't we in a situation where there is actually no load-balancing done when externalTrafficPolicy: Local is specified? Because MetalLB only announces a single next-hop, and expects k8s to perform the actual load-balancing, whereas k8s expects load-balancers to distribute traffic across service group members, especially with externalTrafficPolicy: Local?

danderson commented on June 15, 2024

That's not accurate. Without add-path, a router receiving multiple advertisements can still choose to do multipath routing across all peers. Each path is distinct in terms of BGP's decision algorithm, because each one comes from a different router ID.

You still need a router capable of doing multipath (~all of the professional ones, and most of the entry-level ones), and you have to explicitly enable that behavior (otherwise it will just pick one of the paths and only use that one - this is true even with add-path enabled).

The issue we have without add-path is that the traffic distribution is on a per-node basis, not a per-pod basis. So if you have multiple pods for a service on a single node, it can lead to traffic imbalances (e.g. with an even two-way ECMP split, a lone pod on one node receives half the traffic while three pods sharing the other node get a sixth each), as explained in more detail at https://metallb.universe.tf/usage/#local-traffic-policy.

I should add, and maybe this is the source of confusion: in BGP mode, each speaker is only advertising itself as a nexthop, it's not attempting to advertise the routes on behalf of other machines. So, from the external router's perspective, it sees N routers, each advertising routes that point to themselves - not N routers all advertising the exact same thing.

pdf commented on June 15, 2024

> I should add, and maybe this is the source of confusion: in BGP mode, each speaker is only advertising itself as a nexthop, it's not attempting to advertise the routes on behalf of other machines.

Indeed, this contributes to my confusion - what happens in the case where you have externalTrafficPolicy: Local services to route that exist on nodes that aren't running MetalLB speakers?

danderson commented on June 15, 2024

Those pods/nodes won't get any traffic. When using the Local traffic policy, each node is responsible for attracting traffic to itself, if it contains pods for the relevant service.

In fact, this is almost exactly the same as the Cluster traffic policy behavior: each node is responsible for attracting its own traffic; the only difference is that in Cluster mode the attraction is unconditional, while in Local mode there must be at least 1 service pod on the machine.
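
That rule is small enough to sketch in code. A minimal Go illustration with hypothetical names (not MetalLB's actual types or logic):

```go
package main

import "fmt"

// shouldAnnounce captures the attraction rule described above: with the
// Cluster policy every node announces the service IP unconditionally; with
// Local, only nodes currently running at least one ready pod of the service
// announce it. Hypothetical helper for illustration, not MetalLB's code.
func shouldAnnounce(policy string, localReadyPods int) bool {
	if policy == "Cluster" {
		return true
	}
	return policy == "Local" && localReadyPods > 0
}

func main() {
	fmt.Println(shouldAnnounce("Cluster", 0)) // true: attract unconditionally
	fmt.Println(shouldAnnounce("Local", 0))   // false: withdraw the route
	fmt.Println(shouldAnnounce("Local", 2))   // true: at least one local pod
}
```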

Once we have add-path, I guess we could revisit that topology. However, there is one big benefit to having each node be the only one responsible for attracting its own traffic: if the node fails, the BGP session breaks immediately, and traffic stops flowing to the broken node without having to wait for k8s to converge and notice that the pods are unhealthy. This greatly speeds up draining bad nodes (~0s vs. tens of seconds for a gray failure like power loss).

pdf commented on June 15, 2024

> Those pods/nodes won't get any traffic. When using the Local traffic policy, each node is responsible for attracting traffic to itself, if it contains pods for the relevant service.

Right, makes sense now that I understand how this is designed to work (and indeed, I hadn't enabled multipath on the upstream routers in my lab so I was only seeing a single route, derp).

> In fact, this is almost exactly the same as the Cluster traffic policy behavior: each node is responsible for attracting its own traffic; the only difference is that in Cluster mode the attraction is unconditional, while in Local mode there must be at least 1 service pod on the machine.

The other difference, I believe, between Local and Cluster is that Cluster traffic will still reach a node that can handle it even if that node is not running a speaker, since k8s will NAT the traffic to get it there.

> there is one big benefit to having each node be the only one responsible for attracting its own traffic: if the node fails, the BGP session breaks immediately

I can see the logic in this approach for fault detection, vs. waiting for k8s to detect problems, though I do wonder if this coupling is always ideal.

Thanks kindly for the explanations, everything makes perfect sense now.

bjornryden commented on June 15, 2024

If I understand this correctly, I see a problem with add-path with regard to how many routes go into the receiving routers. At a very quick glance, the limit on ECMP routes per destination ranges from 16 (older custom silicon from Cisco/Juniper) to 1024 (Trident 2+).
Cisco and Juniper both implement a weighted ECMP solution based on BGP communities, which would reduce the number of routes needed to one per node (with active pods) rather than one per pod.
Juniper implementation:
https://www.juniper.net/documentation/en_US/junos/topics/example/bgp-multipath-unequal.html
Cisco implementation:
https://supportforums.cisco.com/t5/service-providers-documents/asr9000-xr-understanding-unequal-cost-multipath-ucmp-dmz-link/ta-p/3138853
IETF Draft:
https://tools.ietf.org/html/draft-ietf-idr-link-bandwidth-07
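
For reference, the encoding that draft describes is just an 8-byte extended community. A hedged Go sketch of building it (field layout per draft-ietf-idr-link-bandwidth; the ASN and the pods-to-bandwidth mapping here are made-up illustrations):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// linkBandwidthCommunity builds the 8-byte BGP extended community from
// draft-ietf-idr-link-bandwidth: type 0x40 (non-transitive), sub-type 0x04,
// then the 2-byte AS number and the bandwidth as an IEEE 754 float in
// bytes per second.
func linkBandwidthCommunity(asn uint16, bytesPerSec float32) [8]byte {
	var ec [8]byte
	ec[0], ec[1] = 0x40, 0x04
	binary.BigEndian.PutUint16(ec[2:4], asn)
	binary.BigEndian.PutUint32(ec[4:8], math.Float32bits(bytesPerSec))
	return ec
}

func main() {
	// A speaker could set the "bandwidth" proportional to its local pod
	// count, so a weighted-ECMP-capable router splits traffic per pod.
	for _, pods := range []float32{1, 3} {
		fmt.Printf("% x\n", linkBandwidthCommunity(64512, pods*1e6))
	}
}
```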

danderson commented on June 15, 2024

Yup, ECMP group size is a concern. The 100% fix for that is WCMP (weighted-cost multipath), which the silicon typically supports but the upper-layer protocols don't really expose.

Thanks for the reference on Cisco and Juniper's approach to this! The draft raises some questions for me, so I'll have to consult with a couple of networking friends... but it seems plausible to implement.

Regarding ECMP group size: IIRC, Broadcom Scorpion, the predecessor of Trident, already supported 128-way ECMP, and later generations of silicon go up to 512- or even 1024-way ECMP. So, for enterprises using even semi-modern hardware, the group size shouldn't be a huge issue. Homelabbers and people on lower-tier hardware may indeed have issues, I agree. There's not much I can do about that, except offer the option of community+bandwidth balancing for people who have compatible hardware.

CJP10 commented on June 15, 2024

I am curious whether removing additional paths that were added by add-path has any significant downside.

Assuming it doesn't, even capping the paths at 16 by default to support older hardware would greatly improve load distribution.

My initial thought is to have the node with the most pods allocate 16 paths and the node with the fewest allocate a single path. All the other nodes fall somewhere in between, based on where their pod counts sit between those two extremes.

Repeat this process whenever the distribution of pods changes, and this should give some sort of "weighted" routing.
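
A minimal Go sketch of that interpolation (hypothetical function, capped at 16 paths; not anything MetalLB implements):

```go
package main

import (
	"fmt"
	"math"
)

// pathsForNode linearly interpolates a node's add-path count between 1 and
// maxPaths based on where its pod count sits between the smallest and
// largest per-node pod counts. Hypothetical sketch of the idea above.
func pathsForNode(pods, minPods, maxPods, maxPaths int) int {
	if maxPods == minPods {
		return 1 // even distribution: one path per node suffices
	}
	frac := float64(pods-minPods) / float64(maxPods-minPods)
	return 1 + int(math.Round(frac*float64(maxPaths-1)))
}

func main() {
	// Nodes running 1, 4 and 8 pods of the same service:
	for _, pods := range []int{1, 4, 8} {
		fmt.Printf("%d pods -> %d paths\n", pods, pathsForNode(pods, 1, 8, 16))
	}
}
```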

Jean-Daniel commented on June 15, 2024

Now that MetalLB has switched to FRR, this could be implemented using weighted ECMP. It's a use case FRR explicitly supports:

http://docs.frrouting.org/en/stable-7.5/bgp.html#weighted-ecmp-using-bgp-link-bandwidth

> One of the primary use cases of this capability is in the data center when a service (represented by its anycast IP) has an unequal set of resources across the regions (e.g., PODs) of the data center
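
Conceptually, a receiver honoring link-bandwidth communities turns the advertised values into next-hop weights. A hedged Go sketch of that normalization, as an illustration of the idea rather than FRR's actual algorithm:

```go
package main

import "fmt"

// weightsFromBandwidth normalizes advertised link-bandwidth values into
// integer next-hop weights, scaling the largest path to maxWeight.
// Conceptual sketch only; FRR's real weighted-ECMP code differs in detail.
func weightsFromBandwidth(bw []float64, maxWeight int) []int {
	w := make([]int, len(bw))
	var max float64
	for _, b := range bw {
		if b > max {
			max = b
		}
	}
	if max == 0 {
		return w // no bandwidth info: caller should fall back to plain ECMP
	}
	for i, b := range bw {
		w[i] = int(b/max*float64(maxWeight) + 0.5)
		if w[i] < 1 {
			w[i] = 1 // never zero out a path that is still advertised
		}
	}
	return w
}

func main() {
	// Three nodes advertising bandwidths proportional to 1, 3 and 4 pods:
	fmt.Println(weightsFromBandwidth([]float64{1e6, 3e6, 4e6}, 100)) // [25 75 100]
}
```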
