
Comments (86)

rawdigits avatar rawdigits commented on July 24, 2024 18

Hey y'all! For anyone having hole punching issues, I'm trying to gauge how many of you allow unsolicited inbound UDP on the router doing the NAT. (Don't feel bad, I did too!) Please react to this message with a thumbs up if you allow all UDP into your router from the internet and a thumbs down if you don't.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024 16

I share the frustration of dealing with connections that are NAT'd on both sides. Folks could do IP routing or proxying via other nodes, but it defeats the simplicity that nebula brings, and is not a true solution.

Nebula was created as a server-to-server mesh network, but now that we have ported it to every platform (not all released yet, but it works on ios/android), we absolutely need to handle use cases that involve clients behind any kind of NAT or more complex networking scenario, and thus relaying is our only viable option.

Note: relay nodes can be any node on a network, and don't have to be devoted to relaying. The ones you choose to use as relays should, however, have a direct internet connection for them to be useful.

There is a bit of discussion happening in the nebulaoss slack group, but just to make it available here as well (my words reposted):

There are some NATs we just don’t handle well yet. I have a personal interest in doing relaying and am actively working on it again, so hopefully I'll have something to share soon.
The current experiments I’m doing involve allowing individual nodes to advertise relay node IPs/ports as a way to reach them, which would transparently work around NAT for any node that advertises itself as having a relay.

[...]

I’m envisioning it being a configuration option on nodes and clients, with two separate purposes.
On a relay node it would be something like am_relay: true to signal that the node allows other nodes to use it as a relay (more accurately, a bouncer).
On endpoints, especially behind NAT, there would be an option that looks similar to the lighthouse config, something like:

relays:
  {relay_nebula_ip}
  {relay2_nebula_ip}

from nebula.

windwalker78 avatar windwalker78 commented on July 24, 2024 12

I share the frustration of dealing with connections that are NAT'd on both sides. Folks could do IP routing or proxying via other nodes, but it defeats the simplicity that nebula brings, and is not a true solution.

Nebula was created as a server-to-server mesh network, but now that we have ported it to every platform (not all released yet, but it works on ios/android), we absolutely need to handle use cases that involve clients behind any kind of NAT or more complex networking scenario, and thus relaying is our only viable option.

Note: relay nodes can be any node on a network, and don't have to be devoted to relaying. The ones you choose to use as relays should, however, have a direct internet connection for them to be useful.

There is a bit of discussion happening in the nebulaoss slack group, but just to make it available here as well (my words reposted):

There are some NATs we just don’t handle well yet. I have a personal interest in doing relaying and am actively working on it again, so hopefully I'll have something to share soon.
The current experiments I’m doing involve allowing individual nodes to advertise relay node IPs/ports as a way to reach them, which would transparently work around NAT for any node that advertises itself as having a relay.

[...]

I’m envisioning it being a configuration option on nodes and clients, with two separate purposes.
On a relay node it would be something like am_relay: true to signal that the node allows other nodes to use it as a relay (more accurately, a bouncer).
On endpoints, especially behind NAT, there would be an option that looks similar to the lighthouse config, something like:

relays:
  {relay_nebula_ip}
  {relay2_nebula_ip}

This is a really cool feature and we need it. We have about 10 clients with 1 lighthouse, and sometimes some of the clients cannot talk to each other, which still forces us to use traditional VPN solutions. Is there any information on whether this will be implemented, and the hottest question - when?

from nebula.

brad-defined avatar brad-defined commented on July 24, 2024 11

Nebula 1.6.0 is released with a Relay feature, to cover cases like a Symmetric NAT.
#678

Check out the example config to see how to configure a Nebula node to act as a relay, and how to configure other nodes to identify which Relay can be used by peers for access.


(edit to provide some documentation of the feature)
In order to provide 100% connectivity between Nebula peers in all networks, you may now relay Nebula traffic through a third Nebula peer. I encourage everyone to try out this feature, and let us know how it goes! The config options are included in the Nebula example config:

# EXPERIMENTAL: relay support for networks that can't establish direct connections.
relay:
  # Relays are a list of Nebula IP's that peers can use to relay packets to me.
  # IPs in this list must have am_relay set to true in their configs, otherwise
  # they will reject relay requests.
  #relays:
    #- 192.168.100.1
    #- <other Nebula VPN IPs of hosts used as relays to access me>
  # Set am_relay to true to permit other hosts to list my IP in their relays config. Default false.
  am_relay: false
  # Set use_relays to false to prevent this instance from attempting to establish connections through relays.
  # default true
  use_relays: true

For most personal users of Nebula, the Lighthouse is the ideal relay. To use Relays on your network, do the following:

  • Install Nebula 1.6.0 on all Nebula hosts in your network
  • Edit the config.yml of your lighthouse and set relay.am_relay: true
  • Edit the config.yml of your Nebula peers to specify the lighthouse’s Nebula IP as a relay by setting relay.relays: [<lighthouse Nebula IP>].

Some rules around Relays:

  • Nebula will not act as a Relay unless configured to do so (relay.am_relay: true).
  • Relays do not have to be Lighthouses.
  • Like Lighthouses, Relay nodes should be deployed with a public internet IP and firewall rules that permit Nebula’s UDP traffic inbound (default UDP port 4242).
  • Nebula config identifies which hosts may be used by peers as Relays for connectivity (relay.relays: [ip, ip, ip]). Each of the IPs specified must have relay.am_relay: true set in its config. Note that you can specify more than one Relay, for high availability.
  • Nebula will not attempt to use a Relay to connect to a peer unless configured to do so (relay.use_relays: true)
  • You aren’t limited to using a single Relay in your network. Each Nebula node can specify its own list of Relays for access. For instance, if you have some Nebula hosts in a private AWS VPC, you can set up a Relay host dedicated to enable connectivity to the peers in that VPC.
  • You can't relay to a Relay. Meaning, hosts configured to act as a relay (with relay.am_relay: true set) may not specify other relays (relay.relays: ) to be used for access.
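
Putting the steps above together, here is a minimal sketch of the two relevant config fragments, assuming the lighthouse's Nebula IP is 192.168.100.1 as in the example config above:

# On the lighthouse (or any host with a direct internet connection) acting as the relay:
relay:
  am_relay: true

# On each peer behind NAT that wants to be reachable through that relay:
relay:
  relays:
    - 192.168.100.1   # Nebula IP of the relay host above
  am_relay: false
  use_relays: true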

from nebula.

iamid0 avatar iamid0 commented on July 24, 2024 7

My Config:
nebula-cert sign -name "lighthouse" -ip "192.168.100.1/24"
nebula-cert sign -name "laptop" -ip "192.168.100.101/24" -groups "laptop"
nebula-cert sign -name "server" -ip "192.168.100.201/24" -groups "server"

Lighthouse:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/lighthouse.crt
  key: /etc/nebula/lighthouse.key

static_host_map:
  "192.168.100.1": ["167.71.175.250:4242"]

lighthouse:
  am_lighthouse: true
  interval: 60

listen:
  host: 0.0.0.0
  port: 4242

punchy: true

tun:
  dev: nebula1
  mtu: 1300

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

Laptop:

pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/laptop.crt
  key: /etc/nebula/laptop.key

static_host_map:
  "192.168.100.1": ["167.71.175.250:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 0

punchy: true

tun:
  dev: nebula1
  mtu: 1300

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

Server:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/server.crt
  key: /etc/nebula/server.key

static_host_map:
  "192.168.100.1": ["167.71.175.250:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 0

punchy: true

tun:
  dev: nebula1
  mtu: 1300

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

With this setup, both the server and laptop can ping the lighthouse, and the lighthouse can ping the server and laptop, but the laptop cannot ping the server and the server cannot ping the laptop.

I get messages such as this as it's trying to make the connection:

INFO[0006] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201
INFO[0007] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="172.31.106.61:37058" vpnIp=192.168.100.201
INFO[0009] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201
INFO[0011] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="172.31.106.61:37058" vpnIp=192.168.100.201
INFO[0012] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201
INFO[0014] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="172.31.106.61:37058" vpnIp=192.168.100.201
INFO[0016] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201

I have got the same situation.
node_A <----> lighthouse OK
node_B <----> lighthouse OK
node_A <----> node_B: not working, they cannot ping each other.

But I found that node_A and node_B can communicate with each other ONLY if both are connected to the same router, such as the same WiFi router.

PS: punch_back: true is set on both node_A and node_B.

No firewall on node_A, node_B and lighthouse.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024 5

Aha, @nfam I think I spotted the config problem.

instead of

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
  - "LIGHTHOUSE_PUBLIC_IP"

it should be

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
  - "192.168.100.1"

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024 4

How's this for some thread necromancy?!

I believe #325 will solve a massive number of problems people have had with NAT/hole punching. It is a WIP, so if you depend on nebula within your environment, I'd encourage you to wait for an official point release (soon™).

My apologies for taking this long to discover that an optimization created long ago was actually the root cause of so many problems. As I've mentioned in the PR, I'll do a bigger writeup on the root cause soon.

from nebula.

sfxworks avatar sfxworks commented on July 24, 2024 4

Wow, I was running into this issue and this post appeared 15 hours ago! Thanks, nebula team.

Can confirm it works! Home PC to a remote server (192.168.32.4), with a remote node with a public IP acting as a lighthouse (192.168.32.1):

[root@sam-manjaro ~]# ping 192.168.32.1
PING 192.168.32.1 (192.168.32.1) 56(84) bytes of data.
64 bytes from 192.168.32.1: icmp_seq=1 ttl=64 time=22.2 ms
64 bytes from 192.168.32.1: icmp_seq=2 ttl=64 time=21.3 ms
^C
--- 192.168.32.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 21.310/21.742/22.174/0.432 ms
[root@sam-manjaro ~]# ping 192.168.32.4
PING 192.168.32.4 (192.168.32.4) 56(84) bytes of data.
64 bytes from 192.168.32.4: icmp_seq=1 ttl=64 time=334 ms
64 bytes from 192.168.32.4: icmp_seq=2 ttl=64 time=23.1 ms
64 bytes from 192.168.32.4: icmp_seq=3 ttl=64 time=22.0 ms
^C
--- 192.168.32.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 22.030/126.251/333.623/146.634 ms

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024 3

My Config:
nebula-cert sign -name "lighthouse" -ip "192.168.100.1/24"
nebula-cert sign -name "laptop" -ip "192.168.100.101/24" -groups "laptop"
nebula-cert sign -name "server" -ip "192.168.100.201/24" -groups "server"

Lighthouse:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/lighthouse.crt
  key: /etc/nebula/lighthouse.key

static_host_map:
  "192.168.100.1": ["167.71.175.250:4242"]

lighthouse:
  am_lighthouse: true
  interval: 60

listen:
  host: 0.0.0.0
  port: 4242

punchy: true

tun:
  dev: nebula1
  mtu: 1300

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

Laptop:

pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/laptop.crt
  key: /etc/nebula/laptop.key

static_host_map:
  "192.168.100.1": ["167.71.175.250:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 0

punchy: true

tun:
  dev: nebula1
  mtu: 1300

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

Server:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/server.crt
  key: /etc/nebula/server.key

static_host_map:
  "192.168.100.1": ["167.71.175.250:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 0

punchy: true

tun:
  dev: nebula1
  mtu: 1300

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

With this setup, both the server and laptop can ping the lighthouse, and the lighthouse can ping the server and laptop, but the laptop cannot ping the server and the server cannot ping the laptop.

I get messages such as this as it's trying to make the connection:

INFO[0006] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201
INFO[0007] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="172.31.106.61:37058" vpnIp=192.168.100.201
INFO[0009] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201
INFO[0011] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="172.31.106.61:37058" vpnIp=192.168.100.201
INFO[0012] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201
INFO[0014] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="172.31.106.61:37058" vpnIp=192.168.100.201
INFO[0016] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3339283633 remoteIndex=0 udpAddr="18.232.11.42:4726" vpnIp=192.168.100.201

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024 3

Even if you did allow unsolicited UDP, unless the host behind the NAT was 1:1 NAT'd the NAT device would drop the inbound UDP anyway since it has no idea what host to map it to.

Correct, but there is a key problem: It creates a conntrack entry that causes the host behind NAT to have its outbound port reassigned under certain circumstances. I'm going to write up a blog post or demo this somewhere soon, but it is easy to reproduce and causes an unresolvable race condition.

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024 2

@nfam similar setup. Public lighthouse on Digital Ocean, laptop on home NAT, and server in AWS behind a NAT. Local and AWS are using different private ranges (though overlap should be handled).

from nebula.

fireapp avatar fireapp commented on July 24, 2024 2

Hole punching is very difficult and random.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024 2

Sooo, it turns out our hole punching may have been too efficient and was triggering race conditions in various connection tracking implementations. We have now nerfed it (slowed it down slightly) and the problems I was having have mostly vanished. Once #210 is merged, I recommend building from source and testing on various NAT setups again, because I believe this exists in a lot of routers/etc and was causing problems for people.

from nebula.

numinit avatar numinit commented on July 24, 2024 2

Relays would be awesome. Is there some unsafe_routes hack we can try in the meantime?

from nebula.

numinit avatar numinit commented on July 24, 2024 2

FWIW, if you want a quick and dirty SOCKS5 proxy with NixOS on your lighthouse, you can use Dante:

  services.dante = {
    enable = true;
    config = ''
      internal: 10.99.0.1 port = 1080
      external: eth0
      clientmethod: none
      socksmethod: none
      client pass {
        from: 10.99.0.0/16 to: 0.0.0.0/0
        log: error # connect disconnect
      }
      socks pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bind connect udpassociate
        log: error # connect disconnect iooperation
      }
      socks pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bindreply udpreply
        log: error # connect disconnect iooperation
      }
    '';
  };

from nebula.

nfam avatar nfam commented on July 24, 2024 1

@rawdigits yes, it is. Now both laptops can ping each other.
Thanks!

from nebula.

spencerryan avatar spencerryan commented on July 24, 2024 1

I also can't get nebula to work properly when both nodes are behind a typical NAT (technically PAT), regardless of any port pinning I do in the config. They happily connect to the lighthouse I have in AWS, but it seems like something isn't working properly. I've got punchy and punch_back enabled on everything and it doesn't seem to help. I've tried setting the port on the nodes to 0, and also tried the same port that the lighthouse is listening on.

The nodes have no issues connecting to each other over the MPLS, but we don't want that (performance reasons)

Edit: To add a bit more detail, even Meraki's AutoVPN can't deal with this. In their situation the "hub" needs to be told its public IP and a fixed port that is open inbound. I'd be fine with that as an option, and it may be the only reliable one if both nodes are behind different NATs.

Another option I had considered, what if we could use the lighthouses to hairpin traffic? I'd much rather pay AWS for the bandwidth than have to deal with unfriendly NATs everywhere.

from nebula.

gebi avatar gebi commented on July 24, 2024 1

Thx for the feedback!
(I've put the whining at the end, sorry.)

Yes, ultimately relays are necessary, e.g. as Tailscale puts it:

https://github.com/tailscale/tailscale/blob/master/derp/derp.go#L9

// DERP is used by Tailscale nodes to proxy encrypted WireGuard
// packets through the Tailscale cloud servers when a direct path
// cannot be found or opened. DERP is a last resort. Both sides
// between very aggressive NATs, firewalls, no IPv6, etc? Well, DERP.

But relays should not be used unnecessarily, they are just a last resort.

STUN and ICE do a whole lot to get through NATs, but an additional idea would also be to use UPnP or NAT-PMP when configured.

<== snip

I really appreciate your honest answer, though I'm inclined to say that "There are some NATs we just don’t handle well yet" might not quite cut it. In my experience it's "not at all": our home servers were behind some consumer gear, but on every other network I tested as well - corporate, hackerspaces, ... - nothing worked except the connection to the lighthouse (so the connection should have been working in principle).

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024 1

Last issue first: I recommend trying a random port (instead of choosing a numbered port that is identical on every node) by using port: 0 in the config. That's how it is used in Kubernetes in a few places, to avoid reusing a single port number. This is also how I run it on devices behind NAT, to improve the chances they don't overlap and have to be reassigned a new NAT'd port. (Perhaps a thing for you to try as well.)

(TBH, I'd just set every non-lighthouse node on any nebula network to port: 0 unless you have a restrictive network.)
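
For reference, a minimal sketch of the listen block this refers to on a non-lighthouse node (the same block appears in the laptop and server configs posted earlier in this thread):

listen:
  host: 0.0.0.0
  port: 0   # 0 = pick a random source port instead of a fixed one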

I agree that STUN and ICE plus relaying is a good solution (really the only solution), but it would be useful to know whether Tailscale is successfully doing NAT traversal in places we don't, or whether they fail in the exact same situations and their relaying is what makes things work. I say this because either 1) they are falling back to relaying because their hole punching encounters the same issues as ours, or 2) their hole punching is succeeding where ours is failing in some cases.

If point 1 is true, then us doing relaying is the only thing to do.
If point 2 is true, we need to do relaying, but also need to look at hole punching code.

from nebula.

rakundig avatar rakundig commented on July 24, 2024 1

I downloaded the latest master, compiled it, and it worked. Only one test so far, so it needs validation and repeated tests, but it looks good so far.

Test scenario: (as-is)
V1.1.0
LH VM Ubuntu 18.04.4 (amd64)
Node A Ubuntu 18.04.4 on metal (arm64)
Node B MacOS on metal (10.14.6)
LH on public with UDP in 4242 allow
Node A behind consumer ATT DSL rtr in KC MO
Node B behind tethered iPhone
Node A ---> ping --> LH == OK
Node B ---> ping --> LH == OK
Node A --> ping --> Node B == Nope
Node B --> ping --> Node A == Nope

Test scenario: (new bins)
Create new LH VM with pub IP and UDP 4242 in allow
New LH VM Ubuntu 18.04.4 (also amd64)
Node A Ubuntu 18.04.4 on metal (same as above)
Node B MacOS on metal (same as above)
Compile new nebula code to create new bins for each
Create new CA
Create new config.yaml (test-config.yaml)
Create new signed certs for nodes and LH (test-*.crt/key)
Fire up new config and certs to use new LH on LH, Node A, Node B
Node A ---> ping --> LH == OK
Node B ---> ping --> LH == OK
Node A --> ping --> Node B == OK
Node B --> ping --> Node A == OK

Validate:
Stop all nebula services, all nodes
Restart with orig config and orig bins (v1.1.0)
Node A ---> ping --> LH == OK
Node B ---> ping --> LH == OK
Node A --> ping --> Node B == Nope
Node B --> ping --> Node A == Nope

So, it seems that the updates have resolved the issue/race condition preventing nodes from finding each other and punching through NAT. I have notified some of my team about my findings so they can validate more thoroughly.

ETA: In the "Validate" scenario I used the new bins on Node A and B, and v1.1.0 bin on LH and it didn't work.

Therefore, all nodes need new bins. Makes sense of course, but I am adding this comment to add that extra test detail for anyone else.

Not a perfect test, but good enough for this AM. As I said, needs more testing, but is looking good so far.

from nebula.

ironicbadger avatar ironicbadger commented on July 24, 2024 1

To add more to this issue: I performed some testing today and here are my findings. The architecture is as documented below:

[image: test architecture diagram]

Note that 10.10.10.1 and 10.10.10.4 were in different Linode regions and communicating via the Internet.

Working

  • The lighthouse can communicate with all nodes individually using nebula IPs
  • All nodes can communicate with the lighthouse using nebula IP
  • 10.10.10.4 can communicate with 10.10.10.1
  • Both nodes behind OPNsense NAT can communicate with 10.10.10.4

Not working

  • Both nodes behind OPNsense NAT cannot communicate with each other - we see a lot of noise in the logs about handshakes and note that the firewall is seemingly routing the connection correctly in its state tables. However, no communication between these nodes is successful - 0%.

===

edit: Turns out that OPNsense and pfSense firewalls rewrite the source ports of all outgoing UDP packets. Here's how to get around that - https://blog.ktz.me/punching-through-nat-with-nebula-mesh/


from nebula.

glugy avatar glugy commented on July 24, 2024 1

I have a lighthouse in Digital Ocean and two version 1.5 clients behind OPNSense on separate networks. I configured the static source ports for outbound NAT in OPNsense as suggested in the previous post. I also needed to set port: 0 on both non-lighthouse nodes. With both changes, everything began to work properly.

Last issue first: I recommend trying a random port (instead of choosing a numbered port that is identical on every node) by using port: 0 in the config. That's how it is used in Kubernetes in a few places, to avoid reusing a single port number. This is also how I run it on devices behind NAT, to improve the chances they don't overlap and have to be reassigned a new NAT'd port. (Perhaps a thing for you to try as well.)

(TBH, I'd just set every non-lighthouse node on any nebula network to port: 0 unless you have a restrictive network.)

from nebula.

tarrenj avatar tarrenj commented on July 24, 2024 1

Glad you've got a workable solution!

I'm hearing from a lot of people that the NAT punching isn't as successful as I think the Nebula devs had expected. I remember reading a comment in an older issue/PR thread from one of them about being disappointed that many users aren't able to use IPv6, since it doesn't have any of these NAT issues. I really hope they'll add some support for partial mesh implementations soon, and update the readme to explain that it currently only supports 100% full mesh deployments.

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

Also, to note: in this setup all nodes are behind different NATs on different networks. It's hub and spoke, with the hub being the lighthouse and the spokes going to hosts on different networks.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

My best guess (because I just messed this up in a live demo) is that am_lighthouse may be set to "true" on the individual nodes.

Either way, can you post your lighthouse config and one of your node configs?

(feel free to replace any sensitive IP/config bits, just put consistent placeholders in their place)

from nebula.

nfam avatar nfam commented on July 24, 2024

Hi, I have the same issue. My lighthouse is on a DigitalOcean droplet with a public IP. My MacBook and Linux laptop at home are on the same network, both connected to the lighthouse. I can ping the lighthouse from both laptops, but I cannot ping from one laptop to the other.

Lighthouse config

pki:
  ca: /data/cert/nebula/ca.crt
  cert: /data/cert/nebula/lighthouse.crt
  key: /data/cert/nebula/lighthouse.key
static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_PUBLIC_IP:4242"]
lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:
listen:
  host: 0.0.0.0
  port: 4242
punchy: true
tun:
  dev: neb0
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
logging:
  level: info
  format: text
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 443
      proto: tcp
      groups:
        - laptop

Macbook config

pki:
  ca: /Volumes/code/cert/nebula/ca.crt
  cert: /Volumes/code/cert/nebula/mba.crt
  key: /Volumes/code/cert/nebula/mba.key
static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_PUBLIC_IP:4242"]
lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
  - "LIGHTHOUSE_PUBLIC_IP"
punchy: true
tun:
  dev: neb0
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
logging:
  level: debug
  format: text
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 443
      proto: tcp
      groups:
        - laptop

Linux laptop config

pki:
  ca: /data/cert/nebula/ca.crt
  cert: /data/cert/nebula/server.crt
  key: /data/cert/nebula/server.key
static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_PUBLIC_IP:4242"]
lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
  - "LIGHTHOUSE_PUBLIC_IP"
punchy: true
listen:
  host: 0.0.0.0
  port: 4242
tun:
  dev: neb0
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
logging:
  level: info
  format: text
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 443
      proto: tcp
      groups:
        - laptop

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

@nfam thanks for sharing the config. My next best guess is that the NAT isn't reflecting (hairpinning) and for some reason the nodes also aren't finding each other locally.

Try setting the local_range config setting on the two laptops, which can give them a hint about the local network range to use for establishing the direct tunnel.
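
As a sketch, and assuming a hypothetical home LAN of 192.168.1.0/24 (substitute your actual subnet), that hint is a single top-level option:

# hypothetical local subnet, used only as an example
local_range: "192.168.1.0/24"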

from nebula.

nfam avatar nfam commented on July 24, 2024

@rawdigits setting local_range does not help.
I stopped nebula on both laptops, set the lighthouse log level to debug, cleared the log, and restarted the lighthouse (with no nodes connected). The following is the log I got.

time="2019-11-23T20:05:18Z" level=info msg="Main HostMap created" network=192.168.100.1/24 preferredRanges="[]"
time="2019-11-23T20:05:18Z" level=info msg="UDP hole punching enabled"
time="2019-11-23T20:05:18Z" level=info msg="Nebula interface is active" build=1.0.0 interface=neb0 network=192.168.100.1/24
time="2019-11-23T20:05:18Z" level=debug msg="Error while validating outbound packet: packet is not ipv4, type: 6" packet="[96 0 0 0 0 8 58 255 254 128 0 0 0 0 0 0 183 226 137 252 10 196 21 15 255 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2 133 0 27 133 0 0 0 0]"

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

@nfam similar error, not sure it's the problem

Error while validating outbound packet: packet is not ipv4, type: 6 packet="[96 0 0 0 0 8 58 255 254 128 0 0 0 0 0 0 139 176 20 9 146 65 14 250 255 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2 133 0 60 66 0 0 0 0]"
DEBU[0066] Error while validating outbound packet: packet is not ipv4, type: 6 packet="[96 0 0 0 0 8 58 255 254 128 0 0 0 0 0 0 139 176 20 9 146 65 14 250 255 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2 133 0 60 66 0 0 0 0]"

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

@jatsrt

The "Error while validating outbound packet" messages can mostly be ignored. They are just packet types nebula doesn't support passing along (IPv6, in this case).

As far as the handshakes, for some reason hole punching isn't working. A few things to try:

  1. Add punch_back: true on the "server" and "laptop" nodes.
  2. explicitly allow all UDP in to the "server" node from the internet (via AWS security groups, just as a test)
  3. verify iptables isn't blocking anything.
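
Regarding item 1: in the config format used at the time of this thread, these were top-level booleans (the same keys that appear elsewhere in this thread; later releases express the same thing as punchy.punch and punchy.respond):

punchy: true
punch_back: true   # ask the other side to punch back toward us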

Also, it appears the logs with the handshake messages are from the laptop? If so, can you also share nebula logs from the server as it tries to reach the laptop?

Thanks!

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

Adding #40 to cover the accidental misconfiguration noted above.

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

@rawdigits

  1. added punch_back: true on "server" and "laptop"
  2. security group for that node is currently wide open for all protocols
  3. No iptables on any of these nodes, base ubuntu server for testing

Server log:

time="2019-11-24T00:25:21Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:22Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:22Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:23Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:24Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="192.168.0.22:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:25Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:26Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="192.168.0.22:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:27Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:28Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="192.168.0.22:51176" vpnIp=192.168.100.101
time="2019-11-24T00:25:30Z" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1689969496 remoteIndex=0 udpAddr="96.252.12.10:51176" vpnIp=192.168.100.101

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

So, I tried a few more setups. It seems to come down to this: if the two hosts that are trying to communicate with each other are on different networks and both behind NAT, it will not work.
If the lighthouse does not facilitate the communication/tunneling, this would make sense, but is it meant to be a limitation?

from nebula.

nbrownus avatar nbrownus commented on July 24, 2024

The dual-NAT scenario is a bit tricky; there is possibly room for improvement from nebula's perspective there. Do you have details on the type of NATs you are dealing with?

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

@nbrownus nothing crazy. I've tried multiple AWS VPC NAT gateways with hosts behind them, and they cannot connect. I've also tried "home" NAT (a Google WiFi router-based NAT), with no success.

From a networking perspective, I get why it's "tricky"; I was hoping there was some trick nebula was doing.

from nebula.

nbrownus avatar nbrownus commented on July 24, 2024

@rawdigits can speak to the punching better than I can. If you are having problems in AWS then we can get a test running and sort out the issues.

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

Yeah, so all my tests have had at least one host behind an AWS NAT Gateway

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

Long shot, but one more thing to try until I set up an AWS NAT GW:
set the UDP port on all nodes to 4242 and let NAT remap it. One ISP I've dealt with blocks random ephemeral UDP ports above 32,000, presumably because they think every high UDP port is BitTorrent.

Probably won't work, but it's easy to test.
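
For reference, that is the same listen block the lighthouse configs above already use, applied to every node:

listen:
  host: 0.0.0.0
  port: 4242   # same fixed port on every node; the NAT may still remap it externally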

from nebula.

jatsrt avatar jatsrt commented on July 24, 2024

@rawdigits same issue

Network combination:
Lighthouse - Digital Ocean NYC3 - Public IP
Server - AWS - Oregon - Private VPC with AWS NAT Gateway (172.31.0.0/16)
Laptop - Verizon FIOS With Google WIFI Router NAT (192.168.1.0/24)
Server2(added later to test) - AWS - Ohio Private VPC with AWS NAT Gateway (10.200.200.0/24)

I added in a second server in a different VPC on AWS to remove the FIOS variable, and had the same results, with server and server2 trying to communicate

INFO[0065] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=760525141 remoteIndex=0 udpAddr="172.31.106.61:4242" vpnIp=192.168.100.201
INFO[0066] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=760525141 remoteIndex=0 udpAddr="18.232.11.42:42005" vpnIp=192.168.100.201
INFO[0067] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=760525141 remoteIndex=0 udpAddr="172.31.106.61:4242" vpnIp=192.168.100.201
INFO[0069] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=760525141 remoteIndex=0 udpAddr="18.232.11.42:42005" vpnIp=192.168.100.201
INFO[0071] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=760525141 remoteIndex=0 udpAddr="172.31.106.61:4242" vpnIp=192.168.100.201
INFO[0072] Handshake message sent                        handshake="map[stage:1 style:ix_psk0]" initiatorIndex=760525141 remoteIndex=0 udpAddr="18.232.11.42:42005" vpnIp=192.168.100.201

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

@jatsrt I'll stand up a testbed this week to explore what may be the cause of the issue. Thanks!

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

I did a bit more research, and it appears that the AWS NAT Gateway uses symmetric NAT, which isn't friendly to hole punching of any kind. NAT gateways also don't appear to support any type of port forwarding, so fixing this by statically assigning and forwarding a port doesn't appear to be an option.

A NAT instance would probably work, but I realize that's probably not a great option. One thing I recommend considering would be to give instances a routable IP address, but disallow all inbound traffic. This wouldn't greatly change the security of your network, since you still aren't allowing any unsolicited packets to reach the hosts, but would allow hole punching to work properly.

from nebula.

spencerryan avatar spencerryan commented on July 24, 2024

I don't think NAT as such is the issue so much as PAT (port address translation). Unfortunately, with PAT you can't predict what your public port will be, and hole punching becomes impossible if both ends are behind a similar PAT. I'm going to do some testing, but I think that as long as one of the two nodes has a 1:1 NAT (no port translation), a public IP directly on the node isn't a concern.

If I get particularly ambitious I may attempt to whip up some code in lighthouse to detect when one/both nodes are behind a PAT and throw a warning saying that this won't work out of the box.

from nebula.

wadey avatar wadey commented on July 24, 2024

If I get particularly ambitious I may attempt to whip up some code in lighthouse to detect when one/both nodes are behind a PAT and throw a warning saying that this won't work out of the box

I've thought about this before. You need at least two lighthouses, and I think it's best to implement it as a flag on the non-lighthouses (when you query the lighthouses for a host, if you get results with the same IP but different ports, then you know the remote is problematic).

from nebula.

spencerryan avatar spencerryan commented on July 24, 2024

I haven't dug into the handshake code but if you include the source port in the handshake the lighthouse can compare that to what it sees. If they differ you know something in the middle is doing port translation.

from nebula.

jocull avatar jocull commented on July 24, 2024

Aha, @nfam I think I spotted the config problem.

instead of

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
  - "LIGHTHOUSE_PUBLIC_IP"

it should be

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
  - "192.168.100.1"

I bet this is also my issue... will test it soon. That section is confusing 😕

from nebula.

jocull avatar jocull commented on July 24, 2024

That was not a fix - I had it configured like this already. After more testing, I think what I have is a hole punching issue with my NAT.

  • The lighthouse is a DigitalOcean droplet with a public IP and open port 4242 via UFW. This seems fine.
  • My laptop is behind a regular consumer Netgear router with whatever NAT that has.
  • Even with punchy and punch_back enabled I can't connect. I can see both the laptop and the lighthouse trying to handshake with each other endlessly. It seems like they are trying to punch back to each other and failing.
  • If I open firewall port 4242 to my laptop's internal IP, things start to work fine. But that kind of defeats the purpose of trying to use this in the first place.

from nebula.

zfwjs avatar zfwjs commented on July 24, 2024

I had a similar issue with a DO lighthouse and two Windows PCs on the same LAN.

I could ping between the lighthouse and the PCs, but not between the PCs.

Adding a Windows Defender firewall rule worked for me as well, even though there were already rules added by nebula.

I didn't add a port rule; instead I added a custom rule allowing the network 192.168.100.0/24. I'm using 0 for the port on the nodes.

from nebula.

gebi avatar gebi commented on July 24, 2024

We had similar problems getting nebula to work.
It seems nebula just can't work with "normal" consumer setups (both sides behind NAT).

It's not only me but also three colleagues who have tried it without success.
The common error pattern was that all boxes could reach the lighthouse via nebula, but unless they were on the same network, NO nebula node was able to reach any other nebula node (except the lighthouse).
I've tested it for over two weeks from various networks with my laptop and could not get a connection to any nebula node other than the lighthouse working a single time.

Maybe it would be a good idea to adapt the readme to say that nebula is more for the server use case, because for consumers it seems not to work for the main use case.

Btw, I ran into an interesting problem with nebula: most of the machines nebula runs on share the same local network (e.g. a Docker or k8s network), which also shows up in the lighthouse tables, and since nebula runs on the host there is also a nebula instance listening there - just the wrong one (it ends up talking to itself).
With the config problems mentioned in this thread, which I also debugged through, I just can't say whether this was related to the initial connection problems.

from nebula.

spencerryan avatar spencerryan commented on July 24, 2024

I would agree that while some NAT combos are nearly impossible, there are many situations that should work but do not. Cisco has figured it out with Meraki's AutoVPN.

I do think having the relay as an option is a good thing, but it shouldn't be necessary in some configs. Per my comment above, the lighthouses should be able to detect if PAT is in use. If it is, you can still make it work without a relay as long as one of the two ends of a connection is not using PAT (NAT is fine). If both ends are using PAT, a relay will be required.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

Unfortunately I don't have any stats or insight into how well NAT is working across the userbase. The only thing I can say with confidence is that I'm successfully using it myself to connect from home to devices at various locations around the world that are behind NATs themselves, but I'm sure we can do better.

I was going to bring up UPnP/NAT-PMP in my original reply but decided against it. Since you've mentioned it, my thoughts for now are: we should do that too, but the number of people who will benefit from relaying is much higher than the number who will benefit from router-allowed NAT traversal at the moment. It certainly has the upshot of making direct, non-relayed tunnels, so it is also worth doing, but I'd like to have relays done first.

Out of curiosity have you tested software that uses STUN/ICE/RFCn on those networks where nebula doesn't create a tunnel? I would love to debug why ours wouldn't work if another would, but I don't have any good test setups to reproduce these issues at the moment. I'd also be happy to replicate your setup hardware/software-wise so I can find what we're doing incorrectly with hole punching, if other solutions are doing it without issue.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

To clarify the above: I totally believe it isn't working for some folks in situations where it should. I just don't have detail on their setups yet, so I haven't been able to replicate and find a root cause.

from nebula.

gebi avatar gebi commented on July 24, 2024

IMHO, currently the best example of NAT traversal is Tailscale; they use a combination of STUN and ICE together with their encrypted relay (DERP).

Awesome... I will re-do the nebula setup, get everything up and running again, and help you debug if you want :).
Even though we are currently in quarantine, I'm sure I will be able to reproduce the case where it doesn't work between two nodes.

Btw, one additional nice feature of a relay would be possible support for an HTTP proxy (as many corporations still use a proxy for internet access).

PS: should I create an issue for the IP collision problem I found, where the Docker network is present on multiple nebula nodes and nebula ends up listening on the "same" address on each? I've partly "fixed" it through firewall rules and different nebula ports on each node, which might be an uncommon config for newcomers.

from nebula.

breisig avatar breisig commented on July 24, 2024

That's one major reason why I stuck with Tinc VPN. It's a mesh VPN that will route traffic through other Tinc nodes if it can't do so directly. Once Nebula has that feature, I would switch completely.

from nebula.

rawdigits avatar rawdigits commented on July 24, 2024

@breisig I used Tinc for many years and still think it is great. It definitely inspired some of Nebula. Now that I'm a full time indoors person, I'm typing code as fast as I can, so we'll have something to test soon. :)

from nebula.

breisig avatar breisig commented on July 24, 2024

@rawdigits Once you have something ready for testing that would allow Nebula to route traffic through other nodes [like Tinc], please let me know. I would be willing to test. I would drop Tinc right away for Nebula.

from nebula.

gebi avatar gebi commented on July 24, 2024

Awesome, I'll also test as soon as we are allowed to go out again.

Btw, as it now seems viable to use nebula, I've polished up my Debian package building and sent a pull request :) #211

from nebula.

rakundig avatar rakundig commented on July 24, 2024

Update: I haven't had time to test further. However, I wanted to point out that my laptop behind an iPhone tether isn't behind symmetric NAT, so that little test doesn't prove or disprove whether the fix defeats that issue. That said, it is an overall improvement, as it did improve the scenario described above.

Good luck, and I hope that further testing proves this is a fix and moves the whole project forward.

@gebi why can't you test without "going out?"

from nebula.

mismacku avatar mismacku commented on July 24, 2024

First off... Nebula is awesome and I appreciate having the privilege of using it. Thank you!

I'm experiencing this same issue using v1.2 and built from source commit 363c836, but would like to note that I don't see the same issue on OS X.

  • lighthouse running on Linode
  • linux server A behind Google WiFi NAT
  • MacBook behind Google WiFi Nat
  • linux server B behind unknown NAT

Connectivity:

  • lighthouse can reach all machines
  • all machines can reach lighthouse
  • server A <-> Macbook = OK (same LAN)
  • server B <-> Macbook = OK (not on the same LAN)
  • server A <-> server B = FAIL (not on the same LAN)

When I ping server A <-> server B nebula logs nebula[31848]: time="2020-04-21T15:33:48-07:00" level=info msg="Handshake message sent" endlessly, but traffic never arrives.
I've turned numerous knobs, but can't seem to get it to work. Any help is appreciated!

Macbook config:

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "172.16.0.1"

listen:
  host: 0.0.0.0
  port: 0

punchy:
  punch: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: debug
  format: text

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

    - port: 443
      proto: tcp
      groups:
        - laptop
        - home

Server A and server B config:

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "172.16.0.1"

  local_allow_list:

listen:
  host: 10.137.124.217
  port: 0

punchy:
  punch: true
  respond: true
  delay: 1s

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: debug
  format: text

handshakes:
  try_interval: 100ms
  retries: 20
  wait_rotation: 5

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

lighthouse config:

lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true
  respond: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: debug
  format: text

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

from nebula.

iamid0 avatar iamid0 commented on July 24, 2024

First off... Nebula is awesome and I appreciate having the privilege of using it. Thank you!

I'm experiencing this same issue using v1.2 and built from source commit 363c836, but would like to note that I don't see the same issue on OS X.

  • lighthouse running on Linode
  • linux server A behind Google WiFi NAT
  • MacBook behind Google WiFi Nat
  • linux server B behind unknown NAT

Connectivity:

  • lighthouse can reach all machines
  • all machines can reach lighthouse
  • server A <-> Macbook = OK (same LAN)
  • server B <-> Macbook = OK (not on the same LAN)
  • server A <-> server B = FAIL (not on the same LAN)

When I ping server A <-> server B nebula logs nebula[31848]: time="2020-04-21T15:33:48-07:00" level=info msg="Handshake message sent" endlessly, but traffic never arrives.
I've turned numerous knobs, but can't seem to get it to work. Any help is appreciated!

Macbook config:

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "172.16.0.1"

listen:
  host: 0.0.0.0
  port: 0

punchy:
  punch: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: debug
  format: text

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

    - port: 443
      proto: tcp
      groups:
        - laptop
        - home

Server A and server B config:

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "172.16.0.1"

  local_allow_list:

listen:
  host: 10.137.124.217
  port: 0

punchy:
  punch: true
  respond: true
  delay: 1s

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: debug
  format: text

handshakes:
  try_interval: 100ms
  retries: 20
  wait_rotation: 5

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

lighthouse config:

lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true
  respond: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: debug
  format: text

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

I have a similar issue. Nebula may fail if there is a NAT, or multiple NATs, to be punched through.

from nebula.

hasturo avatar hasturo commented on July 24, 2024

Hi,

I'm trying to find out if there is any support for routing traffic between nodes which can't reach each other through the lighthouse host. From my understanding, a gateway doing port-based address translation only ever has working session table entries that reflect the communication between the Nebula host and a lighthouse.

Even if you're able to fake a session between both gateways by using the lighthouse for signaling, you don't get the right source/destination ports, simply because they are random.

-hasturo

from nebula.

butterl avatar butterl commented on July 24, 2024

This also happens to me with the newest release (v1.2.0) and the example config (I also tried setting respond: true).

Node A ---> ping --> LH == OK
Node B ---> ping --> LH == OK
Node A --> ping --> Node B == Nope
Node B --> ping --> Node A == Nope

I used a Raspberry Pi as Node B, and Node A is in my office. Node A <-> Node B works only when I connect the Pi to the office network (behind the same router).

When I bring the Pi home, the issue happens. Is there any available resolution to this, or do I just need to build from source?

from nebula.

cellinix3m avatar cellinix3m commented on July 24, 2024

I have the same problem on my end. I have two networks, home and work (mostly for testing until I can get it running reliably); I have full control over both firewalls, and both run FreeBSD with PF as the firewall.

I even stripped the pf config down to the absolute minimum without any result. By the way, I am running the latest version, built for Windows and FreeBSD.

I can connect on the same network, but as soon as I am at home or tethering with my Android phone, I get no response from any machine behind NAT.

from nebula.

spencerryan avatar spencerryan commented on July 24, 2024

Hey y'all! For anyone having hole punching issues, I'm trying to gauge how many of you allow unsolicited inbound UDP on the router doing the NAT. (Don't feel bad, I did too!) Please react to this message with a thumbs up if you allow all UDP into your router from the internet and a thumbs down if you don't.

Even if you did allow unsolicited UDP, unless the host behind the NAT was 1:1 NAT'd the NAT device would drop the inbound UDP anyway since it has no idea what host to map it to.

from nebula.

sndyuk avatar sndyuk commented on July 24, 2024

@rawdigits I'm facing an issue related to NAT. Do you have any idea why it happens? Let me know if you need more details.

OK case:

Each machine can connect to the others.
list-hostmap responses:

192.168.102.16: [106.72.131.222:27898]
192.168.102.31: [1.115.43.191:17000 1.115.43.191:1024]
192.168.102.40: [1.115.43.191:1027]

*1.115.43.191 = NAT(EC2 instance, it's NOT a NAT gateway)
*192.168.102.31 and 192.168.102.40 = EC2 instances behind the NAT and same AWS local network as NAT.
*192.168.102.16 = My host machine at home network
*The lighthouse is located at the same local network as NAT
*The public addresses are not the real ones.

NG case:

Machine 192.168.102.16 cannot connect to 192.168.102.40 and/or* 192.168.102.31, and vice versa. (*Sometimes it cannot connect to one of them, sometimes to both.)
In this case, the list-hostmap response looks like:

192.168.102.16: [106.72.131.222:27898]
192.168.102.31: [1.115.43.191:17000 1.115.43.191:1024]
192.168.102.40: [1.115.43.191:1024]

In this case, 192.168.102.31 and 192.168.102.40 are mapped to the same address:port.
I'm not sure whether this always happens when the issue occurs; I'm just guessing it's the cause.

from nebula.

tarrenj avatar tarrenj commented on July 24, 2024

Relays would be awesome. Is there some unsafe_routes hack we can try in the meantime?

I've been doing some testing with this. I've had decent success running a VPN (with client-to-client enabled) on the lighthouse and connecting each node. They attempt to build a direct p2p connection between themselves (preferred_ranges) and use the VPN connection as a sort of "backup". This is very manual and does not support changing "client networks" or moving routes at all.

from nebula.

numinit avatar numinit commented on July 24, 2024

Yeah, I'm currently probably going to get around this by configuring routing with iptables on my lighthouse and using unsafe_routes. Or something like that. There's some significant value in being able to route through a lighthouse when you're on a network you can't totally control (in-flight wifi, for instance). Some NAT situations are just too bad; this is the only thing really preventing me from totally switching from Tinc to Nebula.

from nebula.

tarrenj avatar tarrenj commented on July 24, 2024

I'm in the same boat as you. Integration with some dynamic routing protocol is the only thing preventing me from fully adopting this. Honestly, even just relaying through the lighthouse would get me most of the way there. Nebula could add a high metric route to all nodes via the lighthouse and enable forwarding on the lighthouse node...

from nebula.

stilsch avatar stilsch commented on July 24, 2024

I have a lighthouse in Digital Ocean and two version 1.5 clients behind OPNSense on separate networks. I configured the static source ports for outbound NAT in OPNsense as suggested in the previous post. I also needed to set port: 0 on both non-lighthouse nodes. With both changes, everything began to work properly.

Last issue first: I recommend trying a random port (instead of choosing a numbered port that is identical on every node) by using port: 0 in the config. That's how it is used in Kubernetes in a few places, to avoid reusing a single port number. This is also how I run it on devices behind NAT, to improve the chances they don't overlap and have to be reassigned a new NAT'd port (perhaps a thing for you to try as well).
(TBH, I'd just make every non-lighthouse node on any nebula network port: 0 unless you have a restrictive network)

Having the same issue, I also tried setting the static outbound port on OPNsense and setting the non-lighthouse nodes to port: 0 - without luck. :/

from nebula.

tarrenj avatar tarrenj commented on July 24, 2024

@schuft69 Are your nodes able to connect to the lighthouse? If so, you may just need to statically set a port for each extra node and then open those up on OPNsense.

from nebula.

stilsch avatar stilsch commented on July 24, 2024

@schuft69 Are your nodes able to connect to the lighthouse? If so, you may just need to statically set a port for each extra node and then open those up on OPNsense.

-> setting everything up with static ports + dyndns is working quite well. <-

I was hoping to get rid of static ports with Nebula (which I currently need with WireGuard). The hole punching (from a lighthouse on a 1€ droplet at strato.de) is working neither on the FritzBox (where I have devices at my parents' home) nor on my home network (OPNsense - maybe because disabling UDP port rewriting, as described earlier, isn't working somehow; I'll have to ask the OPNsense community).
So the main benefit is that I no longer need to bother with iptables (to secure the endpoints) as I do with WireGuard - which at least is also a win.

from nebula.

tcurdt avatar tcurdt commented on July 24, 2024

In this workshop video it sounds like NAT-to-NAT traversal is supported, but here it sounds like NAT is still as messy as it has always been. What's the status of UPnP/NAT-PMP support?

from nebula.

tcurdt avatar tcurdt commented on July 24, 2024

I am really confused about what Nebula does and does not support with regard to NAT traversal. From reading the comments, it sounds like this:

Let's say I have a lighthouse and two networks behind NATs.

[network diagram]

I assume:

  • the nebula instance on machine1 allows network participants to reach printer1 (unsafe_routes)
  • the nebula instance on machine2 allows network participants to reach printer2 (unsafe_routes)
  • machine1 can reach machine2 (because port forwarding to machine2 is setup for NAT2)
  • machine2 can reach machine1 only with port forwarding also setup for NAT1
  • machine1 can reach printer 2 (because port forwarding to machine2 is setup for NAT2)
  • machine2 can reach printer 1 only with port forwarding also setup for NAT1
  • forwarding the port to a single nebula instance will make all nebula instances behind the NAT accessible
  • nebula does not support UPnP/NAT-PMP and requires manual port forwarding
  • dyndns is not required as the lighthouse knows about the external IPs of the NATs

Are these assumption correct?

from nebula.

tarrenj avatar tarrenj commented on July 24, 2024
  • the nebula instance on machine1 allows network participants to reach printer1 (unsafe_routes)

Sort of, but not really. Machine1 needs the network the printer is on specified within its cert (--subnets argument), and the unsafe_routes entry needs to be made at every OTHER nebula instance that you want to be able to connect to printer1 (machine2 and lighthouse).

  • machine1 can reach machine2 (because port forwarding to machine2 is setup for NAT2)
  • machine2 can reach machine1 only with port forwarding also setup for NAT1

Yes, that should be the case

  • forwarding the port to a single nebula instance will make all nebula instances behind the NAT accessible

No, Nebula does not have a "proxy", "routing" or "connection hopping" mechanism built in.

  • nebula does not support UPnP/NAT-PMP and requires manual port forwarding

PMP is not supported (but there's a PR to add it!) and UPnP should work.

  • dyndns is not required as the lighthouse knows about the external IPs of the NATs

Correct

from nebula.

tcurdt avatar tcurdt commented on July 24, 2024

Thanks for the help, @tarrenj

the nebula instance on machine1 allows network participants to reach printer1 (unsafe_routes)

Sort of, but not really. Machine1 needs the network the printer is on specified within its cert (--subnets argument), and the unsafe_routes entry needs to be made at every OTHER nebula instance that you want to be able to connect to printer1 (machine2 and lighthouse).

So with --subnets I'd pass in the network that is behind NAT2 for the cert of machine1.
So one would have to re-generate a cert to give access to another network.

Where I am still a bit lost is the "OTHER".
Why would the lighthouse reach the printer with unsafe_routes,
but machine1 needs to have the network as part of the cert?

And machine2 should have a local LAN connection to the printer on another interface.
Shouldn't it be able to reach the printer even without unsafe_routes?

forwarding the port to a single nebula instance will make all nebula instances behind the NAT accessible

No, Nebula does not have a "proxy", "routing" or "connection hopping" mechanism built in.

So that means I would have to open a port to every nebula instance?!

nebula does not support UPnP/NAT-PMP and requires manual port forwarding

PMP is not supported (but there's a PR to add it!) and UPnP should work.

Found it! #148

from nebula.

tarrenj avatar tarrenj commented on July 24, 2024

So with --subnets I'd pass in the network that is behind NAT2 for the cert of machine1. So one would have to re-generate a cert to give access to another network.

No, the opposite. You'd use the network printer1 is on when creating the cert for machine1, and the network printer2 is on when creating the cert for machine2. Certs are all about trust. When you generate a cert for machine1 with subnet n specified, you then have to have it signed by the CA (which all other nodes trust). This effectively tells all other nodes: "According to the CA (which you already trust), machine1 is allowed to relay traffic to network n." Generating and signing the machine1 cert with the --subnets n argument basically grants machine1 "permission" to route traffic to that unsafe network.
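
As a concrete sketch of the above (the names, Nebula IPs, and subnets below are hypothetical placeholders, not values from this thread):

# On the CA host: sign machine1's cert with the subnet printer1 lives on
nebula-cert sign -name "machine1" -ip "192.168.100.1/24" -subnets "10.10.1.0/24"

# On machine2 (and any other node that should reach printer1), config.yml:
tun:
  unsafe_routes:
    - route: 10.10.1.0/24      # printer1's LAN
      via: 192.168.100.1       # machine1's Nebula IP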

Where I am still a bit lost is the "OTHER". Why would the lighthouse reach the printer with unsafe_routes, but machine1 needs to have the network as part of the cert?

And machine2 should have a local LAN connection to the printer on another interface. Shouldn't it be able to reach the printer even without unsafe_routes?

Adding an unsafe_route entry to the lighthouse is only required if the lighthouse needs to access the unsafe network.

The unsafe_route entries tell the local node to accept incoming traffic destined for network n and send it to machine1 via the overlay network. Again this goes back to trust: doing it the other way around (configuring unsafe_routes on the node that's doing the routing - the way I believe you expected it to work) would mean that my local node's configuration is changed based on the actions of a remote node admin. What would prevent them from simply saying "Get to the WAN through me!" and then MITMing all traffic from all nodes?

So that means I would have to open a port to every nebula instance?!

Nebula assumes that each node is able to establish a direct connection with each other node (using NAT hole punching through UPnP). Machine1 would not be able to access machine2 by connecting "through" the lighthouse, in your above example.

from nebula.

tcurdt avatar tcurdt commented on July 24, 2024

So to summarise: the machine1 cert would be signed for the network behind NAT1, and the machine2 cert for the network behind NAT2 - that defines their trust relationship as "exit node" into the LAN. And specifying the unsafe_route defines the routability of the traffic through the overlay. The unsafe_route part I will figure out myself - I don't want to hijack the issue for these details.

I guess the really important information in the context of this issue is that every Nebula instance must be directly reachable through the NAT - i.e., it requires a punched/forwarded port. I didn't expect that. Thanks for clearing this up!

from nebula.

shantivana avatar shantivana commented on July 24, 2024

Thanks for this conversation. I might have found another way after a bunch of trial and error. I had two laptops inside my regular home network, behind a NAT, that could not connect to a server on another network, also behind a NAT. The server network had a lighthouse with a perimeter firewall rule, as a lighthouse should; the laptops could ping the lighthouse, but they could not reach the server endpoint.

Solution/workaround that worked for me: on the laptops, create an entry in the config.yml for the server endpoint as though it were a lighthouse (even though it's not actually a lighthouse), alongside the lighthouse "hosts" entry in the config.yml. Put in the external network IP and port for the other endpoint, even if there is no perimeter firewall rule for it. In my case the lighthouse uses port 4242 and the server endpoint used a different port (not sure if that is necessary).
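
A sketch of what such an entry could look like, assuming what is meant is a static_host_map mapping plus an extra lighthouse hosts entry (the Nebula IPs, external IPs, and ports below are placeholders, not the poster's actual values):

static_host_map:
  "192.168.100.1": ["203.0.113.10:4242"]   # the real lighthouse
  "192.168.100.20": ["203.0.113.10:4343"]  # the server endpoint, listed as if it were a lighthouse

lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
    - "192.168.100.20"                     # server endpoint added here as well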

Note: I did NOT need to do anything with the unsafe_route option discussed here. I was also unable to make progress with the OPNsense-style outbound NAT rule; I did try to create that rule (since one of my firewalls is pfSense), but ended up removing it.

My theory is that it works because the server endpoint's external network has an actual lighthouse, so the laptop client knows how to reach that network: the laptop client associates the server's Nebula IP address with the external IP of the lighthouse on the same network. The actual lighthouse knows how to get to the server endpoint and provides the path once the Nebula connection is established.

Note: It's possible that some of my other troubleshooting left a temporary route in place, but I don't think so: removing the workaround entry from config.yml made it stop working again, so it's reproducible. Good luck!

from nebula.

sfxworks avatar sfxworks commented on July 24, 2024

Something I've noticed using this: I've had to lower the MTU a bit from the original 1300. Not sure if it's because the other end of mine is doing 1:1 NAT or if it's relay related. Other than that, no problems.

╭─ ~ ▓▒░──────────────────────────────────────────────────░▒▓ ✔  at 21:20:39 ─╮
╰─ ping 192.168.32.4 -s 1216                                                    ─╯
PING 192.168.32.4 (192.168.32.4) 1216(1244) bytes of data.
1224 bytes from 192.168.32.4: icmp_seq=1 ttl=64 time=26.6 ms
1224 bytes from 192.168.32.4: icmp_seq=2 ttl=64 time=26.0 ms
1224 bytes from 192.168.32.4: icmp_seq=3 ttl=64 time=25.2 ms
^C
--- 192.168.32.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 25.186/25.913/26.592/0.575 ms

╭─ ~ ▓▒░──────────────────────────────────────────────────░▒▓ ✔  at 21:20:43 ─╮
╰─ ping 192.168.32.4 -s 1217                                                    ─╯
PING 192.168.32.4 (192.168.32.4) 1217(1245) bytes of data.
^C
--- 192.168.32.4 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2039ms

from nebula.

brad-defined avatar brad-defined commented on July 24, 2024

@sfxworks thanks for the feedback! You're spot on - when relaying, Nebula sticks additional headers onto the packets, which will impact the MTU.

from nebula.

noseshimself avatar noseshimself commented on July 24, 2024

Nebula 1.6.0 is released with a Relay feature, to cover cases like a Symmetric NAT.

There is still a special case needing attention (or yet another type of node) that I can't get my head wrapped around: gateways between two or more meshes. A server with several instances of Nebula running on different addresses and/or ports could act as a relay node between them, permitting segmentation between the equivalent of VLANs. But that would probably also require a separate DNS service that can be shared among the meshes.

from nebula.

sfxworks avatar sfxworks commented on July 24, 2024

Something I noticed when dealing with more MTU issues:

  mtu: 1200
  # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
  routes:
    #- mtu: 8800
    #  route: 10.0.0.0/16
  # Unsafe routes allows you to route traffic over nebula to non-nebula nodes
  # Unsafe routes should be avoided unless you have hosts/services that cannot run nebula
  # NOTE: The nebula certificate of the "via" node *MUST* have the "route" defined as a subnet in its certificate
  # `mtu` will default to tun mtu if this option is not specified
  # `metric` will default to 0 if this option is not specified
  unsafe_routes:
   - route: 192.168.8.0/23
     via: 192.168.32.5
     mtu: 1300

Even with one unsafe route specified as 1300, the entire tunnel was configured to be 1300 instead of 1200. So while I could reach my unsafe route area,

14: nebula1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1200 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 192.168.32.6/19 scope global nebula1
       valid_lft forever preferred_lft forever

I could not reach my office server, which required an MTU of 1200, even though the default was set to 1200.
Removing that field or setting it to 1200 worked fine. I didn't test a lower value for that route, since that wasn't needed.
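
For reference, the variant that worked here would look roughly like this (same addresses as the snippet above, with the per-route mtu override removed so the route inherits the tun-level mtu: 1200):

  mtu: 1200
  unsafe_routes:
   - route: 192.168.8.0/23
     via: 192.168.32.5
     # no per-route mtu; it defaults to the tun mtu (1200)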

from nebula.

sfxworks avatar sfxworks commented on July 24, 2024

Wait no, that's not the issue I was having. Logs from home PC:

sg="Attempt to relay through hosts" relayIps="[192.168.32.1 192.167.32.7]" vpnIp=192.168.32.4
sg="Re-send CreateRelay request" relay=192.168.32.1 vpnIp=192.168.32.4
sg="Establish tunnel to relay target." error="unable to find host" relay=192.167.32.7 vpnIp=192.168.32.4
sg=handleCreateRelayResponse hostInfo=192.168.32.1 initiatorIdx=3445190669 relayFrom=192.168.32.6 relayTarget=192.168.32.4 responderIdx=4235128187
sg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1462476829 udpAddrs="[10.0.0.231:4242 192.168.67.32:4242]" vpnIp=192.168.32.4
sg="Attempt to relay through hosts" relayIps="[192.168.32.1 192.167.32.7]" vpnIp=192.168.32.4
sg="Send handshake via relay" relay=192.168.32.1 vpnIp=192.168.32.4
sg="Establish tunnel to relay target." error="unable to find host" relay=192.167.32.7 vpnIp=192.168.32.4
sg="Handshake message received" certName=office-server-1 durationNs=424617429 fingerprint=4cdb758bbdf9130f18d0be3994fd79c0966ddc9d5364ec63d1afc5c24a6f74df handshake="map[stage>
sg="Tunnel status" certName=office-server-1 tunnelCheck="map[method:active state:dead]" vpnIp=192.168.32.4

I can ping 192.168.32.7 from my Home PC just fine, so I'm not sure why I'm getting "unable to find host".

This occurs on occasion with my config. I have two relay hosts based on two lighthouses with public IPs. These also act as routers for their respective zones.

So I have

Home PC (192.168.32.6) and Office Server (192.168.32.4)

relay:
  relays:
    - 192.168.32.1
    - 192.168.32.7
  am_relay: false
use_relays: true

With Home Router (192.168.32.7)

relay:
  relays:
   - 192.168.32.1
  am_relay: true
use_relays: true

And Office Router (192.168.32.1)

relay:
  relays:
    - 192.168.32.7
  am_relay: true
use_relays: true

The thing is, after either a systemctl restart and/or some time, the issue resolves itself and I can reach my office server again. It's intermittent. Is one of my relays just bad, or is this somehow the wrong way to set this up?


Edit:
I added a third node outside of the other two networks. They seem to work OK like that. This third node is in another network that I wanted to add soon anyway, so I'm worried I may run into the same problem.

Edit 2:
I'm just now seeing "You can't relay to a Relay", so I wonder if this is related.
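
If the "You can't relay to a Relay" message is the culprit, one variant to try (an untested sketch inferred only from that log line, not an official recommendation) is to stop the relay nodes from pointing at each other as relays, e.g. on both routers:

relay:
  am_relay: true
  use_relays: false   # relay nodes should not themselves be reached via another relay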

from nebula.

brad-defined avatar brad-defined commented on July 24, 2024

@sfxworks unable to find host is a misleading message - it means that the Relay doesn't have a direct connection to that host at that time.
When that happens, the Relay will attempt to establish a direct connection to the target host. If the Relay receives another CreateRelayRequest message after it's successfully established a direct connection to the target host, it'll be able to successfully complete the relay connection.
When tunneling through a Relay, Nebula will include an extra header (16 bytes) and an extra AEAD signature (16 bytes). So Relays add 32 bytes in total to your existing Nebula traffic.
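
A practical consequence (an inference from the 32-byte figure above, not maintainer advice): on nodes whose traffic may be relayed, it can help to shave at least 32 bytes off the tun MTU, for example:

tun:
  mtu: 1268   # the 1300 default minus the 32 bytes of relay overhead described above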

from nebula.

brad-defined avatar brad-defined commented on July 24, 2024

@noseshimself I think the Nebula way to join two Nebula networks together is to run multiple instances of Nebula on every host that belongs to both networks, rather than running a single gateway host that joins the networks.

With direct connections between the peers, you get all the identity fidelity and corresponding firewall rules. If hosts are joined by an intermediary, their identity is lost - you will only have the identity of the gateway host, not the identity of the peer.

That being said, I think the existing unsafe routes feature would accomplish what you described. (It's called unsafe due to the loss of identity information of the connection.)
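
A minimal sketch of the multiple-instances approach described above (the config paths, tun device names, and ports are placeholders, not project defaults): each mesh gets its own config with a distinct tun.dev and listen.port, and both daemons run side by side on the host.

# /etc/nebula/meshA.yml -> tun.dev: nebulaA, listen.port: 4242
# /etc/nebula/meshB.yml -> tun.dev: nebulaB, listen.port: 4243
nebula -config /etc/nebula/meshA.yml &
nebula -config /etc/nebula/meshB.yml &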

from nebula.

noseshimself avatar noseshimself commented on July 24, 2024

I think the Nebula way to join two Nebula networks together is to run multiple instances of Nebula on all hosts joined to both networks, rather than on one Gateway host to join the networks.

I prefer doing packet filtering on dedicated systems. Imagine a set of server systems that are supposed to be reachable by both "accounting" and "thieves", where I don't want the thieves to be able to access the systems in the accounting network, and where I don't trust the administrators of the servers either (but I do trust the networking staff, since they are under my control). I could of course trust the Nebula certificates to take care of that, but I don't know whether $asshole-from-thieves would install a modified client that removes the restriction.

from nebula.

johnmaguire avatar johnmaguire commented on July 24, 2024

Hi all! There's a lot of questions, answers, and information in this thread, but it's gotten a bit hard to follow.

We believe that the relay feature should be sufficient for most tricky NAT scenarios. As such, I'm going to close this issue out as solved. If you're continuing to experience connectivity issues, please feel free to open up a new issue or join us on Slack. Thanks!

from nebula.
