Comments (25)
Hi All,
This is almost always a NAT or two that aren't playing nice. Sorry for the delay replying!
A week ago, when the core devs had a meeting, we spec'd out the way we are going to do relays, at the protocol level. While this wasn't a design goal of Nebula, there is enough need in the community that it is worth doing.
from nebula.
Same issue here.
I've got one lighthouse node with a public IP address and firewall port 4242 open for UDP and TCP;
one laptop at home behind NAT; and one laptop at the office (also behind NAT). The connection status is:
lighthouse <-both way connected-> home laptop
lighthouse <-both way connected-> office laptop
home laptop <-no connection-> office laptop
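The hole punching discussed throughout this thread hinges on the punchy settings being enabled on every node. The configs in this thread show both syntaxes; as a sketch, the flat booleans are the 1.0-era form and the nested block is the form used by later releases:

```yaml
# Flat syntax (Nebula ~1.0, as used in several configs below):
punchy: true
punch_back: true

# Nested syntax (later releases):
punchy:
  punch: true
  respond: true
  delay: 1s
```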
Sure
```
$ cat config.yml | sed '/#/d' | sed -r '/^\s*$/d'
pki:
  ca: ./ca.crt
  cert: ./ROOT.crt
  key: ./ROOT.key
static_host_map:
  "10.1.0.1": ["IP:4545"]
lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:
listen:
  host: 0.0.0.0
  port: 4545
punchy: true
punch_back: true
tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
logging:
  level: info
  format: text
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home
```
So I've been slowly expanding out my nebula network to machines, and haven't had many issues. I invited a friend in to my network and generated some certs for 4 of his machines.
So far he has 3 running, and the results are very odd. 2 of the machines are at his house, and 1 machine is at his work. 1 machine at his house is windows, 1 machine is ubuntu, and the machine at his work is also Windows.
The machine I'm currently on is a Windows machine at my house.
This Windows machine at my house is able to ping and connect to his Ubuntu machine at his house, but not to the Windows machines at his house or at his work. He can ping his house Ubuntu machine from work, but not the house Windows machine.
Here are the logs from the lighthouse.
128.0.3.2 is the windows machine at his house
128.0.3.3 is the ubuntu machine at his house
128.0.3.4 is the windows machine at his work
```
terry@cloudlink:/var/log$ cat syslog | grep nebula | grep 128.0.3.2
Jan 14 02:50:12 cloudlink nebula[29594]: time="2020-01-14T02:50:12Z" level=info msg="Handshake message received" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=681713942 remoteIndex=0 responderIndex=0 udpAddr="104.XXX.XXX.84:55939" vpnIp=128.0.3.2
Jan 14 02:50:12 cloudlink nebula[29594]: time="2020-01-14T02:50:12Z" level=info msg="Handshake message sent" handshake="map[stage:2 style:ix_psk0]" initiatorIndex=681713942 remoteIndex=0 responderIndex=3524367104 udpAddr="104.XXX.XXX.84:55939" vpnIp=128.0.3.2
Jan 14 03:10:57 cloudlink nebula[29594]: time="2020-01-14T03:10:57Z" level=info msg="Close tunnel received, tearing down." udpAddr="104.XXX.XXX.84:55939" vpnIp=128.0.3.2
Jan 14 03:11:09 cloudlink nebula[29594]: time="2020-01-14T03:11:09Z" level=info msg="Handshake message received" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3980494719 remoteIndex=0 responderIndex=0 udpAddr="104.XXX.XXX.84:58401" vpnIp=128.0.3.2
Jan 14 03:11:09 cloudlink nebula[29594]: time="2020-01-14T03:11:09Z" level=info msg="Handshake message sent" handshake="map[stage:2 style:ix_psk0]" initiatorIndex=3980494719 remoteIndex=0 responderIndex=3416636504 udpAddr="104.XXX.XXX.84:58401" vpnIp=128.0.3.2
terry@cloudlink:/var/log$ cat syslog | grep nebula | grep 128.0.3.3
Jan 14 02:50:13 cloudlink nebula[29594]: time="2020-01-14T02:50:13Z" level=info msg="Handshake message received" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=725317506 remoteIndex=0 responderIndex=0 udpAddr="104.XXX.XXX.84:4242" vpnIp=128.0.3.3
Jan 14 02:50:13 cloudlink nebula[29594]: time="2020-01-14T02:50:13Z" level=info msg="Handshake message sent" handshake="map[stage:2 style:ix_psk0]" initiatorIndex=725317506 remoteIndex=0 responderIndex=1799649400 udpAddr="104.XXX.XXX.84:4242" vpnIp=128.0.3.3
terry@cloudlink:/var/log$ cat syslog | grep nebula | grep 128.0.3.4
Jan 14 02:50:37 cloudlink nebula[29594]: time="2020-01-14T02:50:37Z" level=info msg="Handshake message received" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=156029206 remoteIndex=0 responderIndex=0 udpAddr="198.XX.XXX.42:60679" vpnIp=128.0.3.4
Jan 14 02:50:37 cloudlink nebula[29594]: time="2020-01-14T02:50:37Z" level=info msg="Handshake message sent" handshake="map[stage:2 style:ix_psk0]" initiatorIndex=156029206 remoteIndex=0 responderIndex=1379347581 udpAddr="198.XX.XXX.42:60679" vpnIp=128.0.3.4
```
This 128.0.1.3 is the windows machine I'm currently on
```
terry@cloudlink:/var/log$ cat syslog | grep nebula | grep 128.0.1.3
Jan 14 03:28:39 cloudlink nebula[29594]: time="2020-01-14T03:28:39Z" level=info msg="Handshake message received" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=3413707116 remoteIndex=0 responderIndex=0 udpAddr="184.61.217.140:59919" vpnIp=128.0.1.3
Jan 14 03:28:39 cloudlink nebula[29594]: time="2020-01-14T03:28:39Z" level=info msg="Handshake message sent" handshake="map[stage:2 style:ix_psk0]" initiatorIndex=3413707116 remoteIndex=0 responderIndex=195554109 udpAddr="184.61.217.140:59919" vpnIp=128.0.1.3
Jan 14 03:28:39 cloudlink nebula[29594]: time="2020-01-14T03:28:39Z" level=info msg="Handshake message sent" cached=true handshake="map[stage:2 style:ix_psk0]" udpAddr="184.61.217.140:59919" vpnIp=128.0.1.3
```
The config file for his Linux machine is here:
```
# This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
# Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)

# PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/plex-linux-vm.crt
  key: /etc/nebula/plex-linux-vm.key
  #blacklist is a list of certificate fingerprints that we will refuse to talk to
  #blacklist:
  #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72

# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
# The syntax is:
#   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
# Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
static_host_map:
  "128.0.0.1": ["108.XXX.XXX.147:4242"]
  "128.0.0.2": ["35.XXX.XX.50:4242"]

lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: false
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  #serve_dns: false
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
  hosts:
    - "128.0.0.1"
    - "128.0.0.2"

# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
  host: 0.0.0.0
  port: 0
  # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
  # default is 64, does not support reload
  #batch: 64
  # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
  # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
  # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
  # max, net.core.rmem_max and net.core.wmem_max
  #read_buffer: 10485760
  #write_buffer: 10485760

# Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
punchy: true
# punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
# this is extremely useful if one node is behind a difficult nat, such as symmetric
punch_back: true

# Cipher allows you to choose between the available ciphers for your network.
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
#cipher: chachapoly

# Local range is used to define a hint about the local network range, which speeds up discovering the fastest
# path to a network adjacent nebula node.
#local_range: "172.16.0.0/24"

# sshd can expose informational and administrative functions via ssh this is a
#sshd:
  # Toggles the feature
  #enabled: true
  # Host and port to listen on, port 22 is not allowed for your safety
  #listen: 127.0.0.1:477
  # A file containing the ssh host private key to use
  # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
  #host_key: /home/terry/.ssh/ssh_host_ed25519_key
  # A file containing a list of authorized public keys
  #authorized_users:
    #- user: terry
      # keys can be an array of strings or single string
      #keys:
        #-

# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
  # Name of the device
  dev: xoverlay
  # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
  drop_local_broadcast: false
  # Toggles forwarding of multicast packets
  drop_multicast: false
  # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
  tx_queue: 500
  # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
  mtu: 1300
  # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
  routes:
    #- mtu: 8800
    #  route: 10.0.0.0/16

# TODO
# Configure logging level
logging:
  # panic, fatal, error, warning, info, or debug. Default is info
  level: info
  # json or text formats currently available. Default is text
  format: text

#stats:
  #type: graphite
  #prefix: nebula
  #protocol: tcp
  #host: 127.0.0.1:9999
  #interval: 10s

  #type: prometheus
  #listen: 127.0.0.1:8080
  #path: /metrics
  #namespace: prometheusns
  #subsystem: nebula
  #interval: 10s

# Nebula security group configuration
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # The firewall is default deny. There is no way to write a deny rule.
  # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
  # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
  # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
  #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
  #   proto: `any`, `tcp`, `udp`, or `icmp`
  #   host: `any` or a literal hostname, ie `test-host`
  #   group: `any` or a literal group name, ie `default-group`
  #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
  #   cidr: a CIDR, `0.0.0.0/0` is any.
  #   ca_name: An issuing CA name
  #   ca_sha: An issuing CA shasum

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    # Allow tcp/443 from any host with chris group
    - port: 443
      proto: tcp
      group: chris
```
And here is his Windows config file
```
# This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
# Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)

# PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: C:\\Windows\\System32\\Nebula\\ca.crt
  cert: C:\\Windows\\System32\\Nebula\\hoyane-win-svr.crt
  key: C:\\Windows\\System32\\Nebula\\hoyane-win-svr.key
  #blacklist is a list of certificate fingerprints that we will refuse to talk to
  #blacklist:
  #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72

# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
# The syntax is:
#   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
# Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
static_host_map:
  "128.0.0.1": ["108.XXX.XXX.147:4242"]
  "128.0.0.2": ["35.XXX.XX.50:4242"]

lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: false
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  #serve_dns: false
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
  hosts:
    - "128.0.0.1"
    - "128.0.0.2"

# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
  host: 0.0.0.0
  port: 0
  # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
  # default is 64, does not support reload
  #batch: 64
  # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
  # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
  # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
  # max, net.core.rmem_max and net.core.wmem_max
  #read_buffer: 10485760
  #write_buffer: 10485760

# Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
punchy: true
# punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
# this is extremely useful if one node is behind a difficult nat, such as symmetric
punch_back: true

# Cipher allows you to choose between the available ciphers for your network.
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
#cipher: chachapoly

# Local range is used to define a hint about the local network range, which speeds up discovering the fastest
# path to a network adjacent nebula node.
#local_range: "172.16.0.0/24"

# sshd can expose informational and administrative functions via ssh this is a
#sshd:
  # Toggles the feature
  #enabled: true
  # Host and port to listen on, port 22 is not allowed for your safety
  #listen: 127.0.0.1:
  # A file containing the ssh host private key to use
  # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
  #host_key: /home/terry/.ssh/ssh_host_ed25519_key
  # A file containing a list of authorized public keys
  #authorized_users:
    #- user: terry
      # keys can be an array of strings or single string
      #keys:
        #- "

# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
  # Name of the device
  dev: xoverlay
  # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
  drop_local_broadcast: false
  # Toggles forwarding of multicast packets
  drop_multicast: false
  # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
  tx_queue: 500
  # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
  mtu: 1300
  # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
  routes:
    #- mtu: 8800
    #  route: 10.0.0.0/16

# TODO
# Configure logging level
logging:
  # panic, fatal, error, warning, info, or debug. Default is info
  level: info
  # json or text formats currently available. Default is text
  format: text

#stats:
  #type: graphite
  #prefix: nebula
  #protocol: tcp
  #host: 127.0.0.1:9999
  #interval: 10s

  #type: prometheus
  #listen: 127.0.0.1:8080
  #path: /metrics
  #namespace: prometheusns
  #subsystem: nebula
  #interval: 10s

# Nebula security group configuration
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # The firewall is default deny. There is no way to write a deny rule.
  # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
  # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
  # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
  #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
  #   proto: `any`, `tcp`, `udp`, or `icmp`
  #   host: `any` or a literal hostname, ie `test-host`
  #   group: `any` or a literal group name, ie `default-group`
  #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
  #   cidr: a CIDR, `0.0.0.0/0` is any.
  #   ca_name: An issuing CA name
  #   ca_sha: An issuing CA shasum

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    - port: 3389
      proto: tcp
      group: chris
```
Let me know if you want more of anything.
I had the same issues during the setup stage. After changing two settings in config.yml, it works now:
1. "punch_back" should be true.
2. On the peer nodes, lighthouse.hosts should contain the lighthouse's address; on the lighthouse itself, I leave it blank.
Hope this helps.
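As a sketch of that split (the addresses are placeholders; adapt them to your network):

```yaml
# On the lighthouse node:
lighthouse:
  am_lighthouse: true
  interval: 60
  hosts:            # left empty on the lighthouse itself

# On every other node:
lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "128.0.0.1"   # the lighthouse's *nebula* IP
```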
@agreyfox I've done the same thing.
I think this issue is related to the specific network environment. If I connect my office laptop to another network (a mobile hotspot, for example), all devices can connect.
My setup already matches what @agreyfox recommended, and it does not work. I was hoping the whole NAT-cracking thing was going to work well with Nebula :(
It only works if I also enable WireGuard :) But that is no help to me, since WireGuard is already in place.
Oh, another thing: if your host is behind a firewall, you may need to open the UDP port in the firewall and do the port-mapping setup.
I already forwarded the default ports UDP and TCP.
I see that! Maybe just post your lighthouse config.yml?
I have similar config except:
```
inbound:
  - port: any
    proto: any
    host: any
```
Another thing: if your host is behind a firewall, you may need to open the UDP port in the firewall and do the port-mapping setup.
In the working case (one laptop at home, one cloud server without a public or elastic IP bound to it), I needed to open the UDP port in the firewall, but I didn't do any port mapping.
Sorry for the late reply.
I looked at your lighthouse config file. It seems OK, but I suspect two things:
- For static_host_map: "10.1.0.1": ["IP:4545"], I put my lighthouse's IP address there.
- The lighthouse should have a public IP address and doesn't need punch_back set to true.
- IP is the reachable IP of the lighthouse.
- IP is the public IP address. Do you mean "hosts" or some other area?
Here is the section from a client
```
static_host_map:
  "10.1.0.1": ["IP:4545"]
```
How do we define this part in the lighthouse itself?
I believe we need to put the public IP address here. The other nodes should use the same setting as the lighthouse.
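In other words, a sketch with a placeholder public address (the map is identical on the lighthouse and on every peer):

```yaml
# Same on the lighthouse and on all other nodes:
static_host_map:
  # "{lighthouse nebula IP}": ["{lighthouse PUBLIC IP}:{port}"]
  "10.1.0.1": ["203.0.113.10:4545"]   # 203.0.113.10 is a placeholder public IP
```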
Then I am doing it right, IP is the public IP of the LH. Curious situation.
I've run into the same issue:
home <---> lighthouse <---> client, but home <-/-> client.
Home is behind my consumer router, and lighthouse and client are in a VPS datacenter.
All three nodes are configured with
```
punchy: true
punch_back: true
firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any
```
I just tried this again with v1.1 and no luck. It is interesting that pinging from the client outside of the local network creates activity on the pinged nebula client, but the pings are all unsuccessful.
It would be nice if the devs could comment on this, since it seems to be a common thing given the responses.
(response on 10.1.0.11, pinged from 10.1.0.16)
```
INFO[11241] Handshake message sent handshake="map[stage:1 style:ix_psk0]" initiatorIndex=558803177 remoteIndex=0 udpAddr="XXX.254.XXX.XXX:55128" vpnIp=10.1.0.16
```
I have the same problems. When I change my network from WiFi to a mobile hotspot, ping works.
Same problems here. Nodes in different networks can successfully ping the lighthouse (in another, separate network) but can't ping each other. However, pinging does result in activity on the pinged nebula node (as mentioned by @gerroon above).
Same here. I left a message on the support channel on Slack, but no response. (as mentioned by @NDolensek and @gerroon).
I'm happy to try to test or troubleshoot. Just let me know if there's anything I can do.
Relaying (or rather the lack of it) is also something affecting me at the moment on Nebula 1.3.0 (latest as of writing). I also have two nodes behind NATs that can communicate with the lighthouse (bidirectionally) but can't communicate with the other members of the cluster. For now I'll be using Tinc/WireGuard again, but I'm happy to try Nebula once this is implemented!
The use case is primarily LAN gaming for Diablo II (it uses TCP/IP, so L3 networking should be fine via Nebula). I'm not sure how many people are attempting to use Nebula for gaming, but putting this in here haha.
For completeness, here is my lighthouse network config (running on FreeBSD 12.2). The other nodes use the same config except that they are set to not be lighthouses and have a hosts: entry that points to 192.168.14.1, the key from the static host map. punch and respond (which I'm guessing is what people used to call punch_back?) are both true; I believe that's what was supposed to help with this dual-NAT situation (or more complex NAT setups):
```
pki:
  ca: /usr/local/etc/nebula/cactus.crt
  cert: /usr/local/etc/nebula/octopus.crt
  key: /usr/local/etc/nebula/octopus.key
static_host_map:
  "192.168.14.1": ["myhost.org:4242"]
lighthouse:
  am_lighthouse: true
  interval: 60
listen:
  host: 0.0.0.0
  port: 4242
punchy:
  punch: true
  respond: true
  delay: 1s
cipher: chachapoly
local_range: "192.168.14.0/24"
tun:
  disabled: false
  dev: tun1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
logging:
  level: info
  format: text
firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    # Allow anyone to send ping requests (Useful to see if two people can communicate with each other)
    - port: any
      proto: icmp
      host: any
    # Diablo II: TCP/4000 (Permits Cactus Group Only)
    - port: 4000
      proto: tcp
      groups:
        - cactus
    # Diablo II: TCP/6112 (Permits Cactus Group Only)
    - port: 6112
      proto: tcp
      groups:
        - cactus
```
The set up is basically:
Home Network
[Nebula LightHouse - 192.168.14.1] [Gaming Laptop on Windows - 192.168.14.2]
External Network
[Friend on Windows - 192.168.14.3]
If the behavior described here is the current behavior, this may be an instance of this comment: #71 (comment). If so, I understand the point of maybe getting a DigitalOcean droplet or something so that the IPs get registered properly when they go through the lighthouse, but I don't think this is really a viable option in a lot of cases: many people have their own home servers that they would prefer to reuse, both to save money and to retain control over the infrastructure.
we spec'd out the way we are going to do relays, at the protocol level. While this wasn't a design goal of Nebula, there is enough need in the community that it is worth doing.
Hi @rawdigits, is there anything I can find on those specs? I think relay is a pretty important feature, since not all NATs are punchable.
had the same error. those options didn't do the trick for me:
```
local_range: "192.168.14.0/24"
punchy:
  punch: true
  respond: true
  delay: 1s
```
what did it was giving a client a static ip and settings these things up in the peer's configuration:
```
[...]
static_host_map:
  # lighthouse with public ip
  "192.168.100.1": [ "1.2.3.4:4242" ]
  # local network behind nat
  "192.168.100.101": [ "192.168.2.2:4242" ]
[...]
```
where `192.168.100.xxx` is the nebula network and `192.168.2.xxx` is the network behind the NAT.
after that the two clients behind the same NAT could talk to each other.
hope this helps in some ways.
---
never mind my comment. tested that behind two different NAT and it doesn't work. sorry for the noise.
Nebula 1.6.0 has been released with a Relay feature, to cover cases like a symmetric NAT.
#678
Check out the example config to see how to configure a Nebula node to act as a relay, and how to configure other nodes to advertise which relays peers can use to reach them.
Take a look at #33 (comment) for more info on how to configure it.
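As a rough sketch of the relay settings, with 192.168.14.1 standing in for a publicly reachable node; treat the exact keys as an assumption and check the example config shipped with your release:

```yaml
# On the node acting as a relay (must be reachable by both peers,
# e.g. the lighthouse host):
relay:
  am_relay: true

# On each node stuck behind a hard NAT:
relay:
  am_relay: false
  use_relays: true
  # Nebula IPs of relays that peers can use to reach this node:
  relays:
    - 192.168.14.1
```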
I'm closing this out for inactivity. If you continue to have issues after trying the relay feature, please feel free to open up a new issue or join us on Slack. Thanks!