wiretap's Issues

About Nested Tunnels

Regarding nested tunnels, I still don't quite understand how to add the configuration and Endpoint settings to the client's conf file. Can you explain in detail?
My approach was to generate a config that does not contain an Endpoint, and then to specify --endpoint on hop2 as the IP of hop1.
I don't know how to proceed from there.

Set Rlimit_NOFILE

The default soft limit for ulimit -n is 1024. It would be nice if wiretap set this to the maximum allowed. Otherwise, connections start dropping once more than 1k are open.

The current workaround is to run ulimit -n $(ulimit -Hn) before starting wiretap.
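
A minimal sketch of what that could look like at wiretap startup, assuming the golang.org/x/sys/unix package on a Unix-like platform (this is not wiretap's actual code):

package main

import (
    "log"

    "golang.org/x/sys/unix"
)

// raiseNoFileLimit bumps the soft RLIMIT_NOFILE up to the hard limit,
// the same effect as running `ulimit -n $(ulimit -Hn)` before launch.
func raiseNoFileLimit() error {
    var lim unix.Rlimit
    if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &lim); err != nil {
        return err
    }
    // The soft limit may be raised up to the hard limit without privileges.
    lim.Cur = lim.Max
    return unix.Setrlimit(unix.RLIMIT_NOFILE, &lim)
}

func main() {
    if err := raiseNoFileLimit(); err != nil {
        log.Printf("could not raise NOFILE limit: %v", err)
    }
    // ... start serving ...
}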

Simple connect without RELAY + E2EE?

Hello,

Thanks for your great work. I noticed that the API changed and now requires two tunnels (Relay + E2EE).

Is there any way to use the new wiretap the way it used to work: a single tunnel between Client <-> Server, without Relay + E2EE?

(I'm operating wiretap in a setting where I can only supply a single private + public key on the client (through an RPC interface), and there is no way to add two private and two public keys to achieve an E2EE tunnel inside a Relay tunnel. The last wiretap version that supports simple P2P tunnels seems to be v0.2.1.)

Using the client as a "proxy" server

So, how can I use the client as a "proxy" server to run tools locally according to this scheme: NAT Machine > Wiretap Client on VPS > Wiretap Server?

Standalone mode?

Hello,

Does wiretap work as an HTTPS or SOCKS proxy, without executing any commands on the WireGuard server?

That would be a standalone mode that exposes a local port and proxies traffic through the WireGuard server.

WTF? --routes has a default but is required

Error: required flag(s) "routes" not set
Usage:
  wiretap configure [flags]

Flags:
  -r, --routes strings             CIDR IP ranges that will be routed through wiretap (default [0.0.0.0/32])
  -e, --endpoint string            socket address of wireguard listener that server will connect to (example "1.2.3.4:51820")
      --outbound                   client will initiate handshake to server, set endpoint to server address
  -p, --port int                   port of local wireguard relay listener (default 51820)
      --relay-output string        wireguard relay config output filename (default "wiretap_relay.conf")
      --e2ee-output string         wireguard E2EE config output filename (default "wiretap.conf")
  -s, --server-output string       wiretap server config output filename (default "wiretap_server.conf")
  -c, --clipboard                  copy configuration args to clipboard
      --simple                     disable multihop and multiclient features for a simpler setup
  -0, --api string                 address of server API service (default "::2/128")
      --ipv4-relay string          ipv4 relay address (default "172.16.0.1/32")
      --ipv6-relay string          ipv6 relay address (default "fd:16::1/128")
      --ipv4-e2ee string           ipv4 e2ee address (default "172.19.0.1/32")
      --ipv6-e2ee string           ipv6 e2ee address (default "fd:19::1/128")
      --ipv4-relay-server string   ipv4 relay address of server (default "172.17.0.2/32")
      --ipv6-relay-server string   ipv6 relay address of server (default "fd:17::2/128")
  -k, --keepalive int              tunnel keepalive in seconds, only applies to outbound handshakes (default 25)
  -m, --mtu int                    tunnel MTU (default 1420)
      --disable-ipv6               disables IPv6
  -h, --help                       help for configure

Global Flags:
      --show-hidden   show hidden flag options

required flag(s) "routes" not set

The --routes flag has a default of [0.0.0.0/32], but it is marked as required. How can the default be used then?
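
One workaround (an assumption based on the help text above, not a confirmed answer) is to pass the default value explicitly:

./wiretap configure --routes 0.0.0.0/32 --endpoint 1.2.3.4:51820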

SYN-SENT userland timeout

I'm using https://thc.org/segfault/wireguard with wiretap v0.3.0 (--simple branch), started with:

WIRETAP_SIMPLE=true ./wiretap_linux_amd64 serve --ipv4-relay 192.168.0.1 --ipv6-relay fd::1 --allowed 192.168.0.1/28,fd::1/125

The Exit Node is a Linux x86_64 running wiretap.
The origin host runs network scans using nmap or masscan.

The userland wiretap process keeps each TCP connect in the SYN-SENT state for a very long time.
This can cause port exhaustion on the exit node (wiretap) when scanning a non-existent host on the Internet:

┌──(EXIT:Dirt)(root💀sf-BiologyMetal)-[~]
└─# nmap -n -Pn -sS -p- --open 30.31.32.33

and even worse if masscan is used:

┌──(EXIT:Dirt)(root💀sf-BiologyMetal)-[~]
└─# masscan --interface wgExit -p- --range 30.31.32.0/24  --source-ip 192.168.0.3 --banners --rate 1000

It would be desirable for wiretap to close the connection between wiretap and the target (without forwarding a RST/FIN upstream) after 10 seconds. The assumption is that if the target hasn't responded with a SYN-ACK within 10 seconds, wiretap can safely 'drop' the connection.

The origin server may still retransmit the SYN again and again (in case it's not a masscan/nmap scan), but with the original forwarding connection already closed, wiretap would detect the retransmitted SYN as a first SYN from the origin server and simply call Connect() to the target again.
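
A minimal sketch of the proposed behaviour, assuming the forwarder dials the target with Go's net.Dialer (hypothetical, not wiretap's actual code):

package main

import (
    "context"
    "log"
    "net"
    "time"
)

// dialTarget bounds the TCP connect phase so a silent target cannot hold
// a socket in SYN-SENT indefinitely.
func dialTarget(addr string) (net.Conn, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    var d net.Dialer
    conn, err := d.DialContext(ctx, "tcp", addr)
    if err != nil {
        // Drop silently: nothing is forwarded upstream, so a retransmitted
        // SYN from the origin is treated as a fresh connection attempt.
        return nil, err
    }
    return conn, nil
}

func main() {
    if _, err := dialTarget("30.31.32.33:80"); err != nil {
        log.Println("connect failed:", err)
    }
}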

Multiple clients in "outbound" mode

Here is my setup:

  • A client: a mobile phone fully under NAT
  • A server: a container, under NAT as well, but with several ports exposed to the internet.

I was able to create a tunnel between them using the following commands:

# Generate server and client config
./wiretap configure --routes 0.0.0.0/0 --outbound --endpoint SERVER_IP:SERVER_PORT --simple

# Run server inside container
./wiretap serve -f wiretap_server.conf --simple --port SERVER_PORT

It works fine, but now I'm not sure how to add more clients to the same server: the config seems to accept only a single peer. Is it possible to have more clients in such a setup? If not, could it be implemented?

Feature request: Retrieve stats from WT-EXIT

A REST-API call from the origin host to the WT-EXIT to retrieve information/stats about the WT-EXIT.

curl http://172.16.0.1/info

Info/Stats may include

  1. Configured IP ranges of all interfaces on the WT-EXIT
  2. Uptime / Load / Users
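
A hypothetical sketch of such an endpoint using Go's net/http; the route and field names are taken from this request, not from wiretap's actual API:

package main

import (
    "encoding/json"
    "log"
    "net"
    "net/http"
    "time"
)

var started = time.Now()

// infoHandler reports the configured IP ranges of all interfaces and the
// process uptime, roughly matching items 1 and 2 above.
func infoHandler(w http.ResponseWriter, r *http.Request) {
    addrs, _ := net.InterfaceAddrs()
    ranges := make([]string, 0, len(addrs))
    for _, a := range addrs {
        ranges = append(ranges, a.String())
    }
    json.NewEncoder(w).Encode(map[string]any{
        "interfaces": ranges,
        "uptime_sec": int(time.Since(started).Seconds()),
    })
}

func main() {
    http.HandleFunc("/info", infoHandler)
    log.Fatal(http.ListenAndServe("172.16.0.1:80", nil))
}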

Not closing connection on RST

I'm sorry for not investigating this further; please accept this as an 'observation' rather than a bug report.

I'm using https://thc.org/segfault/wireguard with wiretap v0.3.0 (--simple branch), started with:

WIRETAP_SIMPLE=true ./wiretap_linux_amd64 serve --ipv4-relay 192.168.0.1 --ipv6-relay fd::1 --allowed 192.168.0.1/28,fd::1/125

The Exit Node is a Linux x86_64 running wiretap.
The origin host runs nmap -n -Pn -sT -p1-512 --open scanme.nmap.org

Issue: The scan never finishes.

It appears that wiretap keeps the connection open even after the Origin-Server has sent a RST.

On the Exit node:

root@gs6:~# tcpdump -n -i ens3 host 45.33.32.156 and port 80
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), snapshot length 262144 bytes
16:42:56.401101 IP 51.83.131.42.55984 > 45.33.32.156.80: Flags [S], seq 2942595855, win 1460, options [mss 1460,sackOK,TS val 766681198 ecr 0,nop,wscale 2], length 0
16:42:56.579900 IP 45.33.32.156.80 > 51.83.131.42.55984: Flags [S.], seq 1777498995, ack 2942595856, win 65160, options [mss 1460,sackOK,TS val 2105695364 ecr 766681198,nop,wscale 7], length 0
16:42:56.579928 IP 51.83.131.42.55984 > 45.33.32.156.80: Flags [.], ack 1, win 365, options [nop,nop,TS val 766681377 ecr 2105695364], length 0

On the origin (where nmap is running):

┌──(EXIT:Dirt)(root💀sf-BiologyMetal)-[~]
└─# tcpdump -n  -i wgExit port 80
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wgExit, link-type RAW (Raw IP), snapshot length 262144 bytes


16:42:56.389574 IP 192.168.0.2.40372 > 45.33.32.156.80: Flags [S], seq 1050340797, win 64860, options [mss 1380,sackOK,TS val 2860797468 ecr 0,nop,wscale 7], length 0
16:42:56.591281 IP 45.33.32.156.80 > 192.168.0.2.40372: Flags [S.], seq 4170675645, ack 1050340798, win 27584, options [mss 1380,sackOK,TS val 3287297152 ecr 2860797468,nop,wscale 5], length 0
16:42:56.591319 IP 192.168.0.2.40372 > 45.33.32.156.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 2860797670 ecr 3287297152], length 0
16:42:56.591406 IP 192.168.0.2.40372 > 45.33.32.156.80: Flags [R.], seq 1, ack 1, win 507, options [nop,nop,TS val 2860797670 ecr 3287297152], length 0

On the Exit Node the connection still shows ESTAB:

root@gs6:~# ss -antp | grep -F 45.33.32.156
ESTAB  0      0       51.83.131.42:55984   45.33.32.156:80    users:(("wiretap_linux_a",pid=125573,fd=11))

My gut feeling is that wiretap only handles a 'clean close' (FIN) and ignores the RST.
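
If that's the case, a minimal sketch of the fix on the outbound leg, assuming wiretap holds an ordinary *net.TCPConn to the target (hypothetical; wiretap's internals may differ):

package main

import (
    "log"
    "net"
)

// abortTarget tears the target connection down immediately when the origin
// sends a RST. SetLinger(0) makes Close() emit a RST instead of a FIN, so
// the socket does not linger in ESTAB.
func abortTarget(conn *net.TCPConn) error {
    if err := conn.SetLinger(0); err != nil {
        return err
    }
    return conn.Close()
}

func main() {
    conn, err := net.Dial("tcp", "45.33.32.156:80")
    if err != nil {
        log.Fatal(err)
    }
    // Pretend the origin just sent a RST for this flow:
    _ = abortTarget(conn.(*net.TCPConn))
}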

WT & masscan. Only the first open port is reported

This problem never shows up if I'm using WG on both sides; it only shows up when I'm using WT on the EXIT-NODE.
The bug is reproducible (I tested the same scenario 20 times and it showed up reliably).

I'm testing the current GitHub code from the tcp-fix branch. I'm scanning for 2 ports on a single target only. Both ports are open on the target. Only the first one is found.

On the origin server (running wireguard):

masscan --interface wgExit --rate 1 -p31337,22 --open-only 45.33.32.156

Packets on origin server (wireguard):

13:09:42.916290 IP 172.16.0.2.49782 > 45.33.32.156.31337: Flags [S], seq 699471134, win 1024, length 0
13:09:43.073437 IP 45.33.32.156.31337 > 172.16.0.2.49782: Flags [S.], seq 3059500064, ack 699471135, win 27584, options [mss 1380], length 0
13:09:43.073500 IP 172.16.0.2.49782 > 45.33.32.156.31337: Flags [R], seq 699471135, win 0, length 0
13:09:43.916450 IP 172.16.0.2.49782 > 45.33.32.156.31337: Flags [R], seq 699471135, win 1200, length 0

13:09:44.916482 IP 172.16.0.2.49782 > 45.33.32.156.22: Flags [S], seq 2369022976, win 1024, length 0

Observe that the TCP SYN-ACK from port 22 is never sent to the origin server. We only see the outgoing SYN.

Packets on the Exit Node to the target (running wiretap):

15:09:42.918608 IP 192.145.44.201.49070 > 45.33.32.156.31337: Flags [S], seq 299369726, win 16060, length 0
15:09:43.070073 IP 45.33.32.156.31337 > 192.145.44.201.49070: Flags [S.], seq 619693497, ack 299369727, win 65160,  length 0
15:09:43.070131 IP 192.145.44.201.49070 > 45.33.32.156.31337: Flags [.], ack 1, win 8, length 0
15:09:43.222053 IP 45.33.32.156.31337 > 192.145.44.201.49070: Flags [F.], seq 1, ack 1, win 510, options [nop,nop,TS val 2032056807 ecr 1905127897], length 0
15:09:43.224657 IP 192.145.44.201.49070 > 45.33.32.156.31337: Flags [.], ack 2, win 8, length 0

15:09:44.918731 IP 192.145.44.201.36134 > 45.33.32.156.22: Flags [S], seq 292390217, win 16060, length 0
15:09:45.070204 IP 45.33.32.156.22 > 192.145.44.201.36134: Flags [S.], seq 3384812886, ack 292390218, win 65160, length 0
15:09:45.070277 IP 192.145.44.201.36134 > 45.33.32.156.22: Flags [.], ack 1, win 8, length 0
15:09:45.227020 IP 45.33.32.156.22 > 192.145.44.201.36134: Flags [P.], seq 1:45, ack 1, win 510, length 44: SSH: SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.13
15:09:45.227053 IP 192.145.44.201.36134 > 45.33.32.156.22: Flags [.], ack 45, win 8, length 0

15:09:48.071226 IP 192.145.44.201.49070 > 45.33.32.156.31337: Flags [F.], seq 1, ack 2, win 8, length 0
15:09:48.223085 IP 45.33.32.156.31337 > 192.145.44.201.49070: Flags [.], ack 2, win 510, length 0
15:09:50.070754 IP 192.145.44.201.36134 > 45.33.32.156.22: Flags [R.], seq 1, ack 45, win 8, length 0

The socket information on the exit node shows that the data is in the recv-q (44 bytes, "OpenSSH_..."):

tcp       44      0 192.145.44.201:36134    45.33.32.156:22        

The data stays in the RECV-Q for 5 seconds before the socket is closed.

The expected behaviour is for WT to:

  1. Send SYN/ACK to origin-host when TCP to target completes
  2. Send data received from target to origin-host
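
A minimal sketch of that expected sequence, with hypothetical names (replySynAck and upstream stand in for whatever wiretap actually uses to complete the origin-side handshake and carry data upstream; this is not wiretap's code):

package main

import (
    "io"
    "log"
    "net"
    "os"
)

// handleSyn connects to the target first, acknowledges the origin's SYN only
// once that connect succeeds, then relays the target's data (e.g. the SSH
// banner sitting in the recv-q above) upstream.
func handleSyn(targetAddr string, replySynAck func() error, upstream io.Writer) error {
    conn, err := net.Dial("tcp", targetAddr)
    if err != nil {
        return err // target unreachable: the origin never gets a SYN-ACK
    }
    defer conn.Close()
    if err := replySynAck(); err != nil { // step 1: SYN-ACK to origin
        return err
    }
    _, err = io.Copy(upstream, conn) // step 2: forward received data
    return err
}

func main() {
    // Placeholder stand-ins so the sketch runs: a no-op handshake reply
    // and stdout as the upstream writer.
    if err := handleSyn("45.33.32.156:22", func() error { return nil }, os.Stdout); err != nil {
        log.Println(err)
    }
}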

My gut feeling is that WT misses that the TCP connection has been established successfully and then kills it after 5 seconds.

It is odd because it works when I'm using nmap instead of masscan:

 nmap -p31337,22 -Pn -sS 45.33.32.156

The only difference that I can see is that nmap uses the Linux kernel (rather than raw TCP) and thus the SYN is re-sent twice:

# FIRST SYN to port 22
13:16:49.292759 IP 172.16.0.2.42152 > 45.33.32.156.22: Flags [S], seq 2471212292, win 1024, options [mss 1460], length 0

13:16:49.292816 IP 172.16.0.2.42152 > 45.33.32.156.31337: Flags [S], seq 2471212292, win 1024, options [mss 1460], length 0
13:16:49.451170 IP 45.33.32.156.31337 > 172.16.0.2.42152: Flags [S.], seq 2053963635, ack 2471212293, win 27584, options [mss 1380], length 0
13:16:49.451203 IP 172.16.0.2.42152 > 45.33.32.156.31337: Flags [R], seq 2471212293, win 0, length 0

# SECOND SYN to port 22. Why is this needed? 
13:16:51.092230 IP 172.16.0.2.42154 > 45.33.32.156.22: Flags [S], seq 2471081222, win 1024, options [mss 1460], length 0

13:16:51.250134 IP 45.33.32.156.22 > 172.16.0.2.42154: Flags [S.], seq 2537989258, ack 2471081223, win 27584, options [mss 1380], length 0
13:16:51.250179 IP 172.16.0.2.42154 > 45.33.32.156.22: Flags [R], seq 2471081223, win 0, length 0

My gut says that WT somehow needs this 2nd SYN?

TCP Keepalive not forwarded to upstream origin server.

It appears that wiretap does not send empty PSH packets (TCP keepalive) to check whether the Origin host still has the port open.

I'm using https://thc.org/segfault/wireguard with wiretap v0.3.0 (--simple branch), started with:

WIRETAP_SIMPLE=true ./wiretap_linux_amd64 serve --ipv4-relay 192.168.0.1 --ipv6-relay fd::1 --allowed 192.168.0.1/28,fd::1/125

The Exit Node is a Linux x86_64 running wiretap (WT-EXIT).
The origin host runs nmap -n -Pn -sT -p1-512 --open scanme.nmap.org

The Origin host sends a SYN to a host on the Internet that does not exist:

┌──(EXIT:Dirt)(root💀sf-BiologyMetal)-[~]
└─# nmap -n -Pn -sT -T5  -p80 --open 30.31.32.33

Observe on the Exit Node that the connection stays in state SYN-SENT for a long time, even though the app (nmap) has long since exited:

SYN-SENT 0      1       51.83.131.42:53150    30.31.32.33:80    users:(("wiretap_linux_a",pid=125616,fd=14))

The same bug (i.e. the lack of empty PSH / TCP keepalive between the origin host and the WT-EXIT) can be observed when starting an app on the upstream origin server and not sending a FIN when killing the app (or when that FIN is dropped on the wireguard/UDP leg):

In this case wiretap keeps the connection from the Exit Node to the target in ESTAB "forever", even though the upstream app on the origin server no longer has this connection open.

There are TCP keepalive messages between wiretap and the target, but there are no TCP keepalives between wiretap and the upstream origin server.

A proposed solution would be to send an empty PSH (i.e. a TCP keepalive) every 60 seconds from the WT-EXIT to the upstream origin host and watch for a returning RST or ICMP destination unreachable.
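
A minimal sketch of the proposal in terms of Go's standard keepalive knobs, assuming the upstream leg were an ordinary *net.TCPConn (in wiretap the origin-facing leg presumably lives inside its userland network stack, so the real change would belong there):

package main

import (
    "net"
    "time"
)

// enableUpstreamKeepalive makes the kernel probe the peer every 60 seconds.
// A dead origin answers with a RST (or an ICMP unreachable comes back),
// which tears the idle connection down instead of leaving it ESTAB forever.
func enableUpstreamKeepalive(conn *net.TCPConn) error {
    if err := conn.SetKeepAlive(true); err != nil {
        return err
    }
    return conn.SetKeepAlivePeriod(60 * time.Second)
}

func main() {
    conn, err := net.Dial("tcp", "45.33.32.156:80")
    if err != nil {
        return
    }
    _ = enableUpstreamKeepalive(conn.(*net.TCPConn))
}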

Feature Request: Reverse Port Forwarding

A method to tunnel ports from the WT-EXIT back to the origin host (via REST API?).

To open a port forward on the WT-EXIT, make a call from the origin host to the WT-EXIT like so:

curl -s http://172.16.0.1/fwd -dport=31337

This would forward any new TCP connection to the EXIT-IP of the WT-EXIT back to 172.16.0.1 (the origin host). Other options:

-dproto=<udp/tcp>          -- to select protocol. Default is TCP.
-daction=<del/delall/list> -- Delete one specific or all port forwards or list all forwards
-ddst=<IP:port>            -- Destination ip/port (default to 172.16.0.2:port)

(could also allow -dport=<unix socket file> like SSH does - if anyone ever uses that).
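
A hypothetical sketch of the forwarding half (listen on the exit node, pipe each accepted connection back to the origin host); the port and destination address mirror the example above, not any existing wiretap feature:

package main

import (
    "fmt"
    "io"
    "log"
    "net"
)

// reverseForward accepts TCP connections on the exit node and pipes each
// one back to the origin host over the tunnel.
func reverseForward(listenPort int, originAddr string) error {
    ln, err := net.Listen("tcp", fmt.Sprintf(":%d", listenPort))
    if err != nil {
        return err
    }
    for {
        c, err := ln.Accept()
        if err != nil {
            return err
        }
        go func(c net.Conn) {
            defer c.Close()
            back, err := net.Dial("tcp", originAddr)
            if err != nil {
                return
            }
            defer back.Close()
            go io.Copy(back, c) // exit -> origin
            io.Copy(c, back)    // origin -> exit
        }(c)
    }
}

func main() {
    // e.g. after `curl ... -dport=31337` with the default destination:
    log.Fatal(reverseForward(31337, "172.16.0.2:31337"))
}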

Error after a successful DNS query terminates server process

I'm getting an error after DNS requests that causes the server process to panic and terminate. My current workaround is to wrap the server in a while loop, but that's not very elegant.

Roughly the same stack trace appears whether the server is on Linux or Windows:
wiretap[.exe] : panic: unknown network protocol number 0
wiretap/transport/udp.getDataFromPacket()
wiretap/transport/udp/udp.go:128

docker test setup / Invalid MAC of handshake

I was trying out the Docker test setup but can't make it work. The handshake fails due to an invalid MAC. Also, it seems no WireGuard interfaces get created on the server side; I'm unsure if that is expected, but probably not.
EDIT: no, this is expected because it runs unprivileged and thus doesn't use real interfaces.

wiretap version: 67e3e20
kernel version: 6.0.6

host:

$ sudo docker compose up --build
# enable wireguard logging
$ echo 'module wireguard +p' | sudo tee /sys/kernel/debug/dynamic_debug/control

client:

root@5b7aba15837f:/wiretap# ./wiretap configure -e 10.1.0.2:51821 -r 10.2.0.0/16,fd:2::/64

Configurations successfully generated.
Import the config(s) into WireGuard locally and pass the arguments below to Wiretap on the remote machine.

config: wiretap_relay.conf
────────────────────────────────
[Interface]
PrivateKey = QDDwmCLVFlLRI6maCkhuch37rM7pf5iMjDYYDj/+40k=
Address = 172.16.0.1/32
Address = fd:16::1/128
ListenPort = 51820

[Peer]
PublicKey = 8AIvsGagQCUWENFNVefPbi8bmzGDkLz5ukTZmrl3S2Q=
AllowedIPs = 172.17.0.0/24,fd:17::/48
────────────────────────────────

config: wiretap.conf
────────────────────────────────
[Interface]
PrivateKey = KNLkwlr9SqbDEfyJetZ5zEeNoi6y54hvjYujGSxy81Y=
Address = 172.19.0.1/32
Address = fd:19::1/128
ListenPort = 51821
MTU = 1340

[Peer]
PublicKey = 03kzMw5XaIZstJIAwFuRtmukqraa2eNAUOWpKWv1mTA=
AllowedIPs = 10.2.0.0/16,fd:2::/64,::2/128
Endpoint = 172.17.0.2:51821
────────────────────────────────

server config: wiretap_server.conf

server command:
POSIX Shell:  WIRETAP_RELAY_INTERFACE_PRIVATEKEY=sM4xXCWOttLPsTFmEtCwkUCQFH0B0MSd0CaitMGGWG4= WIRETAP_RELAY_PEER_PUBLICKEY=WgWBQT1h/drdEzspcbkAVdK+zw/ff553Z7lBwkYa/gg= WIRETAP_RELAY_PEER_ENDPOINT=10.1.0.2:51821 WIRETAP_E2EE_INTERFACE_PRIVATEKEY=SF6F0rHVA+WvgD0/q21/2M5R4aY6Plvf63IWyssYp2s= WIRETAP_E2EE_PEER_PUBLICKEY=kF+PrySBovhd2eChKWCCjkrqID4cvrBkX0k0PBZ7ExE= WIRETAP_E2EE_PEER_ENDPOINT=172.16.0.1:51821 ./wiretap serve
 PowerShell:  $env:WIRETAP_RELAY_INTERFACE_PRIVATEKEY="sM4xXCWOttLPsTFmEtCwkUCQFH0B0MSd0CaitMGGWG4="; $env:WIRETAP_RELAY_PEER_PUBLICKEY="WgWBQT1h/drdEzspcbkAVdK+zw/ff553Z7lBwkYa/gg="; $env:WIRETAP_RELAY_PEER_ENDPOINT="10.1.0.2:51821"; $env:WIRETAP_E2EE_INTERFACE_PRIVATEKEY="SF6F0rHVA+WvgD0/q21/2M5R4aY6Plvf63IWyssYp2s="; $env:WIRETAP_E2EE_PEER_PUBLICKEY="kF+PrySBovhd2eChKWCCjkrqID4cvrBkX0k0PBZ7ExE="; $env:WIRETAP_E2EE_PEER_ENDPOINT="172.16.0.1:51821"; .\wiretap.exe serve
Config File:  ./wiretap serve -f wiretap_server.conf

root@5b7aba15837f:/wiretap# wg-quick up ./wiretap.conf ; wg-quick up ./wiretap_relay.conf
[#] ip link add wiretap type wireguard
[#] wg setconf wiretap /dev/fd/63
[#] ip -4 address add 172.19.0.1/32 dev wiretap
[#] ip -6 address add fd:19::1/128 dev wiretap
[#] ip link set mtu 1340 up dev wiretap
[#] ip -6 route add ::2/128 dev wiretap
[#] ip -6 route add fd:2::/64 dev wiretap
[#] ip -4 route add 10.2.0.0/16 dev wiretap
[#] ip link add wiretap_relay type wireguard
[#] wg setconf wiretap_relay /dev/fd/63
[#] ip -4 address add 172.16.0.1/32 dev wiretap_relay
[#] ip -6 address add fd:16::1/128 dev wiretap_relay
[#] ip link set mtu 1420 up dev wiretap_relay
[#] ip -6 route add fd:17::/48 dev wiretap_relay
[#] ip -4 route add 172.17.0.0/24 dev wiretap_relay

server:

root@f42df23e78e8:/wiretap# WIRETAP_RELAY_INTERFACE_PRIVATEKEY=sM4xXCWOttLPsTFmEtCwkUCQFH0B0MSd0CaitMGGWG4= WIRETAP_RELAY_PEER_PUBLICKEY=WgWBQT1h/drdEzspcbkAVdK+zw/ff553Z7lBwkYa/gg= WIRETAP_RELAY_PEER_ENDPOINT=10.1.0.2:51821 WIRETAP_E2EE_INTERFACE_PRIVATEKEY=SF6F0rHVA+WvgD0/q21/2M5R4aY6Plvf63IWyssYp2s= WIRETAP_E2EE_PEER_PUBLICKEY=kF+PrySBovhd2eChKWCCjkrqID4cvrBkX0k0PBZ7ExE= WIRETAP_E2EE_PEER_ENDPOINT=172.16.0.1:51821 ./wiretap serve -d

Relay configuration:
────────────────────────────────
[Peer]
PublicKey = 8AIvsGagQCUWENFNVefPbi8bmzGDkLz5ukTZmrl3S2Q=
AllowedIPs = 0.0.0.0/32
────────────────────────────────

E2EE configuration:
────────────────────────────────
[Peer]
PublicKey = 03kzMw5XaIZstJIAwFuRtmukqraa2eNAUOWpKWv1mTA=
AllowedIPs = 0.0.0.0/32
────────────────────────────────

private_key=b0ce315c258eb6d2cfb1316612d0b0914090147d01d0c49dd026a2b4c186586e
listen_port=51820
public_key=5a0581413d61fddadd133b2971b90055d2becf0fdf7f9e7767b941c2461afe08
endpoint=10.1.0.2:51821
allowed_ip=172.16.0.1/32
allowed_ip=fd:16::1/128
persistent_keepalive_interval=25

DEBUG: 2023/09/22 11:48:59 UAPI: Updating private key
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 2 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 1 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 2 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 3 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 4 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 3 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 4 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 2 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 5 - started
DEBUG: 2023/09/22 11:48:59 UAPI: Updating listen port
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 6 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 6 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 8 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 8 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 1 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 7 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 4 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 5 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 5 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 6 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 3 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 7 - started
DEBUG: 2023/09/22 11:48:59 Routine: event worker - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 1 - started
DEBUG: 2023/09/22 11:48:59 Interface up requested
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - UAPI: Created
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - UAPI: Updating endpoint
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 7 - started
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - UAPI: Adding allowedip
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - UAPI: Adding allowedip
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - UAPI: Updating persistent keepalive interval
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - Starting
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 8 - started
DEBUG: 2023/09/22 11:48:59 Routine: TUN reader - started
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - Routine: sequential sender - started
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - Sending keepalive packet
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - Sending handshake initiation
DEBUG: 2023/09/22 11:48:59 Routine: receive incoming v6 - started
DEBUG: 2023/09/22 11:48:59 UDP bind has been updated
DEBUG: 2023/09/22 11:48:59 peer(WgWB…a/gg) - Routine: sequential receiver - started
DEBUG: 2023/09/22 11:48:59 Routine: receive incoming v4 - started
DEBUG: 2023/09/22 11:48:59 Interface state was Down, requested Up, now Up
private_key=485e85d2b1d503e5af803d3fab6d7fd8ce51e1a63a3e5bdfeb7216cacb18a76b
listen_port=51821
public_key=905f8faf2481a2f85dd9e0a12960828e4aea203e1cbeb0645f49343c167b1311
endpoint=172.16.0.1:51821
allowed_ip=172.19.0.1/32
allowed_ip=fd:19::1/128
persistent_keepalive_interval=25

DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 4 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 8 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 6 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 7 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 2 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 4 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 2 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 7 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 5 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 1 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 5 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 1 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 7 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 1 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 6 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 2 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 3 - started
DEBUG: 2023/09/22 11:48:59 UAPI: Updating private key
DEBUG: 2023/09/22 11:48:59 UAPI: Updating listen port
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - UAPI: Created
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - UAPI: Updating endpoint
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - UAPI: Adding allowedip
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 5 - started
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - UAPI: Adding allowedip
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - UAPI: Updating persistent keepalive interval
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 8 - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 8 - started
DEBUG: 2023/09/22 11:48:59 Routine: handshake worker 3 - started
DEBUG: 2023/09/22 11:48:59 Routine: TUN reader - started
DEBUG: 2023/09/22 11:48:59 UDP bind has been updated
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - Starting
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - Sending keepalive packet
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 6 - started
DEBUG: 2023/09/22 11:48:59 Routine: encryption worker 4 - started
DEBUG: 2023/09/22 11:48:59 Routine: event worker - started
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - Routine: sequential receiver - started
DEBUG: 2023/09/22 11:48:59 Interface up requested
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - Sending handshake initiation
DEBUG: 2023/09/22 11:48:59 Routine: receive incoming makeReceive - started
DEBUG: 2023/09/22 11:48:59 peer(kF+P…7ExE) - Routine: sequential sender - started
DEBUG: 2023/09/22 11:48:59 Routine: decryption worker 3 - started
DEBUG: 2023/09/22 11:48:59 Routine: receive incoming makeReceive - started
DEBUG: 2023/09/22 11:48:59 Interface state was Down, requested Up, now Up
WIRETAP: 2023/09/22 11:48:59 API: API listener up
DEBUG: 2023/09/22 11:49:04 peer(kF+P…7ExE) - Handshake did not complete after 5 seconds, retrying (try 2)
DEBUG: 2023/09/22 11:49:04 peer(kF+P…7ExE) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:04 peer(WgWB…a/gg) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:09 peer(kF+P…7ExE) - Handshake did not complete after 5 seconds, retrying (try 3)
DEBUG: 2023/09/22 11:49:09 peer(kF+P…7ExE) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:09 peer(WgWB…a/gg) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:15 peer(WgWB…a/gg) - Handshake did not complete after 5 seconds, retrying (try 2)
DEBUG: 2023/09/22 11:49:15 peer(WgWB…a/gg) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:15 peer(kF+P…7ExE) - Handshake did not complete after 5 seconds, retrying (try 4)
DEBUG: 2023/09/22 11:49:15 peer(kF+P…7ExE) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:20 peer(kF+P…7ExE) - Handshake did not complete after 5 seconds, retrying (try 5)
DEBUG: 2023/09/22 11:49:20 peer(kF+P…7ExE) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:20 peer(WgWB…a/gg) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:25 peer(kF+P…7ExE) - Handshake did not complete after 5 seconds, retrying (try 6)
DEBUG: 2023/09/22 11:49:25 peer(kF+P…7ExE) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:25 peer(WgWB…a/gg) - Sending handshake initiation
DEBUG: 2023/09/22 11:49:30 peer(WgWB…a/gg) - Handshake did not complete after 5 seconds, retrying (try 2)
DEBUG: 2023/09/22 11:49:30 peer(WgWB…a/gg) - Sending handshake initiation

server in another shell:

root@6b6968ea03fe:/wiretap# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
213: eth0@if214: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:0a:01:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
217: eth1@if218: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:0a:02:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
root@6b6968ea03fe:/wiretap# wg

client:

root@5b7aba15837f:/wiretap# ./wiretap status
2023/09/22 12:03:04 failed to fetch node's configuration as peer: Get "http://::2:80/serverinfo?": dial tcp [::2]:80: i/o timeout
root@5b7aba15837f:/wiretap# wg show
interface: wiretap
  public key: kF+PrySBovhd2eChKWCCjkrqID4cvrBkX0k0PBZ7ExE=
  private key: (hidden)
  listening port: 51821

peer: 03kzMw5XaIZstJIAwFuRtmukqraa2eNAUOWpKWv1mTA=
  endpoint: 172.17.0.2:51821
  allowed ips: 10.2.0.0/16, fd:2::/64, ::2/128
  transfer: 0 B received, 3.18 KiB sent

interface: wiretap_relay
  public key: WgWBQT1h/drdEzspcbkAVdK+zw/ff553Z7lBwkYa/gg=
  private key: (hidden)
  listening port: 51820

peer: 8AIvsGagQCUWENFNVefPbi8bmzGDkLz5ukTZmrl3S2Q=
  allowed ips: 172.17.0.0/24, fd:17::/48

host:

sudo dmesg | grep wireguard
[4389322.734042] wireguard: wiretap: Invalid MAC of handshake, dropping packet from 10.1.0.3:51820
[4389327.760379] wireguard: wiretap: Invalid MAC of handshake, dropping packet from 10.1.0.3:51820
[4389332.914449] wireguard: wiretap: Invalid MAC of handshake, dropping packet from 10.1.0.3:51820

UPnP

Hello, I'm using this on a Windows server. I connect my Xbox to it via a router running WireGuard. The problem is that some applications and voice chat don't work on the Xbox. I noticed in the router logs that the Xbox opens a UPnP port. When I use xray-core with a simple client-server WireGuard configuration, everything works, BUT in the xray-core case the Xbox doesn't open the UPnP port.

High Memory usage per TCP connection

Wiretap seems to use around 14 MB of memory (RSS) for each new TCP connection. That's without kernel memory and without TCP buffers (which reside inside the kernel, not userland).

The problem is that this causes wiretap to fail (and exit, or get killed by the OOM killer).

The problem can be reproduced by establishing 88k TCP connections (sending 88k TCP SYNs) while wiretap is running on a Linux system with 2 GB of RAM:

Killed process 125774 (wiretap_linux_a) total-vm:2368460kB, anon-rss:1198968kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:3356kB oom_score_adj:0

It seems odd that the userland wiretap allocates 14 MB of memory before the TCP connection has exchanged any data.

The desirable solution would be either of the following:

  1. Reduce wiretap's memory requirement. Not much memory needs to be allocated until the SYN-ACK is received (i.e. until wiretap's connect(2) completes).
  2. When memory allocation fails, make wiretap fail the connection (send RST/FIN upstream) instead of dying: RST/FIN either the failed connection, or start freeing outstanding (not yet completed) connections, starting with the oldest, to make memory available for the most recent connection.
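
A rough sketch of option 2's eviction idea, assuming a simple FIFO of half-open (pre-SYN-ACK) connections (hypothetical, not wiretap's data structures):

package main

import "net"

// pendingQueue caps the number of half-open connections; when full, the
// oldest one is closed (freeing its buffers, with upstream seeing the
// connection fail) to make room for the newest.
type pendingQueue struct {
    conns []net.Conn
    max   int
}

func (q *pendingQueue) admit(c net.Conn) {
    if len(q.conns) >= q.max {
        oldest := q.conns[0]
        q.conns = q.conns[1:]
        oldest.Close()
    }
    q.conns = append(q.conns, c)
}

func main() {
    _ = &pendingQueue{max: 1024}
}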
