reyk / relayd
OpenBSD relayd daemon - experimental
License: Other
When issuing the command "relayctl reload" on OpenBSD 5.7 with the config below, relayd appears to hang and hogs the CPU. At that point it is no longer possible to connect.
I am not seeing anything in the debug log that stands out. Is there any way I can pin down what is going on?
After a minute or so I am seeing:
PID USERNAME PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND
27955 _relayd 64 0 1544K 3732K onproc - 2:38 99.12% relayd
5881 _relayd 64 0 1456K 3508K onproc - 2:39 99.02% relayd
7728 _relayd 64 0 1456K 3520K onproc - 0:00 99.02% relayd
The processes need to be killed manually for relayd to start again.
table <wwwhosts> { <ips> }
table <redirecthost> { 127.0.0.1 }

relay www {
    listen on egress port 80
    forward to <redirecthost> port 80
}

http protocol httpsfilter {
    match request header set "X-ClientIP" value "$REMOTE_ADDR"
    match request header append "X-Forwarded-For" value "$REMOTE_ADDR"
    match request header append "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
    match request header set "Connection" value "close"
    tls { no client-renegotiation, edh params 2048 }
}

relay wwwtls {
    listen on egress port 443 tls
    protocol httpsfilter
    forward to <wwwhosts> port 80 mode roundrobin check http "/" code 200
}
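To narrow down where the processes spin, one option is to run relayd in the foreground with debugging enabled and then trigger the reload from another terminal; these are standard relayd/relayctl invocations:

```
# -d: stay in the foreground, -vv: verbose debug output
relayd -dvv -f /etc/relayd.conf

# in a second terminal, trigger the reload and watch the debug output
relayctl reload
```

If the debug output stops at a particular stage of the reload, that is usually the place to look.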
Hello,
I set up relayd to do a redirection like this:
match request header "Host" value "www.bla.fr" <forward to host1>
[...]
forward to <host1> port 8080 check tcp
Is it also possible to redirect this host with a specific URL:
match request header "Host" value "www.bla.fr" url "/bla" <forward to host1>
[...]
forward to <host1> port 8081 check tcp
Is this way of doing things supposed to work?
If I do this, the first rule no longer works.
How can I make both work, please?
Thank you
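One approach is to combine a Host match with a path match via tags, since relayd evaluates rules in order and a later matching rule overrides the forward target. A hedged sketch (table names and the `/bla` path are taken from the question; the protocol and tag names are made up):

```
http protocol "byhost" {
    # tag all requests for this virtual host
    match request header "Host" value "www.bla.fr" tag "bla"
    # default target for the host
    pass tagged "bla" forward to <host1>
    # later rule wins: override the target for /bla URLs only
    match request path "/bla*" tagged "bla" forward to <host2>
}

relay "www" {
    listen on egress port 80
    protocol "byhost"
    forward to <host1> port 8080 check tcp
    forward to <host2> port 8081 check tcp
}
```

This is untested; the point is that both rules can coexist as long as the more specific one comes last.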
I can't get relayd to relay an HTTP persistent connection (Connection: keep-alive) to two different destinations.
I'm using the following configuration:
table <srv1> { 127.0.0.1 }
table <srv2> { 127.0.0.1 }

http protocol "httpfilter" {
    return error
    match url "www.example.org/" forward to <srv1>
    match url "www.example.org/api/" forward to <srv2>
}

relay www {
    listen on 127.0.0.1 port 80
    protocol httpfilter
    forward to <srv1> port 8080 check tcp
    forward to <srv2> port 8181 check tcp
}
Then I test with the following command:
curl http://www.example.org http://www.example.org/api/
And I have the following logs in relayd:
relay www, session 1 (1 active), 0, 127.0.0.1 -> 127.0.0.1:8080, done, GET GET
And in fact, I receive two responses from the same destination, the first one (i.e. 127.0.0.1:8080).
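With keep-alive, both requests travel over a single relayd session (note the "GET GET" in the log line), and the forward decision appears to stick for the whole session. A hedged workaround is to force one connection per request by having relayd close connections, as seen in other configs in this tracker:

```
http protocol "httpfilter" {
    return error
    # force per-request connections so each URL can be routed separately
    match request header set "Connection" value "close"
    match url "www.example.org/" forward to <srv1>
    match url "www.example.org/api/" forward to <srv2>
}
```

This trades away keep-alive, but each request then gets its own routing decision.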
This repository seems to be out of sync with upstream:
https://github.com/openbsd/src/commits/master/usr.sbin/relayd
I've been switching over to ECDSA TLS certificates as my certificates come up for renewal; however, it appears that relayd currently lacks support for them.
The reasons one might want to migrate to ECDSA certs are presented in this Cloudflare article.
If it's not too much of a headache to code, please add support for ECDSA certs.
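For reference, an ECDSA key/certificate pair of the kind relayd would need to load can be generated as follows (file names and the subject are examples):

```shell
# generate a P-256 (prime256v1) ECDSA private key
openssl ecparam -name prime256v1 -genkey -noout -out server.key
# issue a self-signed certificate for it (example subject)
openssl req -new -x509 -days 365 -key server.key -out server.crt -subj "/CN=example.org"
```

Loading `server.key`/`server.crt` into a relayd TLS listener is what currently fails for ECDSA.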
I followed this procedure, and relayd can't load any certificate.
Create the folder: mkdir -p /etc/ssl/relayd
Create the key and certificate, with passphrase "secret":
openssl req -x509 -days 365 -newkey rsa:2048 -keyout /etc/ssl/relayd/ca.key -out /etc/ssl/relayd/ca.crt
Fix the permissions: chown -R root:_relayd
and chmod -R 750
Put the configuration in /etc/relayd.conf:
http protocol httpfilter {
    # Return HTTP/HTML error pages to the client
    return error
    # Block disallowed sites
    match request label "URL filtered!"
    block request quick url "www.example.com/" value "*"
}

http protocol "http_tls" {
    tls tlsv1
    tls ca key "/etc/ssl/relayd/ca.key" password "secret"
    tls ca cert "/etc/ssl/relayd/ca.crt"
}

relay httpproxy {
    # Listen on localhost, accept diverted connections from pf(4)
    listen on $relayd_addr port $relayd_port
    protocol httpfilter
    # Forward to the original target host
    forward to destination
}

relay sslproxy {
    listen on 127.0.0.1 port 8443 tls
    protocol http_tls
    transparent forward with tls to destination
}
relayd -vv -d -f /etc/relayd.conf
Did I do something wrong?
Thanks for taking the time to read this far. Cheers!
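One thing worth checking (an assumption about the cause, based on relayd.conf(5), not verified against this setup): for a TLS listener relayd also needs a server keypair; the `tls ca key`/`tls ca cert` lines are only used for re-signing. By default the keypair is looked up by the listen address, or it can be named explicitly:

```
# default lookup for `listen on 127.0.0.1 port 8443 tls` is roughly
#   /etc/ssl/private/127.0.0.1.key  and  /etc/ssl/127.0.0.1.crt
# or name the keypair explicitly in the protocol ("inspect.example"
# is a hypothetical name):
http protocol "http_tls" {
    tls keypair "inspect.example"   # loads /etc/ssl/inspect.example.crt
                                    # and /etc/ssl/private/inspect.example.key
    tls ca key "/etc/ssl/relayd/ca.key" password "secret"
    tls ca cert "/etc/ssl/relayd/ca.crt"
}
```

If no such keypair exists, "can't load any certificate" would be the expected failure.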
Hi Reyk,
Could you please consider adding a redirect option to relayd?
Currently I am forwarding to httpd, which works fine, but it would be great if this could be handled in relayd directly, for example redirecting HTTP to HTTPS.
My current relayd.conf for port 80:
table <redirecthost> { 127.0.0.1 }

relay www {
    listen on egress port 80
    forward to <redirecthost> port 80
}
The only thing that I have in httpd.conf is:
server "<hostname>" {
    listen on 127.0.0.1 port 80
    block return 301 "https://$SERVER_NAME$DOCUMENT_URI"
}
Or is this already possible and I just didn't do my homework?
Hi,
I see that relayd is being removed from some important downstream projects, like pfSense. Is there work in progress on this, or is it a dead end?
Best regards
I saw the question "mixing HTTP and HTTPS forwards in one relay" in another document, but in that context it was impossible to understand the complete picture.
In my case I have two relays, one for HTTP and one for HTTPS (incoming/server side). The HTTP relay forwards to my static httpd server and to my app server, both as plain HTTP. In my HTTPS relay I forward static requests "with tls", and app-server requests as plain HTTP (no tls).
The challenge seems to be that the HTTPS relay is always trying to connect to <blast> with TLS.
One thing I have not figured out: what is the default "forward to" when none of my match rules matches?
The sample code had a
pass tagged ok forward to <www>
but when I tried it, the forward target seemed to be overwritten to <blast> ("last match wins"? if so, the order of my match rules is wrong):
pass tagged ok forward to <www>
pass tagged ok forward to <blast>
Here is most of my relayd.conf:
table <www> { 127.0.0.1 }
table <blast> { 127.0.0.1 }

http protocol "http" {
    match request path "/.well-known/acme-challenge/*" forward to <www>
    match request url "my.appserver.com" forward to <blast>
    match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match request header set "X-Forwarded-Port" value "$REMOTE_PORT"
}

http protocol "https" {
    #tls ca file "/etc/ssl/cert.pem"
    tls ciphers $list
    tls keypair "my.appserver.com"
    tls keypair "webserver.com"
    return error
    match request header "Path" value "/.well-known/acme-challenge/*" forward to <www>
    match request header "Host" value "my.appserver.com" forward to <blast>
    match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match request header set "X-Forwarded-Port" value "$REMOTE_PORT"
    match response header set "Content-Security-Policy" value \
        "default-src 'self'"
    match response header set "Referrer-Policy" value "no-referrer"
    match response header set "Strict-Transport-Security" value \
        "max-age=15552000; includeSubDomains; preload"
    match response header set "X-Content-Type-Options" value "nosniff"
    match response header set "X-Frame-Options" value "SAMEORIGIN"
    match response header set "X-XSS-Protection" value "1; mode=block"
    match method POST tag ok
    match method GET tag ok
    match method HEAD tag ok
    block
    pass tagged ok
    #forward to <www>
    #pass tagged ok forward to <blast>
}

relay "https" {
    listen on $ipv4 port https tls
    protocol "https"
    forward with tls to <www> port 8443 check https "/" code 200
    forward to <blast> port 1900 check http "/" code 401
}

relay "http" {
    listen on $ipv4 port http
    protocol "http"
    forward to <www> port 8080 check http "/" code 302
    forward to <blast> port 1900 check http "/" code 401
}
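Two hedged observations, based on relayd.conf(5) rather than on testing this exact config: the first `forward to` in the relay block acts as the default target when no rule selects one, and rules are evaluated in order, with a later matching rule overriding the forward target ("last match wins"). So the catch-all should come before the host-specific override, roughly:

```
    block
    match method POST tag ok
    match method GET tag ok
    match method HEAD tag ok
    # catch-all first: tagged sessions go to the static server by default
    pass tagged ok forward to <www>
    # later rule wins: override the target only for the app server's Host
    match request header "Host" value "my.appserver.com" tagged ok forward to <blast>
```

With the order reversed, the <blast> rule would indeed clobber the <www> default for every tagged session.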
I have a super simple (sanitised) relayd.conf
ext_ip="192.168.1.1"

table <t-http> { 127.0.0.1 }
table <t-https> { 127.0.0.1 }

http protocol "p-https" {
    tls session tickets
    tls keypair domain.example
    tls ca file "/etc/ssl/cert.pem"
    http websockets
    tcp { nodelay, sack, socket buffer 65536, backlog 100 }
    return error
    block
    pass request path log "/http*" forward to <t-http>
    pass request path log "/https*" forward to <t-https>
    pass response
}

relay "tlsforward" {
    listen on $ext_ip port 443 tls
    protocol "p-https"
    forward to <t-http> port 81
    forward with tls to <t-https> port 82
}
The problem is with the second-to-last line.
If I remove "with tls", then requests to port 82 are forwarded unencrypted, and a curl test reports "curl: (52) Empty reply from server".
However, if I keep "with tls", the requests to port 81 go out encrypted as well, and fail with the following messages in the relayd logs: "SSL routines:ST_CONNECT:tlsv1 alert protocol version" and "TLS handshake error: handshake failed".
There should not be any TLS handshake on port 81, because the backend at port 81 is HTTP-only.
This issue was first discussed at openbsd-misc.
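If relayd really does apply "with tls" to every forward in a relay, a possible workaround (an untested sketch; the hop relay, its table, and port 8082 are made up) is to keep the public relay plaintext-only and add a second local relay that wraps the TLS backend:

```
table <t-https-hop> { 127.0.0.1 }

relay "tlsforward" {
    listen on $ext_ip port 443 tls
    protocol "p-https"
    forward to <t-http> port 81
    # plain connection to a local hop instead of the TLS backend
    forward to <t-https-hop> port 8082
}

# local TCP relay that adds TLS towards the real backend on port 82
relay "tls-hop" {
    listen on 127.0.0.1 port 8082
    forward with tls to <t-https> port 82
}
```

The protocol's forward rules would need to point at the hop table accordingly; the cost is one extra local connection per session.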
Anybody managed to get relayd with IPv6 and TLS going? I am not able to figure out what the certificate name should be for IPv6. Tried multiple formats and they all seem to fail.
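For what it's worth, relayd.conf(5) derives the default certificate file name from the listen address, which is awkward to spell for IPv6; an explicit `tls keypair` name sidesteps the question entirely. A sketch with hypothetical names and addresses:

```
http protocol "v6tls" {
    # load /etc/ssl/example.org.crt and /etc/ssl/private/example.org.key
    # instead of files named after the IPv6 listen address
    tls keypair "example.org"
}

relay "www6" {
    listen on 2001:db8::1 port 443 tls
    protocol "v6tls"
    forward to <webhosts> port 80
}
```

Whether the address-derived default can be made to work for IPv6 at all is exactly the open question here.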
Relayd TLS inspection apparently does not support SNI.
The question was raised on openbsd-misc, and no one provided further advice about how it might be configured.
https://marc.info/?l=openbsd-misc&m=162161980321486&w=2
It appears that other tools can be used, but of course, the preference is to use built-in tools where possible. I'm afraid my programming skills are a bit weak, and thus cannot provide a diff for improving relayd. I was hoping that this would be a relatively easy update, or that I missed something in the documentation. Alternatively, if the update is infeasible, I propose a slight change to the documentation:
*** relayd.conf.8.orig Fri May 21 13:19:06 2021
--- relayd.conf.8 Fri May 21 13:23:09 2021
*** 500,506 ****
filter TLS connections as a man-in-the-middle. This combined
mode is also called "TLS inspection". The configuration requires
additional X.509 certificate settings; see the ca key description
! in the PROTOCOLS section for more details.
When configured for "TLS inspection" mode, relayd(8) will listen for
incoming connections which have been diverted to the local socket by PF.
--- 500,510 ----
filter TLS connections as a man-in-the-middle. This combined
mode is also called "TLS inspection". The configuration requires
additional X.509 certificate settings; see the ca key description
! in the PROTOCOLS section for more details. Note that this feature
! currently does not support Server Name Indication (SNI), making
! it inappropriate for use as a general Internet TLS inspection
! gateway.
!
When configured for "TLS inspection" mode, relayd(8) will listen for
incoming connections which have been diverted to the local socket by PF.
SNI support would be a fantastic addition, and maybe easier now that httpd has SNI enabled? This is preventing me from using relayd on a multi-site hosting solution.
I know this was on the roadmap a while back, when time was available, but thought I would re-ping to see where it is.
Thanks!
If a host is handling two ports (e.g. http and https), "relayctl host disable [ip address]" only disables the first instance, not the second. Workaround: use host ids instead.
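The workaround can be driven from relayctl's own output; the ids below are illustrative and would come from the listing:

```
# list hosts with their ids, then disable each instance by id
relayctl show hosts
relayctl host disable 1
relayctl host disable 2
```

Disabling by id hits exactly one instance, which is why it avoids the first-instance-only behaviour seen with addresses.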
Hi.
This is relayd on OpenBSD amd64/current.
I am trying to use relayd to implement a proxy-pass like functionality.
So far, no luck. I always end up on ...
table <localhost> { 127.0.0.1 }
table <srv1> { srv1.domain.tld }

http protocol vhost {
    match url "srv2.domain.tld" tag srv2 forward to <localhost>
    match request header set "Host" value srv2.domain.tld tagged srv2
    match url "srv1.domain.tld" tag srv1 forward to <srv1>
    match request header set "Host" value srv1.domain.tld tagged srv1
}
relay vhost {
    listen on 0.0.0.0 port 443 ssl
    protocol vhost
    forward to <localhost> port www
    forward with ssl to <srv1> port https
}
Right now, a syntax error just reports a line number followed by "syntax error", which makes debugging pretty difficult. It would be nice if it also clarified where on the line the parsing failed.
For example, accidentally copying a "forward to" line from a redirect into a relay causes a syntax error, because checks are required in redirects but not in relays. This is very easy to miss visually, and simply saying "syntax error" doesn't give enough information.
Hi,
I'm seeing over 2,200 CLOSE_WAIT connections on my server (a VPS running under KVM). After killing all the relayd processes and restarting relayd, the number of CLOSE_WAIT connections slowly climbs again.
I'm running OpenBSD 5.6-stable with the latest 5.6-stable patch branch fetched and compiled today.
Here's some info:
$ uname -a
OpenBSD 5.6 GENERIC.MP#1 amd64
$ relayctl show summary
Id Type Name Avlblty Status
1 relay www active
1 table webhosts:80 active (1 hosts)
1 host 10.8.0.2 98.69% up
2 relay wwwssl active
2 table webhosts:80 active (1 hosts)
2 host 10.8.0.2 98.59% up
$ ps aux|grep relay
_relayd 16788 0.0 0.3 1624 3156 ?? S 3:02AM 0:15.65 relayd: pfe (relayd)
_relayd 12960 0.0 0.9 6620 9356 ?? S 3:02AM 0:19.04 relayd: relay (relayd)
_relayd 9157 0.0 0.3 1556 3188 ?? I 3:02AM 0:00.02 relayd: ca (relayd)
_relayd 26941 0.0 0.3 1560 3184 ?? I 3:02AM 0:00.03 relayd: ca (relayd)
_relayd 1633 0.0 0.3 1548 3164 ?? I 3:02AM 0:00.02 relayd: ca (relayd)
_relayd 10017 0.0 0.3 1548 3088 ?? I 3:02AM 0:00.01 relayd: ca (relayd)
_relayd 23462 0.0 0.3 1552 3152 ?? I 3:02AM 0:00.02 relayd: ca (relayd)
_relayd 21549 0.0 0.3 1268 2820 ?? S 3:02AM 0:30.23 relayd: hce (relayd)
_relayd 30665 0.0 0.9 6572 9384 ?? S 3:02AM 0:18.71 relayd: relay (relayd)
_relayd 5491 0.0 0.9 6676 9500 ?? S 3:02AM 0:18.86 relayd: relay (relayd)
_relayd 20565 0.0 0.9 6648 9452 ?? S 3:02AM 0:19.12 relayd: relay (relayd)
_relayd 29017 0.0 0.9 6664 9492 ?? S 3:02AM 0:19.40 relayd: relay (relayd)
$ tail /var/log/daemon
Mar 23 23:23:41 obsd relayd[20565]: relay www, session 1329 (501 active), 0,xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
Mar 23 23:23:41 obsd relayd[20565]: relay www, session 1330 (501 active), 0, xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
Mar 23 23:24:39 obsd relayd[12960]: relay www, session 1351 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:26:38 obsd relayd[5491]: relay www, session 1335 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:30:39 obsd relayd[30665]: relay www, session 1320 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:31:39 obsd relayd[29017]: relay www, session 1340 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:31:40 obsd relayd[29017]: relay www, session 1341 (501 active), 0, xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
Mar 23 23:31:40 obsd relayd[29017]: relay www, session 1342 (501 active), 0,xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
Mar 23 23:34:39 obsd relayd[20565]: relay www, session 1331 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:34:40 obsd relayd[12960]: relay www, session 1352 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:34:40 obsd relayd[20565]: relay www, session 1332 (501 active), 0,xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
Mar 23 23:34:40 obsd relayd[12960]: relay www, session 1353 (501 active), 0,xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
Mar 23 23:37:38 obsd relayd[5491]: relay www, session 1336 (501 active), 0,xxx.xxx.xxx.xxx -> :0, hard timeout
Mar 23 23:37:39 obsd relayd[5491]: relay www, session 1337 (501 active), 0,xxx.xxx.xxx.xxx -> 10.8.0.2:80, last write (done), GET
$ cat /etc/relayd.conf
#
# Relayd
#

interval 10
timeout 1000
prefork 5
log updates

ext_addr="xxx.xxx.xxx.xxx"
server1="10.8.0.2"
table <webhosts> { $server1 }

http protocol "www" {
    match header append "X-Forwarded-For" value "$REMOTE_ADDR"
    match header append "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
}

relay "www" {
    listen on $ext_addr port http
    protocol "www"
    #forward to <webhosts> check http "/" code 200
    forward to <webhosts> check tcp
}

http protocol "httpssl" {
    match header append "X-Forwarded-For" value "$REMOTE_ADDR"
    match header append "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
    match header set "Connection" value "close"
    # Various TCP performance options
    tcp { nodelay, sack, socket buffer 65536, backlog 128 }
    ssl { no sslv2, sslv3, tlsv1, ciphers HIGH }
    ssl session cache disable
}

relay "wwwssl" {
    # Provide SSL termination
    listen on $ext_addr port 443 ssl
    protocol "httpssl"
    # Forward to hosts in the webhosts table
    forward to <webhosts> port http check tcp
}
$ top
load averages: 0.80, 0.90, 0.58
50 processes: 49 idle, 1 on processor
CPU0 states: 0.0% user, 0.0% nice, 0.2% system, 0.0% interrupt, 99.8% idle
CPU1 states: 0.2% user, 0.0% nice, 0.0% system, 0.0% interrupt, 99.8% idle
Memory: Real: 70M/331M act/tot Free: 650M Cache: 196M Swap: 0K/1264M
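For tracking the leak: CLOSE_WAIT means the remote side has closed but the local process has not, so counting these sockets over time shows whether relayd is failing to close them. Standard netstat usage (column positions may differ slightly between systems):

```
# count sockets stuck in CLOSE_WAIT
netstat -an | grep -c CLOSE_WAIT

# break the count down by local address/port
netstat -an | grep CLOSE_WAIT | awk '{print $4}' | sort | uniq -c
```

If the per-port breakdown points only at the relay listeners, the leak is on relayd's side rather than the backend's.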
Hi,
I just set up a small reverse-proxy-like configuration for some of my Python (werkzeug) instances.
# Macros
#
ext="0.0.0.0"

table <local1> { 127.0.0.1 }
table <local2> { 127.0.0.1 }
table <local3> { 127.0.0.1 }
table <local4> { 127.0.0.1 }
table <local5> { 127.0.0.1 }

http protocol httpPara {
    # tcp {nodelay, sack, socket buffer 65535, backlog 128}
    return error
    match request path "/sfw/**" forward to <local1>
    match request path "/nsfw/**" forward to <local2>
    match request path "/kadsen/**" forward to <local3>
    match request path "/pr0/**" forward to <local4>
    match request path "/demo/**" forward to <local5>
    pass
}

relay nichtparasoup {
    listen on $ext port 80
    protocol "httpPara"
    forward to <local1> port 5000 check http '/sfw/status' code 200
    forward to <local2> port 5002 check http '/nsfw/status' code 200
    forward to <local3> port 5003 check http '/kadsen/status' code 200
    forward to <local4> port 5004 check http '/pr0/status' code 200
    forward to <local5> port 5006 check http '/demo/status' code 200
}
The problem is that relayd seems to cut off the HTML content after some number of bytes.
I'm not sure where this comes from. In the browser I see uninterpreted HTML that ends like this:
<dt>next image</dt>
<dd class="button">k</dd><dt>next-to-last image<
Where it should print:
<dt>next image</dt>
<dd class="button">k</dd><dt>next-to-last image</dt></dl><span class="button" id="keys">hot keys</span></footer></body>
</html>
Curling the instance directly (on the backend port, not via relayd) shows:
curl -v --silent http://foo.example.org:5003/kadsen/ >/dev/null
* Trying 11.11.11.11...
* Connected to foo.example.org (11.11.11.11) port 5003 (#0)
> GET /kadsen/ HTTP/1.1
> Host: foo.example.org:5003
> User-Agent: curl/7.47.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 57338
< Server: Werkzeug/0.11.10 Python/2.7.11
< Date: Thu, 18 Aug 2016 11:53:48 GMT
<
{ [14338 bytes data]
* Closing connection 0
And the same curl through relayd (notice the bytes transferred):
$ curl -v --silent http://foo.example.org/kadsen/ >/dev/null
* Trying 11.11.11.11...
* Connected to foo.example.org (11.11.11.11) port 80 (#0)
> GET /kadsen/ HTTP/1.1
> Host: foo.example.org
> User-Agent: curl/7.47.0
> Accept: */*
>
{ [40 bytes data]
* Connection #0 to host foo.example.org left intact
Anyway, I don't get a working website delivered through relayd and the http protocol. Not sure what to do now. I took a look at the tcp socket buffer configuration, but that does not seem to be the right place, or is it?
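To quantify the truncation, curl can report the exact number of body bytes received on both paths (host names and ports are the ones from the example above):

```
# bytes received directly from the backend vs. through relayd
curl -so /dev/null -w '%{size_download}\n' http://foo.example.org:5003/kadsen/
curl -so /dev/null -w '%{size_download}\n' http://foo.example.org/kadsen/
```

The first number should match the backend's Content-Length (57338 here); a consistently smaller second number would confirm relayd is dropping the tail of the body.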