kristapsdz / kcgi
minimal CGI and FastCGI library for C/C++
Home Page: https://kristaps.bsd.lv/kcgi
License: ISC License
I installed kcgi using pkg_add kcgi
Attempting to compile a test program with:
cc -static -g -w -Wall -o env.cgi env.c -lkcgi
I get a fatal error that kcgi.h cannot be found.
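One likely cause (an assumption worth checking with pkg_info -L kcgi) is that the package installed its files under /usr/local, which the base compiler does not search by default. A hedged sketch of the adjusted compile line:

```shell
# Assumption: pkg_add placed kcgi.h and libkcgi.a under /usr/local;
# verify the actual paths with: pkg_info -L kcgi
cc -static -g -W -Wall \
    -I/usr/local/include -L/usr/local/lib \
    -o env.cgi env.c -lkcgi -lz
```

The trailing -lz is also an assumption: kcgi links against zlib when built with it, and static linking then needs it spelled out.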
Introduced by 0a773fe, as strtonum handles base 10 only. This results in a miscalculated digest when the nonce-count is greater than 9, because kauth_count defaults to zero when strtonum fails.
I've begun a fastcgi project, but it appears that some features of khttp_parse, such as pages and validator keys, aren't supported in khttp_fcgi_parse. It would be great for fastcgi users to have those same features.
Thanks for this great project.
I would like to debug my kcgi application with a debugger like gdb or lldb. Since I'm not running the application directly but instead serving it with kfcgi, I had a really hard time attaching a debugger to it.
After some research, I discovered a feature named "attach to local process" supported by my IDE, CLion, and it seems promising. However, I still have no idea which local process to attach to, because there are so many!
Do you think attaching to a local process is a feasible way of getting a debugger to work with my kcgi application? If not, would remote debugging be a better approach?
For your reference, here is the script I'm using to automate deployment in my dev environment. Note that I'm using variable mode (-r) due to a macOS bug.
#!/usr/bin/env bash
mkdir -p /usr/local/var/www/fcgi-bin /usr/local/var/www/run
install -m 0555 cmake-build-debug/kernod /usr/local/var/www/fcgi-bin
echo $LOGIN_PASSWD | sudo -S : 2>/dev/null # do nothing
sudo /usr/local/sbin/kfcgi -d -r \
-s /usr/local/var/www/run/httpd.sock \
-p /usr/local/var/www \
-u `whoami` -U `whoami` -- /fcgi-bin/kernod
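For the "which process to attach to" question, one approach (a sketch; the binary name kernod is taken from the script above) is to list the worker processes kfcgi forked and attach to one of them:

```shell
# List candidate workers: kfcgi forks one process per worker
pgrep -l kernod

# Attach lldb to the most recently started worker; sudo is needed
# because kfcgi dropped privileges to another user
sudo lldb -p "$(pgrep -n kernod)"
```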
khttp_fcgi_test(3) returns 0 rather than 1 when using a kfcgi(8) variable-sized worker pool:
#include <stdio.h>
#include <sys/types.h>
#include <stdarg.h>
#include <stdint.h>
#include <kcgi.h>
int main(void) {
	printf("%d\n", khttp_fcgi_test());
}
$ freebsd-version
12.0-RELEASE
$ pkg info -o kcgi
kcgi-0.12.1 www/kcgi
$ sudo kfcgi -d -s test.sock -p $PWD -n 1 -- ./repro
1
kfcgi 57318 - - worker unexpectedly exited
$ sudo kfcgi -d -s test.sock -p $PWD -r -n 1 -- ./repro
0
kfcgi 57321 - - worker unexpectedly exited
I have been using this library for over a year now for a hobby project and I have been very happy with it.
Today I am refactoring my CSS handling and hit a snag: I am trying to put all my CSS in the STYLE tag within the HEAD tag. I just learned about child combinators, but when I try to use them, the '>' character becomes '&gt;'. I am using khtml_puts() to output the CSS string after the STYLE tag. Is there a way to emit text without it getting HTML-encoded?
Hi,
First, thank you for this awesome library. I enjoy it a lot.
I just realized there are no khttp_(v)printf functions for printf(3)-style formatted output. If you don't mind, I'd like to add them, unless you already have this on some TODO list.
I am using current master, compiling with g++ 5.4.1 20160904 on Ubuntu 14.04. I use the sample with an additional cookie set:
struct kreq r;
const char *page = "index";
if (KCGI_OK != khttp_parse(&r, NULL, 0, &page, 1, 0))
return(EXIT_FAILURE);
khttp_head(&r, kresps[KRESP_STATUS], "%s", khttps[KHTTP_200]);
khttp_head(&r, kresps[KRESP_CONTENT_TYPE], "%s", kmimetypes[KMIME__MAX == r.mime ?
KMIME_APP_OCTET_STREAM : r.mime]);
const int seconds = 60 * 60 * 8;
khttp_head(&r, kresps[KRESP_SET_COOKIE],
"%s=%s; Path=/; max-age=%i; secure; HttpOnly", "sessionid", "0xdeadbeef", seconds);
khttp_body(&r);
khttp_puts(&r, "Hello, world!");
khttp_free(&r);
return(EXIT_SUCCESS);
When I request the page with curl -I, I get duplicated cookies (always four times):
HTTP/1.1 200 OK
date: Tue, 05 Dec 2017 10:35:42 GMT
server: Apache/2.4.7 (Ubuntu)
set-cookie: sessionid=0xdeadbeef; Path=/; max-age=28800; secure; HttpOnly
set-cookie: sessionid=0xdeadbeef; Path=/; max-age=28800; secure; HttpOnly
set-cookie: sessionid=0xdeadbeef; Path=/; max-age=28800; secure; HttpOnly
set-cookie: sessionid=0xdeadbeef; Path=/; max-age=28800; secure; HttpOnly
content-length: 13
keep-alive: timeout=5, max=100
content-type: text/html
X-BACKEND: apps-proxy
I expect the set-cookie header only once.
304 responses should not have a body. However, when kcgi enables gzip, it compresses the empty body and responds with a 14-byte gzip header.
It probably shouldn't compress an empty body regardless of the response code.
I was running make regress
on macOS High Sierra 10.13.5 (17F77), when my screen suddenly turned black before the entire operating system crashed, which was really a big surprise to me.
I managed to create a core dump, which you might find helpful. Let me know if further information is required.
test-abort-validator_2018-06-12-154348_sunqingyaos-MacBook-Air.crash.txt
Have a look here: stackoverflow
Thank you.
Hi,
I have to mention first that this is compiled and executed on an old Raspberry Pi (armv6l) running Arch Linux (4.14.27-1-ARCH); I can provide details about the system and so on if needed.
At first I was getting this error when launching make regress, on the test-fcgi-abort-validator file; here is what it looked like:
[xse@rpi kcgi-0.10.1]$ ./regress/test-fcgi-abort-validator
fcgi.c:685: application signalled
read: Connection reset by peer
fcgi_hdr_read: header
wrappers.c:184: child signal 31
[xse@rpi kcgi-0.10.1]$
Since everything else was building fine, I thought it must have been an issue with curl or something like that, but then I realized I couldn't launch kfcgi either:
[xse@rpi http]$ sudo kfcgi -d -v -U http -u http -s /srv/http/run/httpd.sock -p / -- /srv/http/kcgi
fcgi.c:685: application signalled
wrappers.c:184: child signal 31
kfcgi[9083]: worker unexpectedly exited
wrappers.c:184: child signal 31
wrappers.c:184: child signal 31
wrappers.c:184: child signal 31
wrappers.c:184: child signal 31
[xse@rpi http]$
I'm a bit lost here; maybe I'm doing something wrong and it's obvious, but I don't see it. I tried a whole bunch of different parameters for kfcgi but nothing seems to do the trick.
Here is kfcgi's strace if it helps : https://gist.github.com/xse/2855184cafd3c03006aabe92ab91f55c
setgid32(33) = 0
setuid32(33) = 0
setuid32(0) = -1 EPERM (Operation not permitted)
I might be wrong and this is probably totally unrelated, but maybe it's worth mentioning; could this be it?
It might also be related to the fact that I didn't set up a proper chroot, but I just wanted to try and see if it worked, as the man page says: "use the root directory if you insist on being insecure."
The thing is, if the error were related to that, I don't get why I see the exact same error on test-fcgi-abort-validator; that makes no sense to me.
By the way, here is the result of strace -ff on test-fcgi-abort-validator.
The file /srv/http/kcgi is built as stated here: https://kristaps.bsd.lv/kcgi/tutorial2.html, minus a ")" on the while-loop line that otherwise causes a compile error.
I'm hoping someone has an idea about how to solve this.
Although I'm fine with basics like modifying Makefile.configure to make it fit Arch Linux and so on, I'm stuck here and can't find a solution by myself at the moment; I'm too confused, since there seem to be so many possible reasons. I'll keep looking into it, though.
If there is anything I can provide to narrow down the issue, I'll be more than happy to do so.
Thank you for your time.
I only need to handle a low volume of requests, and in fact there is a bottleneck in the next link in the chain (kept simple by design) that the FCGI backend passes on to. So I need only one worker process.
Therefore I call kfcgi with -n 1 (I see that -N is useless without -r). However, it launches 3 worker processes. If I use -n 4, I get 12 workers.
Why is it doing something other than what I requested? It would be nice not to have the extra processes using up memory when they are unused, and it would simplify knowing which process ID is the active one for potential debugging purposes (which doesn't seem to work yet, but that's a separate issue to investigate).
When compiling with GNU Make, I suddenly get an error: Makefile:309: *** missing separator.
Do you know what happens there, and can it be avoided by tweaking the makefile? This did not happen with the previous version 0.10.7, and BSD make was not available to me at the time either. I hope this is not an unavoidable compatibility issue.
Hello,
This isn't a direct issue with kcgi, but I think it could be valuable to add some information on how to use FastCGI with kcgi-based programs to the official documentation.
kcgi makes use of PATH_INFO, which seems... to be poorly implemented in most web servers.
So far, I have been unable to get FastCGI working with my kcgi program. I've tried the following, but the program never had PATH_INFO correctly set:
$HTTP["host"] == "myhostname.tld" {
    fastcgi.server = (
        "/" => ((
            "socket" => "/path/to/socket",
            "broken-scriptfilename" => "enable",
            "fix-root-scriptname" => "enable"
        ))
    )
}
With nginx it works a bit better, but there is not much documentation regarding PATH_INFO handling. I needed to manually add fastcgi_param PATH_INFO $uri, but I'm not even sure that is appropriate.
location / {
    include /etc/nginx/fastcgi_params;
    fastcgi_param PATH_INFO $uri;
    fastcgi_pass unix:/home/markand/dev/paster2/paster.sock;
}
What do you think about adding a specific deployment page with those webservers?
fpclassify function is absent :(
I have no idea what that actually means. The /tmp folder is there, and it also creates a demo_app.sock file:
root@orangepizero:/etc/demo_app# kfcgi -s /tmp/demo_app.sock -p /var/www -- /var/www/fcgi-bin/app
daemon: Bad file descriptor
Hello,
For some reason, I'm unable to build kcgi 0.12.5 anymore on Alpine Linux (aarch64+musl). It ends in an error:
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -c sandbox-seccomp-filter.c
In file included from sandbox-seccomp-filter.c:46:
sandbox-seccomp-filter.c:77:34: error: '__NR_open' undeclared here (not in a function)
77 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:92:2: note: in expansion of macro 'SC_DENY'
92 | SC_DENY(open, EACCES),
| ^~~~~~~
sandbox-seccomp-filter.c:92:2: warning: missing initializer for field 'k' of 'const struct sock_filter' [-Wmissing-field-initializers]
In file included from sandbox-seccomp-filter.c:46:
/usr/include/linux/filter.h:28:8: note: 'k' declared here
28 | __u32 k; /* Generic multiuse field */
| ^
sandbox-seccomp-filter.c:80:34: error: '__NR_poll' undeclared here (not in a function)
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:124:2: note: in expansion of macro 'SC_ALLOW'
124 | SC_ALLOW(poll),
| ^~~~~~~~
sandbox-seccomp-filter.c:124:2: warning: missing initializer for field 'k' of 'const struct sock_filter' [-Wmissing-field-initializers]
In file included from sandbox-seccomp-filter.c:46:
/usr/include/linux/filter.h:28:8: note: 'k' declared here
28 | __u32 k; /* Generic multiuse field */
| ^
sandbox-seccomp-filter.c:80:34: error: '__NR_select' undeclared here (not in a function)
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:128:2: note: in expansion of macro 'SC_ALLOW'
128 | SC_ALLOW(select),
| ^~~~~~~~
sandbox-seccomp-filter.c:128:2: warning: missing initializer for field 'k' of 'const struct sock_filter' [-Wmissing-field-initializers]
In file included from sandbox-seccomp-filter.c:46:
/usr/include/linux/filter.h:28:8: note: 'k' declared here
28 | __u32 k; /* Generic multiuse field */
| ^
sandbox-seccomp-filter.c:77:34: warning: initialization of 'unsigned int' from 'const struct sock_filter *' makes integer from pointer without a cast [-Wint-conversion]
77 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:162:2: note: in expansion of macro 'SC_DENY'
162 | SC_DENY(open, EACCES),
| ^~~~~~~
sandbox-seccomp-filter.c:77:34: note: (near initialization for 'preauth_work[4].k')
77 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:162:2: note: in expansion of macro 'SC_DENY'
162 | SC_DENY(open, EACCES),
| ^~~~~~~
sandbox-seccomp-filter.c:77:34: error: initializer element is not constant
77 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:162:2: note: in expansion of macro 'SC_DENY'
162 | SC_DENY(open, EACCES),
| ^~~~~~~
sandbox-seccomp-filter.c:77:34: note: (near initialization for 'preauth_work[4].k')
77 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:162:2: note: in expansion of macro 'SC_DENY'
162 | SC_DENY(open, EACCES),
| ^~~~~~~
sandbox-seccomp-filter.c:162:2: warning: missing initializer for field 'k' of 'const struct sock_filter' [-Wmissing-field-initializers]
In file included from sandbox-seccomp-filter.c:46:
/usr/include/linux/filter.h:28:8: note: 'k' declared here
28 | __u32 k; /* Generic multiuse field */
| ^
sandbox-seccomp-filter.c:80:34: warning: initialization of 'unsigned int' from 'const struct sock_filter *' makes integer from pointer without a cast [-Wint-conversion]
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:189:2: note: in expansion of macro 'SC_ALLOW'
189 | SC_ALLOW(poll),
| ^~~~~~~~
sandbox-seccomp-filter.c:80:34: note: (near initialization for 'preauth_work[34].k')
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:189:2: note: in expansion of macro 'SC_ALLOW'
189 | SC_ALLOW(poll),
| ^~~~~~~~
sandbox-seccomp-filter.c:80:34: error: initializer element is not constant
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:189:2: note: in expansion of macro 'SC_ALLOW'
189 | SC_ALLOW(poll),
| ^~~~~~~~
sandbox-seccomp-filter.c:80:34: note: (near initialization for 'preauth_work[34].k')
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:189:2: note: in expansion of macro 'SC_ALLOW'
189 | SC_ALLOW(poll),
| ^~~~~~~~
sandbox-seccomp-filter.c:189:2: warning: missing initializer for field 'k' of 'const struct sock_filter' [-Wmissing-field-initializers]
In file included from sandbox-seccomp-filter.c:46:
/usr/include/linux/filter.h:28:8: note: 'k' declared here
28 | __u32 k; /* Generic multiuse field */
| ^
sandbox-seccomp-filter.c:80:34: warning: initialization of 'unsigned int' from 'const struct sock_filter *' makes integer from pointer without a cast [-Wint-conversion]
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:193:2: note: in expansion of macro 'SC_ALLOW'
193 | SC_ALLOW(select),
| ^~~~~~~~
sandbox-seccomp-filter.c:80:34: note: (near initialization for 'preauth_work[36].k')
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:193:2: note: in expansion of macro 'SC_ALLOW'
193 | SC_ALLOW(select),
| ^~~~~~~~
sandbox-seccomp-filter.c:80:34: error: initializer element is not constant
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:193:2: note: in expansion of macro 'SC_ALLOW'
193 | SC_ALLOW(select),
| ^~~~~~~~
sandbox-seccomp-filter.c:80:34: note: (near initialization for 'preauth_work[36].k')
80 | BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_ ## _nr, 0, 1), \
| ^~~~~
sandbox-seccomp-filter.c:193:2: note: in expansion of macro 'SC_ALLOW'
193 | SC_ALLOW(select),
| ^~~~~~~~
sandbox-seccomp-filter.c:193:2: warning: missing initializer for field 'k' of 'const struct sock_filter' [-Wmissing-field-initializers]
In file included from sandbox-seccomp-filter.c:46:
/usr/include/linux/filter.h:28:8: note: 'k' declared here
28 | __u32 k; /* Generic multiuse field */
| ^
*** Error code 1
Stop.
bmake: stopped in /build/kcgi-0.12.5
If I understand correctly, this is because these syscalls do not exist on aarch64, as someone pointed out on another project.
Unfortunately I don't understand the purpose of this code, so I can't provide much help.
I'm seeing some strange behavior from the simple program below:
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <kcgi.h>

int
main(void)
{
	struct kreq r;
	struct kfcgi *fcgi;

	if (khttp_fcgi_init(&fcgi, NULL, 0, NULL, 0, 0) != KCGI_OK)
		return 1;

	for (;;) {
		if (khttp_fcgi_parse(fcgi, &r) != KCGI_OK) {
			khttp_free(&r);
			break;
		}
		khttp_head(&r, kresps[KRESP_STATUS], "%s", khttps[KHTTP_200]);
		khttp_head(&r, kresps[KRESP_CONTENT_TYPE], "%s", kmimetypes[r.mime]);
		khttp_body(&r);
		khttp_puts(&r, "test\n");
		khttp_free(&r);
	}
	khttp_fcgi_free(fcgi);
	return 0;
}
This program produces the following http response when run on openbsd-current:
$ curl -i http://my.host/x
HTTP/1.1 200 OK
Connection: keep-alive
Date: Fri, 08 Apr 2016 15:38:14 GMT
Server: OpenBSD httpd
Transfer-Encoding: chunked
Status: 200 OK
Content-Type: text/html
test
$
There is a spurious line break before the Status header.
I'm still working on debugging this, but so far I'm not even sure whether the problem is in kfcgi or in httpd. Any suggestions would be appreciated. Thanks!
Hi,
I'm a fan of mustache templates and have found a C library (https://gitlab.com/jobol/mustach) that implements mustache. That lib comes bundled with a wrapper for json-c.
As best I can figure, writing a wrapper for kcgi's generated JSON is a good way to use this library. A mustache wrapper implements this interface (https://gitlab.com/jobol/mustach/blob/master/mustach.h#L53-59):
struct mustach_itf {
	int (*start)(void *closure);
	int (*put)(void *closure, const char *name, int escape, FILE *file);
	int (*enter)(void *closure, const char *name);
	int (*next)(void *closure);
	int (*leave)(void *closure);
};
where
* @start: Starts the mustach processing of the closure
* 'start' is optional (can be NULL)
*
* @put: Writes the value of 'name' to 'file' with 'escape' or not
*
* @enter: Enters the section of 'name' if possible.
* Musts return 1 if entered or 0 if not entered.
* When 1 is returned, the function 'leave' will always be called.
* Conversely 'leave' is never called when enter returns 0 or
* a negative value.
* When 1 is returned, the function must activate the first
* item of the section.
*
* @next: Activates the next item of the section if it exists.
* Musts return 1 when the next item is activated.
* Musts return 0 when there is no item to activate.
*
* @leave: Leaves the last entered section
Do you agree this is a sensible approach? And if so, how much work is involved?
kcgi.h uses types from stdint.h, like uint16_t, but does not include it.
I am using Linux 4.8, and have added CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER.
I am using kcgi 0.10.7 connecting to Hiawatha 10.3 web server. I can get it working in CGI form, but not as FastCGI. I am using the tutorial0 and tutorial2 codes.
# kfcgi -p / -d -v -n 1 -N 1 -- /var/www/fcgi-bin/web-gui-backend
fcgi.c:754: fullreadfd: hangup (terminating)
wrappers.c:185: child signal 31
wrappers.c:185: child signal 31
kfcgi[380]: worker unexpectedly exited
It seems to be getting stuck in wrappers.c:649, in fullreadfd(). recvmsg is only returning 0 bytes from the socket.
} else if ((rc = recvmsg(fd, &msg, 0)) < 0) {
	XWARN("recvmsg");
	return(-1);
} else if (0 == rc)
	return(0); /* <-- this is the return being taken */
For what it's worth, hiawatha.conf looks like this:
FastCGIserver {
FastCGIid = FCGI
ConnectTo = /var/www/run/httpd.sock
Extension = cgi
}
UseFastCGI = FCGI
ExecuteCGI = yes
Why could it be having trouble using the UNIX file socket?
I am using a simple stripped-down Linux that only has the root user.
Adding "-s /var/www/run/httpd.sock" to kfcgi makes no difference.
The httpd.sock file does not exist when hiawatha is started up. It does appear when kfcgi is run and exits on failure. Who is supposed to be creating the socket file? kfcgi or the web server?
Also, who sets the filesystem jail? Do we chroot ourselves first? Or does kfcgi do it internally? (The doc seems to suggest the second.) For me, kfcgi does not seem to do it.
/var/www/fcgi-bin/web-gui-backend does of course exist.
# kfcgi -d -v -n 1 -N 1 -- /fcgi-bin/web-gui-backend
kfcgi[368]: execve: /fcgi-bin/web-gui-backend: No such file or directory
kfcgi[367]: worker unexpectedly exited
# kfcgi -d -v -n 1 -N 1 -- /var/www/fcgi-bin/web-gui-backend
kfcgi[372]: execve: /var/www/fcgi-bin/web-gui-backend: No such file or directory
kfcgi[371]: worker unexpectedly exited
# kfcgi -p / -d -v -n 1 -N 1 -- /var/www/fcgi-bin/web-gui-backend
(This now runs)
Hi there!
I'm having some problems installing kcgi-0.10.5 on macOS. Everything worked well except a "permission denied" error when I tried to install, so I ran sudo make install and the installation finished without error. However, the man pages are still inaccessible to me.
kcgi-0.10.5|⇒ ./configure
config.log: writing...
configure.local: no (fully automatic configuration)
arc4random: yes
capsicum: no
err: yes
explicit_bzero: no
getprogname: yes
INFTIM: no
md5: no
memmem: yes
memrchr: no
memset_s: yes
PATH_MAX: yes
pledge: no
program_invocation_short_name: no
reallocarray: no
recallocarray: no
sandbox_init: yes
seccomp-filter: no
SOCK_NONBLOCK: no
strlcat: yes
strlcpy: yes
strndup: yes
strnlen: yes
strtonum: no
systrace: no
zlib: yes
__progname: yes
config.h: written
Makefile.configure: written
kcgi-0.10.5|⇒ make
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o kfcgi.o kfcgi.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o compats.o compats.c
ar rs libconfig.a compats.o
ar: creating archive libconfig.a
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -o kfcgi kfcgi.o libconfig.a
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o auth.o auth.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o child.o child.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o datetime.o datetime.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o fcgi.o fcgi.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o httpauth.o httpauth.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o kcgi.o kcgi.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o logging.o logging.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o output.o output.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o parent.o parent.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o sandbox.o sandbox.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o sandbox-capsicum.o sandbox-capsicum.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o sandbox-darwin.o sandbox-darwin.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o sandbox-pledge.o sandbox-pledge.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o sandbox-seccomp-filter.o sandbox-seccomp-filter.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o sandbox-systrace.o sandbox-systrace.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o template.o template.c
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o wrappers.o wrappers.c
ar rs libkcgi.a auth.o child.o datetime.o fcgi.o httpauth.o kcgi.o logging.o output.o parent.o sandbox.o sandbox-capsicum.o sandbox-darwin.o sandbox-pledge.o sandbox-seccomp-filter.o sandbox-systrace.o template.o wrappers.o compats.o
ar: creating archive libkcgi.a
/Library/Developer/CommandLineTools/usr/bin/ranlib: file: libkcgi.a(sandbox-seccomp-filter.o) has no symbols
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o kcgihtml.o kcgihtml.c
ar rs libkcgihtml.a kcgihtml.o
ar: creating archive libkcgihtml.a
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o kcgijson.o kcgijson.c
ar rs libkcgijson.a kcgijson.o
ar: creating archive libkcgijson.a
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o kcgixml.o kcgixml.c
ar rs libkcgixml.a kcgixml.o
ar: creating archive libkcgixml.a
cc -g -W -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wno-unused-parameter -Wno-deprecated-declarations -c -o kcgiregress.o kcgiregress.c
ar rs libkcgiregress.a kcgiregress.o
ar: creating archive libkcgiregress.a
kcgi-0.10.5|⇒ make install
mkdir -p /usr/local/lib
mkdir -p /usr/local/include
mkdir -p /usr/local/share/kcgi
mkdir -p /usr/local/man/man3
mkdir -p /usr/local/man/man8
mkdir -p /usr/local/sbin
install -m 0444 libkcgi.a libkcgihtml.a libkcgijson.a libkcgixml.a libkcgiregress.a /usr/local/lib
install -m 0444 kcgi.h kcgihtml.h kcgijson.h kcgixml.h kcgiregress.h /usr/local/include
install -m 0444 man/kcgi.3 man/kcgihtml.3 man/kcgijson.3 man/kcgiregress.3 man/kcgixml.3 man/kcgi_buf_write.3 man/kcgi_writer_disable.3 man/kcgi_strerror.3 man/khttp_body.3 man/khttp_fcgi_free.3 man/khttp_fcgi_init.3 man/khttp_fcgi_parse.3 man/khttp_fcgi_test.3 man/khttp_free.3 man/khttp_head.3 man/khttp_parse.3 man/khttp_template.3 man/khttp_write.3 man/khttpbasic_validate.3 man/khttpdigest_validate.3 man/kmalloc.3 man/kutil_epoch2str.3 man/kutil_invalidate.3 man/kutil_log.3 man/kutil_openlog.3 man/kutil_urlencode.3 man/kvalid_string.3 /usr/local/man/man3
install: /usr/local/man/man3/kcgi.3: Permission denied
make: *** [install] Error 71
kcgi-0.10.5|⇒ sudo make install
mkdir -p /usr/local/lib
mkdir -p /usr/local/include
mkdir -p /usr/local/share/kcgi
mkdir -p /usr/local/man/man3
mkdir -p /usr/local/man/man8
mkdir -p /usr/local/sbin
install -m 0444 libkcgi.a libkcgihtml.a libkcgijson.a libkcgixml.a libkcgiregress.a /usr/local/lib
install -m 0444 kcgi.h kcgihtml.h kcgijson.h kcgixml.h kcgiregress.h /usr/local/include
install -m 0444 man/kcgi.3 man/kcgihtml.3 man/kcgijson.3 man/kcgiregress.3 man/kcgixml.3 man/kcgi_buf_write.3 man/kcgi_writer_disable.3 man/kcgi_strerror.3 man/khttp_body.3 man/khttp_fcgi_free.3 man/khttp_fcgi_init.3 man/khttp_fcgi_parse.3 man/khttp_fcgi_test.3 man/khttp_free.3 man/khttp_head.3 man/khttp_parse.3 man/khttp_template.3 man/khttp_write.3 man/khttpbasic_validate.3 man/khttpdigest_validate.3 man/kmalloc.3 man/kutil_epoch2str.3 man/kutil_invalidate.3 man/kutil_log.3 man/kutil_openlog.3 man/kutil_urlencode.3 man/kvalid_string.3 /usr/local/man/man3
install -m 0444 man/kfcgi.8 /usr/local/man/man8
install -m 0555 kfcgi /usr/local/sbin
install -m 0444 template.xml sample.c samplepp.cc sample-fcgi.c sample-cgi.c /usr/local/share/kcgi
kcgi-0.10.5|⇒ man kcgi
No manual entry for kcgi
Here is my system info, and I'm using macOS High Sierra 10.13.5 (17F77)
kcgi-0.10.5|⇒ uname -a
Darwin 192.168.0.107 17.6.0 Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64
Let me know if you need further information.
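One likely cause (an assumption worth checking) is that kcgi installs its manuals under /usr/local/man, which macOS's man may not search by default. A sketch of a workaround:

```shell
# See which directories man currently searches
manpath

# Read the page by pointing man at the install location explicitly
man -M /usr/local/man kcgi

# Or add the directory persistently, e.g. in ~/.zshrc
export MANPATH="/usr/local/man:$MANPATH"
```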
I enter a number and send the form, but it does nothing.
I do not understand these lines in sendindex() of sample.c:
cp = NULL == req->fieldmap[KEY_INTEGER] ? "" : req->fieldmap[KEY_INTEGER]->val;
kasprintf(&page, "%s/%s", req->pname, pages[PAGE_INDEX]);
xiaoleidembp:kcgi richard$ ./configure
config.log: writing...
configure.local: no (fully automatic configuration)
arc4random: yes
b64_ntop: yes (with -lresolv)
capsicum: no
err: yes
explicit_bzero: no
getprogname: yes
INFTIM: no
md5: no
memmem: yes
memrchr: no
memset_s: yes
PATH_MAX: yes
pledge: no
program_invocation_short_name: no
reallocarray: no
recallocarray: no
sandbox_init: yes
seccomp-filter: no
SOCK_NONBLOCK: no
strlcat: yes
strlcpy: yes
strndup: yes
strnlen: yes
strtonum: no
sys_queue: yes
systrace: no
unveil: no
zlib: yes
__progname: yes
config.h: written
Makefile.configure: written
xiaoleidembp:kcgi richard$ make
Makefile:309: *** missing separator. Stop.
xiaoleidembp:kcgi richard$
I am running kcgi 0.10.11 installed from package on openbsd.
I've been using the khtml_ functions for years but recently took a look at the kxml_ functions to try to create an svg image. I hit a snag when trying to add multiple attributes to the root element.
Here is the setup:
resp_open(kr, KHTTP_200, "image/svg+xml");

const char *SVG_ELEMS[] = {
	"svg",
};
enum SVGs {
	SVG_SVG,
	SVG__MAX
};

kxml_open(&xr, kr, SVG_ELEMS, SVG__MAX);
kxml_prologue(&xr);
kxml_pushattrs(&xr, SVG_SVG,
	"xmlns", "http://www.w3.org/2000/svg",
	//"version", "1.2",
	//"baseProfile", "tiny",
	//"width", "5cm",
	//"height", "4cm",
	//"viewBox", "0 0 100 100",
	0);
where resp_open() does this:
void resp_open(struct kreq *kr, enum khttp http, std::string mime) {
	khttp_head(kr, kresps[KRESP_STATUS], "%s", khttps[http]);
	khttp_head(kr, kresps[KRESP_CONTENT_TYPE], "%s",
	    mime == "" ? kmimetypes[kr->mime] : mime.c_str());
	khttp_body(kr);
}
The above code will produce this:
<?xml version="1.0" encoding="utf-8" ?><svg xmlns="http://www.w3.org/2000/svg"></svg>
Okay, so far so good, but if I uncomment a line to add another attribute, kxml_pushattrs() does not return and the resulting page is solid white. I did some searching for examples of this in use and found kcaldav, but from what I can see, I am doing the same thing.
If you can see what dumb mistake I am probably making, please let me know. If not, what else can I do to track down this issue?
Thanks!
Hi,
I'm getting these errors
$ kfcgi -U www -u www -- /var/www/cgi-bin/kcaldav
daemon: No such file or directory
$ kfcgi -U root -u root -- /var/www/cgi-bin/kcaldav
kfcgi: managed to regain root privileges: aborting
What I have done so far:
Compiled and installed kcgi
Compiled and installed kcaldav, including make installcgi
I have created ./run under /var/www. Without this I get daemon: No such file or directory even when I run kfcgi as root.
I've created the user:group www:www and executed
$ chown -R www:www /var/www
Any advice?
I have been working on a CGI application (https://github.com/sirjorj/xwingcgi) using kcgi and running it on a local OpenBSD machine. It is essentially a web interface to a C++ library that contains the data for a tabletop game. I am not a web dev and was kind of intrigued by the BCHS idea, so I gave it a try.
Anyway, tonight I decided to get that project to build on Linux. After building and installing kcgi, I tried to build my project and got this error:
/usr/local/include/kcgi.h:486:14: error: ISO C++ forbids forward references to 'enum' types
I thought that was weird and compared that kcgi.h with the one on my openbsd machine and saw they were different. I then updated the package on my openbsd machine (0.10.0 -> 0.10.1) and now I have the same build error on the openbsd machine!
Is this a regression in kcgi or am I using it wrong?
My application uses kcgijson to send UTF-8-encoded strings.
When I send a string like "für", the browser receives "f\uFFFF\uFFFFr" ("ü" is actually 0xC3 0xBC).
I suspect the bug is in the control character escaping:
/* Encode control characters. */
if (cp[i] <= 0x1f) {
snprintf(enc, sizeof(enc), "\\u%.4X", cp[i]);
/* note: sizeof(enc) == 7, so snprintf truncates the expansion to four hex digits after the "\u" */
e = kcgi_writer_puts(r->arg, enc);
if (KCGI_OK != e)
return (e);
continue;
}
The problem here is that 0xC3 is negative (given that, on your architecture, char is signed) and is therefore falsely recognized as a control character. It is then promoted to an int, and the upper two bytes of that int are printed (which are FFFF, as that int was created from a negative char).
The function works correctly if a cast to unsigned char is added:
static enum kcgi_err
kjson_write(struct kjsonreq *r,
    const char *c, /* renamed parameter, for the sake of this example */
    size_t sz, int quot)
{
	enum kcgi_err e;
	char enc[7];
	size_t i;
	const unsigned char *cp;

	cp = (const unsigned char *)c;
	/* ... rest of the function unchanged ... */
khtml_attr(&r, KELEM_SPAN, KELEM_TITLE, s2.GetName().c_str(), KATTR__MAX);
renders as
<span onmousedown="Aggressor Assault Fighter">i</span>
Hey,
On the site https://kristaps.bsd.lv/kcgi/kcgiregress.3.html I found that the link to libcurl(3) is dead.
I am not very familiar with sblg, so I do not know for now how to fix it.
--- sandbox-seccomp-filter.c.orig
+++ sandbox-seccomp-filter.c
@@ -106,7 +106,9 @@
SC_ALLOW(recvmsg),
#endif
SC_ALLOW(read),
+ SC_ALLOW(readv),
SC_ALLOW(write),
+ SC_ALLOW(writev),
SC_ALLOW(close),
#ifdef __NR_shutdown /* not defined on archs that go via socketcall(2) */
SC_ALLOW(shutdown),
@@ -158,7 +160,9 @@
SC_ALLOW(time),
#endif
SC_ALLOW(read),
+ SC_ALLOW(readv),
SC_ALLOW(write),
+ SC_ALLOW(writev),
SC_ALLOW(close),
#ifdef __NR_fcntl64 /* only noted on arm */
SC_ALLOW(fcntl64),
With this patch, make regress works; without it, even test-basic fails.
Hello!
While adding error handling to a json sender, I noticed that sometimes kjson_close returns 0 on seemingly non-error states. After reading the code, it appears that kjson_close will return 1 if there were unclosed arrays, objects, or strings on the json stack, however if everything has been closed already, then the stack length is 0, and kjson_close() returns 0.
The documentation would lead developers to believe that 0 is an error state for all of the kcgijson functions that return an int, however this appears to not be the case for kjson_close.
Can you confirm that a 0 does not necessarily imply an error with this function?
Hi
I may have this all wrong, but shouldn't, for example, khttp_head(r, kresps[KRESP_STATUS], in the following snippet be khttp_head(&r, kresps[KRESP_STATUS], instead?
.Bd -literal
khttp_head(r, kresps[KRESP_STATUS],
"%s", khttps[KHTTP_200]);
khttp_head(r, kresps[KRESP_CONTENT_TYPE],
"%s", kmimetypes[KMIME_TEXT_PLAIN]);
khttp_body(r);
khttp_puts(r, "Hello, world!\en");
.Ed
This diff might give a better picture of what I'm talking about:
--- /usr/local/man/man3/khttp_body.3 Tue Mar 27 17:34:48 2018
+++ khttp_body.3 Sun Jul 8 17:51:57 2018
@@ -105,34 +105,34 @@
is a context initialized by
.Xr khttp_parse 3 .
.Bd -literal
-khttp_head(r, kresps[KRESP_STATUS],
+khttp_head(&r, kresps[KRESP_STATUS],
"%s", khttps[KHTTP_200]);
-khttp_head(r, kresps[KRESP_CONTENT_TYPE],
+khttp_head(&r, kresps[KRESP_CONTENT_TYPE],
"%s", kmimetypes[KMIME_TEXT_PLAIN]);
-khttp_body(r);
-khttp_puts(r, "Hello, world!\en");
+khttp_body(&r);
+khttp_puts(&r, "Hello, world!\en");
.Ed
.Pp
To explicitly disable compression:
.Bd -literal
-khttp_head(r, kresps[KRESP_STATUS],
+khttp_head(&r, kresps[KRESP_STATUS],
"%s", khttps[KHTTP_200]);
-khttp_head(r, kresps[KRESP_CONTENT_TYPE],
+khttp_head(&r, kresps[KRESP_CONTENT_TYPE],
"%s", kmimetypes[KMIME_TEXT_PLAIN]);
-khttp_body_compress(r, 0);
-khttp_puts(r, "Hello, world!\en");
+khttp_body_compress(&r, 0);
+khttp_puts(&r, "Hello, world!\en");
.Ed
.Pp
To disable compression, but emit a compressed file:
.Bd -literal
-khttp_head(r, kresps[KRESP_STATUS],
+khttp_head(&r, kresps[KRESP_STATUS],
"%s", khttps[KHTTP_200]);
-khttp_head(r, kresps[KRESP_CONTENT_TYPE],
+khttp_head(&r, kresps[KRESP_CONTENT_TYPE],
"%s", kmimetypes[KMIME_TEXT_PLAIN]);
-khttp_head(r, kresps[KRESP_CONTENT_ENCODING],
+khttp_head(&r, kresps[KRESP_CONTENT_ENCODING],
"%s", "gzip");
-khttp_body_compress(r, 0);
-khttp_template(r, NULL, "compressed.txt.gz");
+khttp_body_compress(&r, 0);
+khttp_template(&r, NULL, "compressed.txt.gz");
.Ed
.Sh SEE ALSO
.Xr kcgi 3 ,
I can open a PR if you like, or feel free to just apply that diff if it's correct. I'm not sure whether I'm running into a mandoc formatting issue, but I suspect that the &s should be fine inside a -literal block. I could be wrong here.
kcgi fails to build on sparc64 with base gcc on OpenBSD, with the following error: kcgi.h:635: error: wrong number of arguments specified for 'deprecated' attribute. We applied the following fix to make it build on OpenBSD. Found by stsp@openbsd: https://paste.apache.org/4fylo
The widely used FastCGI C library is fcgi-devkit-2.4.0, from the original FastCGI site. How is kcgi different?
Is it specifically for its own BCHS stack, or is it a generic library that can be used with, say, nginx, lighttpd, etc. for FastCGI applications?
I've been trying to port this package to Gentoo and am currently running into a small problem with the tests, which are failing for test-epoch2datetime with the error:
*** buffer overflow detected ***: terminated
[1] 2073 IOT instruction ./test-epoch2datetime
Would love to know what I am doing wrong or how to fix this.
Hi
I need to know how to access FastCGI parameters (e.g. DOCUMENT_ROOT) in a FastCGI handler.
I can see that these parameters are read while handling a request, but it seems they are not passed on to the actual handler, for example via struct kreq.
Would you please help me with this? It would be much appreciated.
Thanks
The configure script attempts to execute the compiled tests, which fail if configuring for a cross-compiled situation.
Upgrade oconfigure to not run the tests, since there is apparently nothing to be gained from running them:
kristapsdz/oconfigure@5e57079
Assume that, while writing data for a normal (e.g. HTTP 200 HTML) response, one of the writes fails with KCGI_ENOMEM. How can I free the kreq without writing any further HTTP data, discarding anything written previously? Because an error occurred, I would now create a new, empty response (e.g. HTTP 500).
I've tried to run the sample test from
https://kristaps.bsd.lv/kcgi/kcgiregress.3.html
in a slightly different way: instead of using kcgi_regress_cgi, I used the kcgi_regress_fcgi function.
The result is that it may not be a very good example: for me the test always returns 0 even when it should not, because CURLINFO_RESPONSE_CODE is always 0, even if the fcgi script returns 200 or 404.
Did you have the same experience?
I need some help using FastCGI with kfcgi on macOS. By now I've built the executable, but I have no idea how to configure the server.
Actually I don't even know which server I'm using! macOS ships with a built-in server, but I'm not sure whether it's Apache or httpd, since both names seem to refer to the same thing.
~|⇒ apachectl -v
Server version: Apache/2.4.33 (Unix)
Server built: Apr 3 2018 18:00:56
~|⇒ httpd -v
Server version: Apache/2.4.33 (Unix)
Server built: Apr 3 2018 18:00:56
Anyway, I followed your tutorial on working with httpd, but the server complains about a syntax error when I add the server "me.local" directive to the configuration file /etc/apache2/httpd.conf.
~|⇒ apachectl -t
AH00526: Syntax error on line 564 of /private/etc/apache2/httpd.conf:
Invalid command 'server', perhaps misspelled or defined by a module not included in the server configuration
I tried consulting the documentation but was soon overwhelmed by the huge amount of info: which module should I use, mod_fastcgi, mod_fcgid, or mod_proxy_fcgi? It would be great if you could provide an example, since most people don't have a machine running OpenBSD at hand (remote servers are an option, but the connections are usually quite slow!)
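For what it's worth, the server "me.local" directive comes from OpenBSD's httpd.conf(5), so Apache will always reject it. With Apache 2.4 the usual route is mod_proxy_fcgi; a sketch of what the relevant httpd.conf lines might look like (the module paths and socket path are assumptions for a default macOS install, not tested configuration):

```apache
LoadModule proxy_module libexec/apache2/mod_proxy.so
LoadModule proxy_fcgi_module libexec/apache2/mod_proxy_fcgi.so

# Forward /fcgi-bin/* to the UNIX socket that kfcgi listens on.
ProxyPass "/fcgi-bin/" "unix:/usr/local/var/www/run/httpd.sock|fcgi://localhost/"
```

The unix: form of ProxyPass requires Apache 2.4.10 or later.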
People might run into issues, and their fixes could be collected in the wiki. I am trying to build on Linux (Ubuntu 16.04); make regress also needs libcurl3-dev and libbsd-dev, but the Makefile is not linking against libbsd. Having some place to look might help.
This is a very well written and easy to use library. I was just wondering whether it is possible to read the body of a request, and not just key/value pairs via kreq.fields. Is there something similar to khttp_write(), but for reading instead? I'm looking for something like khttp_read(). I'm trying to implement a REST API, and I need this feature. I couldn't find anything like it in the documentation.
I noticed the khttp_parse.3 manpage mentions that khttp_parse and khttp_parsex can read and validate key-value form data and opaque message bodies, but I didn't see anywhere that says how to access an opaque message body. Would an operation like this violate the sandboxing requirements?
Just put these together really quickly to allow me to experiment with some of the ideas from BCHS, but using Nim instead of C. They compile on OSX; I will try on OpenBSD later today - https://github.com/zacharycarter/kcgi.nim
I'm trying to deploy kfcgi on FreeBSD. Using the example program from https://kristaps.bsd.lv/kcgi/tutorial2.html, I get this error:
$ freebsd-version
11.2-RELEASE-p4
$ pkg info kcgi | head -n 3
kcgi-0.10.7
Name : kcgi
Version : 0.10.7
$ CFLAGS=-I/usr/local/include LDFLAGS='-static -L/usr/local/lib' LDLIBS='-lkcgi -lz' make example
cc -I/usr/local/include -static -L/usr/local/lib example.c -lkcgi -lz -o example
$ sudo kfcgi -d -s kfcgi.sock -u www -U www -p . -- /example &
[1] 10342
$ sudo nc -U kfcgi.sock <<< ''
fcgi.c:248: FastCGI: read: Capabilities insufficient
wrappers.c:485: read: unexpected eof: read 0 of 8 bytes
child.c:1691: FastCGI: error reading frame size from control
child.c:2094: FastCGI: unrecoverable error at start sequence
fcgi.c:754: fullreadfd: hangup (terminating)
wrappers.c:182: child status 1
kfcgi[10345]: worker unexpectedly exited
fcgi.c:742: fcgi_waitread: exit request
fcgi.c:742: fcgi_waitread: exit request
fcgi.c:742: fcgi_waitread: exit request
child.c:2078: FastCGI: worker termination
child.c:2078: FastCGI: worker termination
fcgi.c:742: fcgi_waitread: exit request
child.c:2078: FastCGI: worker termination
child.c:2078: FastCGI: worker termination
The error is the same if I point an HTTPd at the socket; I'm just using nc as an example to trigger the error.
libcurl now requires explicitly opting in to HTTP/0.9 to get the fcgi regress tests working.
This post is kinda long because I edited it many times while figuring out the issue.
Hello,
Both systems where the tests are failing run Arch Linux.
I've noted some similarities with #39.
bmake regress fails on test-fcgi-bigfile.
"By return value" the following tests are failing:
test-abort-validator and test-fcgi-abort-validator are both generating a coredump.
Platform: Linux rpi 4.19.79-1-ARCH #1 SMP PREEMPT Sat Oct 12 17:02:53 UTC 2019 armv6l GNU/Linux
Looking into the sandbox stuff, -DSANDBOX_SECCOMP_DEBUG did not output anything at all, but for the two validator tests strace showed two syscalls that were apparently blocked by it:
gettid:
[pid 23412] gettid( <unfinished ...>
[pid 23412] --- SIGSYS {si_signo=SIGSYS, si_code=SYS_SECCOMP, si_call_addr=0x76d0931c, si_syscall=__NR_gettid, si_arch=AUDIT_ARCH_ARM} ---
and tgkill:
[pid 24039] tgkill(24039, 24039, SIGABRT <unfinished ...>
[pid 24039] --- SIGSYS {si_signo=SIGSYS, si_code=SYS_SECCOMP, si_call_addr=0x76cf8330, si_syscall=__NR_tgkill, si_arch=AUDIT_ARCH_ARM} ---
After adding those two to the preauth_ctrl and preauth_work arrays in sandbox-seccomp-filter.c, the test now executes gettid and tgkill, then calls rt_sigprocmask (which was already authorized), and then receives a SIGABRT. (Before those changes it was a SIGSYS triggering the dump.)
That doesn't change much, but I guess the abort is intended while the seccomp block isn't.
To try to fix the fcgi tests, I've also tried building everything with -fsigned-char, since I'm getting a signedness warning when building on x86, but with no success.
Platform: Linux krkrkr.org 5.3.7-arch1-1-ARCH #1 SMP PREEMPT Fri Oct 18 00:17:03 UTC 2019 x86_64 GNU/Linux
The exact same tests are returning != 0.
I did not notice any errors on Debian, so this might be Arch-related. I just tested versions as far back as 0_10_3, which was working well back then; make regress is failing there too.
EDIT: just verified on a fresh x86 Debian, no issues there. Seems Arch-related.
I noted a sandbox message on Arch, but I've only seen it once so far, so to me it seems totally unrelated:
ssh_sandbox_violation_control: unexpected system call (control) (arch:0xc000003e,syscall:219 @ 0x7f9245ad79b7)
ssh_sandbox_violation_worker: unexpected system call (worker) (arch:0xc000003e,syscall:219 @ 0x7f9245ad79b7)
On Debian, this test executes and exits; on Arch, it stays open in the background expecting something.
It really does not look like a sandbox issue to me: the logs don't show anything related to what was there for gettid and tgkill on ARM.
A few strace logs, Debian vs. Arch, are available here.
This looks to me like a socket issue.
write: Connection reset by peer
Trying to understand that, I've modified the fcgi-bigfile test, adding:
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
The output now differs quite a bit between Arch and Debian.
The main issue seems to be (see also):
* Received HTTP/0.9 when not allowed
* Closing connection 0
write: Connection reset by peer
And indeed, adding curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L); to the fcgi-bigfile test makes it pass on Arch Linux.
Have a good day!
I believe the httpd in Tutorial 2 is configured wrong: it's fcgi-bin, not cgi-bin.
I'm interested in running valgrind on my FCGI application. I have kcgi v0.10.10.
I am able to run sudo kfcgi -p / -s /var/www/run/httpd.sock -d -v -n 1 -u www-data -U www-data -- /var/www/fcgi-bin/fcgi-app, which works fine (never mind the filesystem jail being set to / for this purpose).
However, when I prepend sudo valgrind --leak-check=full --trace-children=yes, it fails:
fcgi.c:689: fullreadfd: hangup (terminating)
wrappers.c:185: child signal 31
wrappers.c:185: child signal 31
It seems to suggest it's having trouble accessing the socket, possibly a permissions issue. Has anyone had success using valgrind?
This is rather low-priority: it's one line to "fix" in an application, and I'm not really certain whether it's a kcgi issue or a quirk of slowcgi with httpd or Nginx.
stderr is normally unbuffered by default. This appears to cause both Nginx and OpenBSD's httpd to write these logs with one character on each line. Example from httpd's log:
1234:5678:5e63:f101:8286:f2ff:feba:b1c5 account [Sat, 01 Feb 2020 02:15:24 GMT] INFO
m
o
d
i
f
y
_
p
a
g
e
:
s
u
c
c
e
s
s
This happens using the kutil_* functions, but often seems to be triggered by an earlier "direct" write to stderr using fprintf() or perror() etc.
Using kutil_openlog() to log to a separate file does not show this behaviour; kutil_openlog() also enables line-buffered output. Using setvbuf(stderr, NULL, _IOLBF, 0) on the default stderr output appears to solve the issue entirely.
I haven't had time to dig into slowcgi's code, but I would guess that the cause is likely there and is related to how it reads stderr before sending over TCP. The char-by-char fputc's in the kutil_* logging functions are flushed immediately, and each character is translated into an entire log message by the web servers.
Perhaps line-buffering for stderr could be enabled by default for the kutil_* logging functions; I can't think of any issues with having buffered stderr in this context, other than the inefficiency of either calling setvbuf() on every log call, or lazily enabling it and keeping track of whether it was enabled on each call.