jiangwenyuan / nuster
A high performance HTTP proxy cache server and RESTful NoSQL cache server based on HAProxy
License: Other
Hi, again :-)
Next trouble - Segmentation fault.
I'm doing a small load test using https://github.com/swiftstack/ssbench
I make 100-200 parallel requests of 10 MByte each (mostly GET).
If there is enough memory to cache the requests, everything works fine.
As soon as memory runs out, after several connections the program exits with an error.
The program hits SIGSEGV under load once the cache is full.
[CACHE] [RES] Checking status code: PASS
[CACHE] To create
Program received signal SIGSEGV, Segmentation fault.
0x00005555556852f8 in nst_cache_create (ctx=0x555555c540d0, key=0x555555c5a4a0 "GET.HTTP.10.76.163.121:8080./v1/AUTH_........../tm10_000061_les4a2/s10_000062..", hash=7646496250242905949)
at src/nuster/cache/engine.c:646
646 ctx->element = entry->data->element;
Backtrace
(gdb) bt
#0 0x00005555556852f8 in nst_cache_create (ctx=0x555555c540d0, key=0x555555c5a4a0 "GET.HTTP.10.76.163.121:8080./v1/AUTH_groshev/tm10_000061_les4a2/s10_000062..", hash=7646496250242905949)
at src/nuster/cache/engine.c:646
#1 0x0000555555681c5e in _nst_cache_filter_http_headers (s=<optimized out>, filter=<optimized out>, msg=<optimized out>) at src/nuster/cache/filter.c:226
#2 0x00005555556597db in flt_analyze_http_headers (s=0x555555acc5b0, chn=0x555555acc600, an_bit=16777216) at src/filters.c:855
#3 0x00005555555eae91 in process_stream (t=t@entry=0x555555acc910) at src/stream.c:1983
#4 0x000055555566af77 in process_runnable_tasks () at src/task.c:229
#5 0x000055555561d0f3 in run_poll_loop () at src/haproxy.c:2423
#6 run_thread_poll_loop (data=data@entry=0x555555932fa0) at src/haproxy.c:2491
#7 0x0000555555582568 in main (argc=<optimized out>, argv=<optimized out>) at src/haproxy.c:3094
Is this a BUG report or FEATURE request?:
If a cached response carries Set-Cookie headers, other users who hit that cached response may end up with a mixed-up session.
What happened:
segfault
Environment:
Amazon Linux
kernel 4.14.138-114.102.amzn2.x86_64
Error:
[ 742.966122] traps: haproxy[2973] general protection ip:539fe6 sp:7ffe9a158bc0 error:0 in haproxy[400000+1a2000]
[ 2522.186309] haproxy[6629]: segfault at 48 ip 0000000000539888 sp 00007ffc2b5b7f40 error 4 in haproxy[400000+1a2000]
[ 3163.959450] haproxy[7456]: segfault at 48 ip 0000000000539888 sp 00007ffd76e6c3e0 error 4 in haproxy[400000+1a2000]
[ 3547.069017] traps: haproxy[8063] general protection ip:539fe6 sp:7ffcc61291e0 error:0 in haproxy[400000+1a2000]
Config file:
# Production Nuster config
global
master-worker
nuster cache on data-size 128m uri /_nuster
nuster nosql off
log 127.0.0.1:514 local0
pidfile /var/run/haproxy.pid
maxconn 2000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats mode 644 level admin
tune.maxrewrite 32768
tune.bufsize 131072
defaults
log global
mode http
option httplog
option dontlognull
option http-buffer-request
option redispatch
timeout http-request 5s
timeout client 360s
timeout server 360s
timeout connect 10s
balance leastconn
frontend fe
bind *:80
acl stats_ip src 127.0.0.1/8
acl stats_path path_beg /_nuster
http-request deny if stats_path !stats_ip
default_backend retail
backend retail
nuster cache on
nuster rule br1 key method.uri.body
server s1 172.16.21.33:8080 check
server s2 172.16.1.67:8080 check
server s3 172.16.11.107:8080 check
Build info:
# haproxy -vv
nuster version 3.0.0.19
Copyright (C) 2017-present, Jiang Wenyuan, <koubunen AT gmail DOT com >
HA-Proxy version 1.9.9 2019/07/23 - https://haproxy.org/
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-format-truncation -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wno-implicit-fallthrough -Wno-stringop-overflow -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.23 2017-02-14
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
h2 : mode=HTX side=FE|BE
h2 : mode=HTTP side=FE
<default> : mode=HTX side=FE|BE
<default> : mode=TCP|HTTP side=FE|BE
Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
Most sites will have large file sizes; my HTML alone is nearly 16KB (compressed).
Should Nuster include a larger data size?
I configured it following the HTTP cache configuration example.
I want to use it to cache HLS video-on-demand, but it does not seem to work.
The backend is an nginx web server serving the HLS VOD files. With nuster as the proxy and cache, nothing seems to be cached; requests go straight back to the origin.
Implement replicating nuster node cache to other nodes
This is a FEATURE request:
Instead of using Nuster's cache (in-memory or disk persistent solutions), would you have any pointers if Memcached is to be used?
e.g.:
nuster cache on data-size 10m memcached address:11211
We might be able to contribute.
The documentation says:
nuster rule
syntax: nuster rule name [key KEY] [ttl TTL] [code CODE] [if|unless condition]
I made my rule:
nuster rule r1 key method.scheme.host.uri.header_range ttl 3600 code 200,206
And got the error:
haproxy -d -f nuster.cfg
[ALERT] 217/161716 (2690027) : parsing [nuster.cfg:29] : 'nuster r1': code already specified.
[ALERT] 217/161716 (2690027) : Error(s) found in configuration file : nuster.cfg
[ALERT] 217/161716 (2690027) : Fatal errors found in configuration.
Judging by the code, it indeed cannot work in this order:
https://github.com/jiangwenyuan/nuster/blob/v1.8.x/src/nuster/parser.c#L560
if(key != NULL) {
memprintf(err, "'%s %s': code already specified.", args[0], name);
goto out;
}
The result is that the code must be before the key.
And a similar bug with the TTL parameter.
The only working sequence: TTL, Code, KEY.
nuster rule r1 ttl 3600 code 200,206 key method.scheme.host.uri.header_range
We were only caching POSTs between the API server and the back end, but I am experimenting with adding Nuster caching for non-POST traffic in front of the API (see config below).
What happened:
Nuster is caching POSTs when that is not enabled (backend api). Additionally, the response from the server is a different size (cache on vs off).
What you expected to happen:
Not to cache POSTs
Config file
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
timeout http-request 5s
timeout client 360s
timeout server 360s
timeout connect 10s
balance leastconn
frontend retail-fe
bind *:80
acl stats_ip src 127.0.0.1/8
acl stats_path path_beg /_nuster
http-request deny if stats_path !stats_ip
default_backend retail
backend retail
option http-buffer-request
nuster cache on
nuster rule br1 key method.uri.body
server s1 172.16.1.183:8080 check
server s2 172.16.21.41:8080 check
server s3 172.16.11.189:8080 check
frontend api-fe
bind *:8080
acl stats_ip src 127.0.0.1/8
acl stats_path path_beg /_nuster
http-request deny if stats_path !stats_ip
default_backend api
backend api
nuster cache on
nuster rule ar1 key method.uri
server s1 127.0.0.1:8081 check
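If the intent is that the api backend should never cache POSTs, one possible tweak (a sketch only, reusing the method ACL pattern that appears in another config later in this document; not a confirmed fix for the bug above) is to gate the rule on the request method:
backend api
    nuster cache on
    # only consider GET/HEAD requests for caching; POSTs always go to the server
    acl is_get method GET HEAD
    nuster rule ar1 key method.uri if is_get
    server s1 127.0.0.1:8081 check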
Log extract
Sep 10 08:17:03 localhost haproxy[4830]: 172.16.0.119:7636 [10/Sep/2019:08:17:03.202] api-fe api/<NUSTER.CACHE.ENGINE> 0/0/0/0/2 200 127765 - - ---- 1/1/0/0/0 0/0 "POST /retail/calculate HTTP/1.1"
Sep 10 08:23:06 localhost haproxy[4830]: 172.16.20.112:36494 [10/Sep/2019:08:23:06.874] api-fe api/<NUSTER.CACHE.ENGINE> 0/0/0/0/1 200 127765 - - ---- 1/1/0/0/0 0/0 "POST /retail/calculate HTTP/1.1"
Environment:
Haproxy version:
Copyright (C) 2017-2019, Jiang Wenyuan, <koubunen AT gmail DOT com >
HA-Proxy version 2.0.10.18-4774cf-1 2019/09/03
Copyright 2000-2019 Willy Tarreau <[email protected]>
Is this a BUG report or FEATURE request?:
BUG report
Environment:
Thank you very much for your work. Uniting HAProxy with a cache and NoSQL is a very interesting solution.
BUG 1
A bug where the response to a request served through the cache backend freezes.
- Create 2 backends in the settings: the first for caching static content, the second for HTML passed through without caching.
- The page should have several static elements.
- Start nuster in debug mode.
- Open the browser in Inspector mode. Refresh the page several times, observe the speed of getting the page.
approximate config
global
nuster cache on uri /cache data-size 200m
frontend http_front
bind *:80
mode http
acl is_static path_end -i .jpg .png .gif .css .js .ico .txt
acl example.com hdr(host) -i example.com
use_backend static if is_static
use_backend bk1 if example.com
backend static
nuster cache on
nuster rule allstatic
server nginx1 127.0.0.1:1122 check
backend bk1
server srv1 127.0.0.1:3344
Debug nuster in freeze moment
00000040:example.com.srvcls[000a:adfd]
00000040:example.com.clicls[adfd:adfd]
00000040:example.com.closed[adfd:adfd]
00000041:example.com.srvcls[000c:adfd]
00000041:example.com.clicls[adfd:adfd]
00000041:example.com.closed[adfd:adfd]
Effects:
The HTML response from the backend to the client froze for anywhere from 10 seconds to infinity.
Judging by the nuster log, it tried to look up the cache for a backend on which caching was not enabled.
![wait_req](https://user-images.githubusercontent.com/16289977/44980870-b8bb8200-af79-11e8-8dab-e9f7dca44e87.png)
BUG 2
I wanted to transfer the session from Redis to Nuster NoSql but I encountered 2 bugs.
- Create any key in NoSql
- Get it for example using the wrk utility or any other fast asynchronous method.
./wrk -t2 -c200 -d30s http://127.0.0.1/nosql/key1
Look at the number of errors.
Effect: After a few requests, Nuster breaks connections with the client.
BUG 3
Bug with NoSql + CPU utilization
Similar to the bug above, but you need to create and get the same key simultaneously.
Effect: On a single-CPU machine, the CPU becomes fully utilized at 100% and only restarting Nuster helps.
With many CPUs, if you repeat the procedure several times, all cores except one get pinned at 100% and a restart is required.
If "nuster" consume fully the cache area of memory, I make it to save the cache on disk.
(Priority order : memory cache area -> disk cache area)
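For reference, later reports in this thread use per-rule disk persistence; a minimal sketch of a combined memory-plus-disk setup based on those keywords (path, sizes and TTL are illustrative, and this does not implement the memory-then-disk priority requested above):
global
    # in-memory cache area plus an on-disk store
    nuster cache on data-size 200m dir /var/cache/nuster
backend be
    nuster cache on
    # "disk sync" keeps the response in memory and also writes it to disk
    nuster rule r1 disk sync ttl 600
    server s1 127.0.0.1:8080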
nuster should support purging via HTTP PURGE to the same endpoint (with ACL; usually just matching over addr/netmask). Bonus points for globbing (*.jpg).
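The stats output quoted later in this document shows a purge_method of PURGE, so single-URL purging already exists; a hedged sketch of restricting that method by source address, following the ACL/deny pattern used in the other configs here (addresses are illustrative, and it assumes HAProxy's method ACL accepts the non-standard PURGE verb):
frontend fe
    bind *:80
    # only allow the PURGE method from trusted addresses
    acl is_purge method PURGE
    acl purge_src src 127.0.0.1 10.0.0.0/8
    http-request deny if is_purge !purge_src
    default_backend be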
What happened:
[CACHE] [RES] Checking if rule pass: [ALERT] 136/154924 (1) : Current worker #1 (6) exited with code 139 (Segmentation fault)
How to reproduce it (as minimally and precisely as possible):
nuster rule test ttl 60 if { urlp(filter[test]) -m found }
Description:
If I try to cache with the rule "urlp(filter[test]) -m found", nuster gets a segmentation fault.
Rules with { path_beg /* } work fine.
Thanks for the help.
Is this a BUG report or FEATURE request?:
Bug
What happened:
The default cache key normalizes different requests to the same cache key
What you expected to happen:
Different requests would use different cache keys
How to reproduce it (as minimally and precisely as possible):
global
maxconn 100
cache on
defaults
mode http
timeout connect 5s
timeout client 5s
timeout server 5s
frontend myfrontend
bind :8888
default_backend mybackend
backend mybackend
# a http backend
http-request set-header Host httpbin.org
server s httpbin.org:80
filter cache
cache-rule all ttl 3600
Then make two requests that differ only in the position of the '?':
$ curl localhost:8888/get?name=hello
{
"args": {
"name": "hello"
},
"headers": {
"Accept": "*/*",
"Connection": "close",
"Host": "httpbin.org",
"User-Agent": "curl/7.54.0"
},
"origin": "x",
"url": "http://httpbin.org/get?name=hello"
}
$ curl localhost:8888/?getname=hello
{
"args": {
"name": "hello"
},
"headers": {
"Accept": "*/*",
"Connection": "close",
"Host": "httpbin.org",
"User-Agent": "curl/7.54.0"
},
"origin": "x",
"url": "http://httpbin.org/get?name=hello"
}
Anything else we need to know?:
This is similar to what caused this issue: https://rdist.root.org/2009/05/20/amazon-web-services-signature-vulnerability/
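A hedged workaround, using the newer nuster rule key syntax that appears elsewhere in this document (path and query joined by an explicit delimiter) so that /get?name=hello and /?getname=hello no longer collapse to the same key; the backend mirrors the repro config above:
backend mybackend
    nuster cache on
    # keep the '?' in the key so the path and the query string cannot run together
    nuster rule all key method.scheme.host.path.delimiter.query ttl 3600
    http-request set-header Host httpbin.org
    server s httpbin.org:80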
Does this replace HAProxy? Completely - as in every feature? What's in the README is a teaser, but doesn't say much.
Have you considered caching HTTP content to disk? Keeping everything in memory is probably not enough; being able to cache to SSD would be great.
For example, return a header such as
N-Cache: HIT
or
N-Cache: MISS
Is it possible to implement ESI support?
An HTTP cache is not very powerful without it.
Is this a BUG report or FEATURE request?:
When used as a cache for a map tile service, there is a problem with files that share the same file name but have different URLs. Is the default cache keyed by file name rather than by URL?
At different tile levels there are files with the same name; after caching, requests at different levels for same-named files (whose URLs differ) only ever return the content that was cached first!
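If the version in use really keys by file name, an explicit key on the full request URI should keep same-named tiles at different levels apart; a sketch using the key components shown in the debug logs elsewhere in this document (backend address and TTL are illustrative):
backend tiles
    nuster cache on
    # key on the whole URI, so /level1/0.png and /level2/0.png get different entries
    nuster rule tiles key method.scheme.host.uri ttl 3600
    server s1 127.0.0.1:8080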
Do you have a plan to support Docker?
global
nuster cache on data-size 100m
defaults
retries 3
option redispatch
option dontlognull
timeout client 30s
timeout connect 30s
timeout server 30s
frontend npm
bind *:2080
mode http
acl is_regist hdr_end(host) -i npm.ourwill.cn:2080
acl is_cdn hdr_end(host) -i cdn.npm.taobao.org:2080
use_backend regist if is_regist
use_backend cdnserver if is_cdn
default_backend regist
backend regist
mode http
nuster cache on
nuster rule all_s2 ttl 3600 code 200,302 key scheme.host.uri
http-request set-header Host registry.npm.taobao.org
server a1 registry.npm.taobao.org:443 ssl verify none
backend cdnserver
mode http
nuster cache on
nuster rule all ttl 86400 code 200,302 key scheme.host.uri
http-request set-header Host cdn.npm.taobao.org
server a2 cdn.npm.taobao.org:443 ssl verify none
haproxy -d -f haproxy.cfg
Looking at the log, I found that all the backends got the same key.
Is there detailed configuration documentation?
Is the versioning strategy of this project to maintain parity with HAProxy? I guess I'm trying to understand whether this truly uses HAProxy's core for each version and then applies Nuster features as patches on top of it... Or is it a completely divergent fork?
For non-experts, it would be good to offer a simpler installation and configuration method.
Can it support transparent proxying?
I set the response header "Last-Modified" in an application.
Therefore, when the application's content expires in the client browser, the browser queries the application with the request header "If-Modified-Since".
I want to use nuster as a cache server, but I don't know how to compare "Last-Modified" with "If-Modified-Since" using nuster.
Is it possible to do this comparison?
Do you have a plan to support CoAP?
^.+\.(jpg|jpeg|gif|bmp|png|ico)$
How do I cache all images?
As the title says.
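A hedged sketch of one way to do this, matching the extensions from the regex above with a path ACL (backend address and TTL are illustrative):
backend be
    nuster cache on
    # case-insensitive match on the image extensions listed in the regex above
    acl is_image path_reg -i \.(jpg|jpeg|gif|bmp|png|ico)$
    nuster rule images ttl 3600 if is_image
    server s1 127.0.0.1:8080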
Does nuster currently support HTTP/2.0? For HTTPS, how does its performance compare with nginx?
Explanation:
We are trying to figure out if nuster can cache GraphQL request/response, since it is a POST-request.
What we get:
CLIENT => NUSTER => GRAPHQL-API => NUSTER => CLIENT
What we wish:
CLIENT => NUSTER => CLIENT
global
nuster cache on data-size 100M
master-worker
nuster nosql off
tune.bufsize 165536
defaults
mode http
retries 3
option redispatch
timeout client 30s
timeout connect 30s
timeout server 30s
log global
frontend fe
bind *:8080
use_backend graphqlBe if { path_beg /graphql }
default_backend frontendBe
backend graphqlBe
option http-buffer-request
nuster cache on
nuster rule allGraphql key method.body
server s1 b:4444
backend frontendBe
nuster cache on
nuster rule allFrontend
server s3 f:3333
Did anybody manage to solve graphQL cache with Nuster?
Environment:
BUG: In the statistics the 'req_hit' does not count the cache hits in "disk only" mode. The value is always 0.
A typical output of the statistics page after many thousands of successful cached requests:
curl localhost/_nuster_stats
GLOBAL
global.nuster.cache.data.size: 1073741824
global.nuster.cache.dict.size: 1073741824
global.nuster.cache.uri: /_nuster_stats
global.nuster.cache.purge_method: PURGE
global.nuster.cache.stats.used_mem: 0
global.nuster.cache.stats.req_total: 151190
global.nuster.cache.stats.req_hit: 0
global.nuster.cache.stats.req_fetch: 25190
global.nuster.cache.stats.req_abort: 0
Hi,
I would like to report a major bug in the "disk only" and "disk sync" modes. Potentially the "disk async" mode suffers from it as well, but that mode was unstable for me, so I will not report on it.
The issue is that the proxy does not release opened file handles, so over time it eats up all the resources of the host and stops working. The proxy does not just lose a file handle when a cache file is created; it seems that every request which hits the cached file can cause a lost handle.
The relevant parts of my haproxy.conf:
global
log 127.0.0.1:601 local0 debug
user root
group root
maxconn 200000
ulimit-n 800000
stats maxconn 10
stats socket /var/run/haproxy.sock mode 666 level admin
nuster nosql off
nuster cache on data-size 1G dict-size 1G disk-cleaner 10 dict-cleaner 10 data-cleaner 10 dir /tmp/nuster_cache uri /_nuster_stats
master-worker
defaults
log global
mode http
option httplog
option forwardfor
option dontlognull
timeout connect 30s
timeout client 1m
timeout server 1m
timeout check 1m
timeout tunnel 1m
timeout queue 1m
frontend http_frontend
bind *:80
mode http
option http-server-close
reqadd X-Forwarded-Proto:\ http
default_backend store_http
http-request set-var(txn.path) path
http-request set-var(txn.request_method) method
backend store_http
mode http
option httpclose
option forwardfor
balance leastconn
acl nuster_getfile var(txn.path) -m beg /getfile/
acl nuster_is_get var(txn.request_method) -m str GET
nuster cache
nuster rule getfile key method.path disk only ttl 2m if nuster_getfile nuster_is_get
How I tested:
I uploaded 100 files and downloaded all the files 4 times.
Before the test there were around 2700 allocated handles on the host:
[root@host ~]# sysctl fs.file-nr
fs.file-nr = 2736 0 4875015
After the test run I waited a few minutes until all the cached files had been deleted, then checked the allocated handles, which were a few hundred more than before:
[root@host ~]# sysctl fs.file-nr
fs.file-nr = 3072 0 4875015
With 'lsof' I see such lines:
haproxy 16321 root 12r REG 0,119 5851055 906042563 /tmp/nuster_cache/6/65/657faf93738efb1a/469281041867902-16d38a70ec2 (deleted)
haproxy 16321 root 13r REG 0,119 1317431 2785096705 /tmp/nuster_cache/0/04/0411a0fea8723f15/41180aa80720705-16d38a6fad7 (deleted)
haproxy 16321 root 14r REG 0,119 3507205 2080421347 /tmp/nuster_cache/c/c9/c9af37ca1d6baa2e/89892140092b8028-16d38a709b6 (deleted)
Counting them:
[root@host ~]# lsof | grep nuster_cache | wc -l
400
So it seems that if there is a request for a cached file, then a file handle is lost for every such request. If I pick a hash from the lsof output, then I can see:
[root@host ~]# lsof | grep 57e7f3f2d52cc982
haproxy 16321 root 105r REG 0,119 6213323 2415968515 /tmp/nuster_cache/5/57/57e7f3f2d52cc982/42c52052d50c4082-16d38a70b45 (deleted)
haproxy 16321 root 155r REG 0,119 6213323 2415968515 /tmp/nuster_cache/5/57/57e7f3f2d52cc982/42c52052d50c4082-16d38a70b45 (deleted)
haproxy 16321 root 159w REG 0,119 6213323 2415968515 /tmp/nuster_cache/5/57/57e7f3f2d52cc982/42c52052d50c4082-16d38a70b45 (deleted)
haproxy 16321 root 422r REG 0,119 6213323 2415968515 /tmp/nuster_cache/5/57/57e7f3f2d52cc982/42c52052d50c4082-16d38a70b45 (deleted)
If I disable the rule "getfile" with the following command, then no more lost file handles are generated:
curl -X POST -H "name: getfile" -H "state: disable" localhost/_nuster_stats
If I kill the haproxy process, then all the allocated handles are freed up.
If I use nuster in "disk off" mode, then I don't see any lost handles.
Environment:
I hope the above information is enough to identify the root cause and fix the issue, because it seems the disk persistence modes currently cannot be used long-term without periodically restarting the proxy service.
Saw a comment link to https://github.com/jiangwenyuan/nuster/wiki/Web-cache-server-performance-benchmark%3A-nuster-vs-nginx-vs-varnish-vs-squid
In addition, I've been running the H2O HTTP server in case you are curious how it fares; it's faster than Nginx on my VPS with maximum tuning. How does it compare to nuster on your beefy hardware? I'm curious. The TechEmpower benchmarks seem to show it's blazing fast.
I can provide you the h2o.conf file if you need it.
Our long term plan is to cache POST (request/response) with Nuster. Note that our requests may be 100KB, but the response may be up to 10MB.
So as a PoC, I'm trying to cache our Swagger usage. It's working in Nginx, but I'd prefer to use Nuster (Haproxy). I'm getting 503's even when trying to bring up the swagger-ui.html.
Please help me diagnose my config:
global
nuster cache on data-size 100m
nuster nosql off
log /dev/log local0 debug
tune.bufsize 65536
defaults
log global
mode http
option httplog
option dontlognull
option http-buffer-request
timeout http-request 65000
timeout client 60s
timeout server 60s
timeout connect 60s
frontend fe
bind *:80
default_backend be
backend be
nuster cache on
nuster rule all
server s1 127.0.0.1:8085
I'm running this in AWS on a t3.xlarge using Docker version 18.06.1-ce with the following command:
docker run -it --rm --name Nuster -v /dev/log:/dev/log -v /root/nuster.cfg:/etc/nuster/nuster.cfg:ro -p 80:80 nuster/nuster
Is this a BUG report or FEATURE request?: Feature
What happened: I like your benchmarks, recently posted on lobste.rs, and I like your thoroughness in documenting your configs. However, would it be feasible to do something a bit bigger and/or more sophisticated than hello world?
I would like to point out that identifiers like "_PROTO_PATTERN_H" and "_TYPES_CACHE_H" do not fit the naming requirements of the C language standard: identifiers beginning with an underscore followed by an uppercase letter are reserved for the implementation.
Would you like to adjust your selection of unique names?
I want to get statistics data per host.
However, only the aggregate statistics can be acquired now.
Do you have a plan to support the above?
How can I perform a health check when using nuster as a http cache proxy? I only want to check the health of the proxy and not the server I want to cache.
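Since nuster is built on HAProxy, HAProxy's monitor-uri should apply; a sketch (the URI name is illustrative) where the proxy answers the health check itself without involving the cached backend:
frontend fe
    bind *:80
    # the proxy replies to this URI with 200 OK by itself,
    # so the check does not depend on the backend being cached (or even up)
    monitor-uri /proxy_health
    default_backend be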
Six months later, I decided to look at Nuster again and, like the first time, I just wasted my time and became disillusioned. This potentially interesting project never got a normal installer and never got rid of its teething bugs.
Apparently it will be faster to wait for caching to be implemented in HAProxy itself.
During testing, the following bug was found: the latest version v2.1.2.19 does not cache anything but HTML. For example, an attempt to cache js or css led to garbage being added to the cache until it was about 80% full.
global
nuster cache on data-size 200m
debug
frontend fe
bind *:80
default_backend be
backend be
mode http
nuster cache on
nuster rule all
server s1 192.168.111.211:8111
-------------------------------------------------------------------------------------------
First GET
------------------------------------------------------------------------------------------
00000000:fe.accept(0004)=0008 from [127.0.0.1:56846] ALPN=<none>
00000000:be.clireq[0008:ffffffff]: GET /static/css/vendor/bootstrap.min.css HTTP/1.1
00000000:be.clihdr[0008:ffffffff]: User-Agent: curl/7.29.0
00000000:be.clihdr[0008:ffffffff]: Host: 127.0.0.1
00000000:be.clihdr[0008:ffffffff]: Accept: */*
[CACHE] Checking rule: all
[CACHE] Calculate key: method.scheme.host.uri.
[CACHE] Got key: GETHTTP127.0.0.1/static/css/vendor/bootstrap.min.css
[CACHE] Checking key existence: NOT EXIST
[CACHE] [REQ] Checking if rule pass: PASS
00000000:be.srvrep[0008:0009]: HTTP/1.1 200 OK
00000000:be.srvhdr[0008:0009]: Content-Type: text/css
00000000:be.srvhdr[0008:0009]: Last-Modified: Mon, 25 Jul 2016 12:53:28 GMT
00000000:be.srvhdr[0008:0009]: Content-Length: 121200
00000000:be.srvhdr[0008:0009]: Accept-Ranges: bytes
00000000:be.srvhdr[0008:0009]: SERVER: hdesign/0.0.1
00000000:be.srvhdr[0008:0009]: Cache-Control: max-age=0
00000000:be.srvhdr[0008:0009]: Expired: Wed, 01 May 2019 15:14:52 GMT
00000000:be.srvhdr[0008:0009]: Date: Thu, 02 May 2019 15:14:52 GMT
[CACHE] [RES] Checking status code: PASS
**[CACHE] To create**
00000001:be.clicls[adfd:ffffffff]
00000001:be.closed[adfd:ffffffff]
------------------------------------------------------------------------------------------
Second GET
-----------------------------------------------------------------------------------------
00000002:fe.accept(0004)=0008 from [127.0.0.1:56850] ALPN=<none>
00000002:be.clireq[0008:ffffffff]: GET /static/css/vendor/bootstrap.min.css HTTP/1.1
00000002:be.clihdr[0008:ffffffff]: User-Agent: curl/7.29.0
00000002:be.clihdr[0008:ffffffff]: Host: 127.0.0.1
00000002:be.clihdr[0008:ffffffff]: Accept: */*
[CACHE] Checking rule: all
[CACHE] Calculate key: method.scheme.host.uri.
[CACHE] Got key: GETHTTP127.0.0.1/static/css/vendor/bootstrap.min.css
[CACHE] Checking key existence: NOT EXIST
[CACHE] [REQ] Checking if rule pass: PASS
00000002:be.srvrep[0008:0009]: HTTP/1.1 200 OK
00000002:be.srvhdr[0008:0009]: Content-Type: text/css
00000002:be.srvhdr[0008:0009]: Last-Modified: Mon, 25 Jul 2016 12:53:28 GMT
00000002:be.srvhdr[0008:0009]: Content-Length: 121200
00000002:be.srvhdr[0008:0009]: Accept-Ranges: bytes
00000002:be.srvhdr[0008:0009]: SERVER: hdesign/0.0.1
00000002:be.srvhdr[0008:0009]: Cache-Control: max-age=0
00000002:be.srvhdr[0008:0009]: Expired: Wed, 01 May 2019 15:15:51 GMT
00000002:be.srvhdr[0008:0009]: Date: Thu, 02 May 2019 15:15:51 GMT
[CACHE] [RES] Checking status code: PASS
**[CACHE] To create**
00000003:be.clicls[adfd:ffffffff]
00000003:be.closed[adfd:ffffffff]
As you can see, the entry is created correctly the first time, but on the repeated request it is not found and is created again; what exactly is created and where is unclear, but memory leaks away.
I say goodbye to this.
Hi,
2 days ago I installed Nuster on my Ubuntu server, using the Docker image nuster/nuster:2.0.8.18.
Here is my configuration:
global
maxconn 4096
user root
group root
daemon
log 127.0.0.1 local0 debug
tune.ssl.default-dh-param 2048
nuster cache on data-size 200m
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
option http-server-close
option forwardfor
maxconn 2048
timeout connect 5s
timeout client 15min
timeout server 15min
frontend http
bind *:80
mode http
bind *:443 ssl crt /etc/nuster/certs/xxx.com.pem
reqadd X-Forwarded-Proto:\ https
http-request redirect prefix http://%[hdr(host),regsub(^www\.,,i)] code 301 if { hdr_beg(host) -i www. }
#ACL letsencrypt
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl
#ACL API
acl bm-api_www hdr_beg(host) -i www.xxx.com
acl bm-api hdr_beg(host) -i xxx.com
use_backend srvs_bm-api if bm-api_www
use_backend srvs_bm-api if bm-api
backend srvs_bm-api
nuster cache on
nuster rule fixtures ttl 10 if { path_beg /xxx/data }
redirect scheme https if !{ ssl_fc }
server srvs_bm-api_dev_1 8080
This morning nuster started to send empty responses for cached content. I had to restart my Docker container.
Do you have any idea about this problem?
thank you
I am trying to use nuster to cache OpenStack Swift.
Swift has a REST API for requesting the listing of a container.
If I use the standard python-swift client, starting from the second attempt the listing goes into an infinite loop.
Example request:
swift -A http://10.10.10.10:8080/auth/v1.0 -U testuser:admin -K password list test
As a result, we get two HTTP requests:
1. GET /v1/AUTH_testuser/test?format=json
2. GET /v1/AUTH_testuser/test?format=json&marker=main2.conf
First request: returns JSON with the list of files.
Second request: makes sure there are no more files by requesting the listing starting after the last file from the previous query. In a normal situation, an empty JSON array is returned.
In tcpdump I saw that the second request got the JSON from the first request back, so python-swift goes into a loop.
If you interrupt the connection and re-issue the second query using curl, the correct answer '[]' is returned.
If you look at the nuster logs, either the same response repeats endlessly, or the second parameter is lost from the key:
...
0000000c:swift-cluster.accept(0004)=000c from [10.76.180.236:41450] ALPN=<none>
0000000c:swift-cluster.clireq[000c:ffffffff]: GET /v1/AUTH_testuser/test?format=json HTTP/1.1
0000000c:swift-cluster.clihdr[000c:ffffffff]: Host: 10.76.163.121:8080
0000000c:swift-cluster.clihdr[000c:ffffffff]: user-agent: python-swiftclient-3.4.0
0000000c:swift-cluster.clihdr[000c:ffffffff]: accept-encoding: gzip
0000000c:swift-cluster.clihdr[000c:ffffffff]: x-auth-token: AUTH_tk9112b306a60241168c8621b9d066b781
[CACHE] Checking rule: r1
[CACHE] Calculate key: method.scheme.host.header_range.path.delimiter.query.param_marker.
GET.HTTP.10.76.163.121:8080./v1/AUTH_testuser/test.?.format=json.
[CACHE] Got key: GET.HTTP.10.76.163.121:8080./v1/AUTH_testuser/test.?.format=json.
[CACHE] Checking key existence: EXIST
[CACHE] Hit
0000000c:swift-cluster.srvrep[000c:ffffffff]: HTTP/1.1 200 OK
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Content-Length: 670
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Object-Count: 4
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Accept-Ranges: bytes
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Storage-Policy: gold
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Bytes-Used: 12269314
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Timestamp: 1533551033.27059
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Content-Type: application/json; charset=utf-8
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Trans-Id: txd8c490debd2f4bc0a627d-005b69a2f1
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Openstack-Request-Id: txd8c490debd2f4bc0a627d-005b69a2f1
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Date: Tue, 07 Aug 2018 13:47:29 GMT
...
00000057:swift-cluster.clireq[000c:ffffffff]: GET /v1/AUTH_testuser/test?format=json&marker=main2.conf HTTP/1.1
00000057:swift-cluster.clihdr[000c:ffffffff]: Host: 10.76.163.121:8080
00000057:swift-cluster.clihdr[000c:ffffffff]: user-agent: python-swiftclient-3.4.0
00000057:swift-cluster.clihdr[000c:ffffffff]: accept-encoding: gzip
00000057:swift-cluster.clihdr[000c:ffffffff]: x-auth-token: AUTH_tk9112b306a60241168c8621b9d066b781
00000057:swift-cluster.srvrep[000c:ffffffff]: HTTP/1.1 200 OK
00000057:swift-cluster.srvhdr[000c:ffffffff]: Content-Length: 670
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Object-Count: 4
00000057:swift-cluster.srvhdr[000c:ffffffff]: Accept-Ranges: bytes
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Storage-Policy: gold
00000057:swift-cluster.srvhdr[000c:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Bytes-Used: 12269314
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Timestamp: 1533551033.27059
00000057:swift-cluster.srvhdr[000c:ffffffff]: Content-Type: application/json; charset=utf-8
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Trans-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Openstack-Request-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000057:swift-cluster.srvhdr[000c:ffffffff]: Date: Tue, 07 Aug 2018 13:47:29 GMT
00000058:swift-cluster.clireq[000c:ffffffff]: GET /v1/AUTH_testuser/test?format=json&marker=main2.conf HTTP/1.1
00000058:swift-cluster.clihdr[000c:ffffffff]: Host: 10.76.163.121:8080
00000058:swift-cluster.clihdr[000c:ffffffff]: user-agent: python-swiftclient-3.4.0
00000058:swift-cluster.clihdr[000c:ffffffff]: accept-encoding: gzip
00000058:swift-cluster.clihdr[000c:ffffffff]: x-auth-token: AUTH_tk9112b306a60241168c8621b9d066b781
00000058:swift-cluster.srvrep[000c:ffffffff]: HTTP/1.1 200 OK
00000058:swift-cluster.srvhdr[000c:ffffffff]: Content-Length: 670
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Object-Count: 4
00000058:swift-cluster.srvhdr[000c:ffffffff]: Accept-Ranges: bytes
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Storage-Policy: gold
00000058:swift-cluster.srvhdr[000c:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Bytes-Used: 12269314
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Timestamp: 1533551033.27059
00000058:swift-cluster.srvhdr[000c:ffffffff]: Content-Type: application/json; charset=utf-8
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Trans-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Openstack-Request-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000058:swift-cluster.srvhdr[000c:ffffffff]: Date: Tue, 07 Aug 2018 13:47:29 GMT
...
And so on; there are no [CACHE] lines in the log.
In a normal situation, the response to the second request is logged as follows:
0000056b:swift-cluster.accept(0004)=000a from [10.76.180.236:41862] ALPN=<none>
0000056b:swift-cluster.clireq[000a:ffffffff]: GET /v1/AUTH_testuser/test?format=json&marker=main2.conf HTTP/1.1
0000056b:swift-cluster.clihdr[000a:ffffffff]: Host: 10.76.163.121:8080
0000056b:swift-cluster.clihdr[000a:ffffffff]: User-Agent: curl/7.61.0
0000056b:swift-cluster.clihdr[000a:ffffffff]: Accept: */*
0000056b:swift-cluster.clihdr[000a:ffffffff]: Accept-Encoding: gzip
0000056b:swift-cluster.clihdr[000a:ffffffff]: X-Auth-Token: AUTH_tk9112b306a60241168c8621b9d066b781
[CACHE] Checking rule: r1
[CACHE] Calculate key: method.scheme.host.header_range.path.delimiter.query.param_marker.
[CACHE] Got key: GET.HTTP.10.76.163.121:8080./v1/AUTH_testuser/test.?.format=json&marker=main2.conf.main2.conf.
[CACHE] Checking key existence: EXIST
[CACHE] Hit
0000056b:swift-cluster.srvrep[000a:ffffffff]: HTTP/1.1 200 OK
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Content-Length: 2
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Container-Object-Count: 4
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Accept-Ranges: bytes
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Storage-Policy: gold
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Container-Bytes-Used: 12269314
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Timestamp: 1533551033.27059
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Content-Type: application/json; charset=utf-8
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Trans-Id: txb64e1f0088aa44739aa2c-005b69a467
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Openstack-Request-Id: txb64e1f0088aa44739aa2c-005b69a467
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Date: Tue, 07 Aug 2018 13:53:43 GMT
0000056c:swift-cluster.clicls[000a:ffffffff]
0000056c:swift-cluster.closed[000a:ffffffff]
Hi, is there an easy way to use it with nginx and WordPress? Thank you.
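A minimal, hedged starting point for putting nuster in front of an nginx+WordPress backend (addresses, sizes and TTL are illustrative; cookie and logged-in-user handling is deliberately left out):
global
    nuster cache on data-size 100m
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend fe
    bind *:80
    default_backend wp
backend wp
    nuster cache on
    # skip the admin and login paths, cache everything else for 5 minutes
    acl is_admin path_beg /wp-admin /wp-login.php
    nuster rule wp ttl 300 if !is_admin
    server nginx1 127.0.0.1:8080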
I'm wondering whether Nuster could communicate with backend microservices using the WebSocket protocol or MessagePack instead of HTTP, for lower latency and less overhead?
Is this a BUG report or FEATURE request?:
FEATURE request
I'll support zabio3.
Last-Modified support and cache search would be nice.
I'd like to be able to extend the TTL of cache records at the rule level.
The idea:
Cache a private user page for a user session for, say, 5 minutes; if the user accesses the page again within those 5 minutes, extend the record's TTL for another 5 minutes.
Perhaps this can already be achieved somehow, but I did not have the patience or brains to figure out how :)
I want to search cached content by URL.
I am very interested in "nuster". (Especially, cache purging by regex : ) )
I am really sorry to have asked you so much.
I'm trying to play around with Nuster in front of nginx/varnish and I've stumbled upon an issue where, once a key is set, the next hit doesn't return any output back to the client.
The first request goes all good, but the second one (which should return the cached content) just hangs and doesn't output anything resulting in a non-working site.
It seems to only be when using Varnish as backend, and also sometimes on static files (a .woff file for example after a ctrl+f5)
I was wondering if I'm doing something wrong or this is a bug? This is preventing me from considering it for general use so I hope you can point me in the right direction :)
Nuster config:
global
log 127.0.0.1 local0
nuster cache on data-size 250m uri /_nuster
maxconn 1000000
daemon
nbproc 2
tune.maxaccept -1
tune.ssl.default-dh-param 2048
tune.h2.max-concurrent-streams 1000
user nginx
group nginx
debug
defaults
retries 3
maxconn 1000000
option redispatch
option dontlognull
option http-keep-alive
timeout client 300s
timeout connect 300s
timeout server 300s
http-reuse always
mode http
log global
frontend frontend
bind *:80
bind *:443 ssl crt /etc/ssl/certs/cert.pem alpn h2,http/1.1
#redirect http to https
redirect scheme https code 301 if !{ ssl_fc }
rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains;\ preload
rspadd X-Frame-Options:\ DENY
acl is_get method GET HEAD
use_backend cache unless !is_get
default_backend nocache
backend cache
nuster cache on
nuster rule all ttl 10s
server varnish01 /var/run/varnish.sock
http-request add-header X-Forwarded-Proto https if { ssl_fc }
http-response set-header X-Nuster-Cache ON
backend nocache
server nginx01 /var/run/nginx.sock
http-request add-header X-Forwarded-Proto https if { ssl_fc }
http-response set-header X-Nuster-Cache OFF
the output of Nuster:
First request:
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace
Using epoll() as the polling mechanism.
[CACHE] on, data_size=262144000
00000000:frontend.accept(0005)=0009 from [1.1.1.1:60308] ALPN=h2
00000000:frontend.clireq[0009:ffffffff]: GET / HTTP/1.1
00000000:frontend.clihdr[0009:ffffffff]: user-agent: curl/7.61.0
00000000:frontend.clihdr[0009:ffffffff]: accept: */*
00000000:frontend.clihdr[0009:ffffffff]: host: www.domain.com
[CACHE] Checking rule: all
[CACHE] Calculate key: method.scheme.host.uri.
[CACHE] Got key: GET.HTTPS.www.domain.com./.
[CACHE] Checking key existence: NOT EXIST
[CACHE] [REQ] Checking if rule pass: PASS
00000000:cache.srvrep[0009:000a]: HTTP/1.1 200 OK
00000000:cache.srvhdr[0009:000a]: Server: nginx/1.14.0
00000000:cache.srvhdr[0009:000a]: Date: Wed, 31 Oct 2018 12:54:18 GMT
00000000:cache.srvhdr[0009:000a]: Content-Type: text/html; charset=UTF-8
00000000:cache.srvhdr[0009:000a]: X-Powered-By: PHP/7.2.11
00000000:cache.srvhdr[0009:000a]: X-Pingback: https://www.domain.com/cms/xmlrpc.php
00000000:cache.srvhdr[0009:000a]: Link: <https://www.domain.com/>; rel=shortlink
00000000:cache.srvhdr[0009:000a]: X-Varnish: 393226
00000000:cache.srvhdr[0009:000a]: Age: 0
00000000:cache.srvhdr[0009:000a]: Via: 1.1 varnish (Varnish/6.0)
00000000:cache.srvhdr[0009:000a]: Accept-Ranges: bytes
00000000:cache.srvhdr[0009:000a]: Content-Length: 25245
00000000:cache.srvhdr[0009:000a]: Connection: keep-alive
[CACHE] [RES] Checking status code: PASS
[CACHE] To create
00000001:frontend.clicls[0009:ffffffff]
00000001:frontend.closed[0009:ffffffff]
Second request, should be cached now:
[CACHE] Calculate key: method.scheme.host.uri.
[CACHE] Got key: GET.HTTPS.www.domain.com./.
[CACHE] Checking key existence: EXIST
[CACHE] Hit
and just sits there and does nothing anymore until killed.
I hope you can point me in the right direction, awesome project!
After I upgraded Nuster to v2.0.5.18, one of my ACLs doesn't work anymore.
I used the following ACL to cache all content with a specific Content-Type, in my case "image/jpeg" and "image/png".
acl CacheContentTypeRule res.hdr(Content-Type) -f /etc/nuster/file_pattern_for_cache.txt
nuster rule resCache ttl 3600 if CacheContentTypeRule
#cat /etc/nuster/file_pattern_for_cache.txt
image/jpeg
image/png
But after the upgrade it seems this ACL doesn't work anymore. The other ACLs are still working correctly.
Environment:
I saw there were some "Fix incorrect acl check" fixes in this version. So is this a bug, or maybe a misconfiguration of the ACL on my side?
Thx,