ngx_upstream_jdomain

An asynchronous domain name resolution module for nginx upstream.

This module allows you to use a domain name in an upstream block and expect the domain name to be dynamically resolved so your upstream may be resilient to DNS entry updates.

The module does not perform DNS resolution automatically on a timer. Instead, resolution is prompted by requests to the given upstream: if nginx serves a connection bound for a jdomain upstream and the configured interval has elapsed, the module will perform a DNS lookup.

The module is compatible with other upstream scope directives. This means you may populate an upstream block with multiple jdomain directives, multiple server directives, keepalive, load balancing directives, etc. Note that unless another load balancing method is specified in the upstream block, this module makes use of the default round robin load balancing algorithm built into nginx core.

Important Note: Should an alternate load balancing algorithm be specified, it must come before the jdomain directive in the upstream block! Otherwise, nginx will crash at runtime! This is because many other load balancing modules explicitly extend the built-in round robin and thus end up clobbering the jdomain initialization handlers, since jdomain is technically a load balancer module as well. While this may not be the case with all load balancer modules, it's better to stay on the safe side and place jdomain after them.
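
For example, a configuration using the built-in least_conn method would place it before the jdomain directive, like so:

upstream backend {
	least_conn;
	jdomain example.com;
}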

Important Note: Due to the non-blocking nature of this module and the fact that its DNS resolution is triggered by incoming requests, the request that prompts a lookup will still be forwarded to the upstream that was resolved and cached before the DNS lookup happens. Depending on the scenario, this could result in a one-off failure when the state of an upstream changes. Keep this in mind to ensure graceful transitions of your upstreams.

This repository is a fork of a repository originally authored by wdaike. As that project is no longer maintained, this repository aims to be its successor and is now several features ahead.

Installation

Build nginx with this repository as a static or dynamic module.

./configure --add-module=/path/to/this/directory
make
make install
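
Alternatively, you may build it as a dynamic module (a sketch; adjust the paths to your environment):

./configure --add-dynamic-module=/path/to/this/directory
make
make install

Then load the module at the top of your nginx.conf:

load_module modules/ngx_http_upstream_jdomain_module.so;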

Usage

resolver 8.8.8.8; # Your DNS server (here, Google's public resolver)

# Basic upstream using domain name defaulting to port 80.
upstream backend_01 {
	jdomain example.com;
}

# Basic upstream specifying different port.
upstream backend_02 {
	jdomain example.com port=8080;
}

# Upstream with a backup server to use in case of host not found or format
# errors on DNS resolution.
upstream backend_03 {
	server 127.0.0.2 backup;
	jdomain example.com;
}

# Upstream which will use backup for any and all DNS resolution errors.
upstream backend_04 {
	server 127.0.0.2 backup;
	jdomain example.com strict;
}

server {
	listen 127.0.0.2:80;
	return 502 'An error.';
}
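
# A server may then consume one of these upstreams as usual, for example by
# proxying to it. A minimal sketch:
server {
	listen 80;
	location / {
		proxy_pass http://backend_01;
	}
}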

Synopsis

Syntax: jdomain <domain-name> [port=80] [max_ips=4] [interval=1] [ipver=4|6] [strict]
Context: upstream
Attributes:
	port:       Backend's listening port.                                      (Default: 80)
	max_ips:    IP buffer size. Maximum number of resolved IPs to cache.       (Default: 4)
	interval:   Minimum number of seconds between DNS resolutions.             (Default: 1)
	ipver:      Restrict resolution to IPv4 (4) or IPv6 (6) addresses;
	            0 allows both address families.                                (Default: 0)
	strict:     Require the DNS resolution to succeed and return addresses,
	            otherwise mark the underlying server and peers as down and
	            force use of other servers in the upstream block if there
	            are any present. A failed resolution can be a timeout, DNS
	            server failure, refused query, response with no addresses,
	            etc.

See https://www.nginx.com/resources/wiki/modules/domain_resolve/ for details.
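
For example, a single jdomain directive exercising these attributes might look like this (a sketch; the values are illustrative):

upstream backend {
	jdomain example.com port=8080 max_ips=8 interval=5 ipver=4;
}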

Development

Prerequisites

To facilitate local development and enable you to build and test the module, you'll need some tools.

  • Docker: to provide an environment to easily reproduce ideal conditions for building and testing.
  • act: to simulate executing GitHub Actions workflows locally, to save you from pushing commits just to watch the CI fail.
  • rust: dependency of cargo-make.
  • cargo-make: to run common development tasks such as building, testing, and formatting code.

Task Runner

cargo-make is an advanced task runner that will enable you to easily perform common development operations like formatting the code, building the module, running the test suite, and running code analysis. You can see the task definitions in the file Makefile.toml. Installing cargo-make will result in a standalone executable called makers as well as a cargo extension which can be executed via cargo make. As this project is not a rust crate, it is recommended to simply use makers.

Also note that for simplicity's sake, the task runner uses docker to run all tasks. This means the built binary is not targeting your host platform.

Default Task

As a convenience, the default task (i.e. simply running makers with no arguments) will begin an interactive bash session inside the docker container used for this project.

This should help with debugging and general workflow.

Formatting

Incorrectly formatted code will cause the GitHub Actions linting job to fail. To avoid this, you can run the format task before pushing new changes, like so:

makers format

This formatting is performed by a tool called clang-format. You can find the config options for this defined in the file ./.clang-format.

Building

You can build nginx with the module by running the build task, like so:

makers build

This will output a ./bin/ directory, which will contain the nginx source for the version of nginx defined in the file ./.env, as well as an nginx binary at ./bin/sbin/nginx. You can add the directories in ./bin/workdir/src/ to your editor's include path to facilitate local development.

Static Code Analysis

You can run a static analysis on the code via the analyse task:

makers analyse

This analysis is performed by a tool called clang-tidy. You can find the config options for this defined in the file ./.clang-tidy.

Testing

You can run the test suite using the test task, like so:

makers test

Debugging

We can use valgrind and gdb on nginx from inside the container.

First open an interactive shell in the container with:

$ makers

We'll use that session to run valgrind:

$ valgrind --vgdb=full --vgdb-error=0 /github/workspace/bin/static/nginx -p/github/workspace/t/servroot -cconf/nginx.conf
==15== Memcheck, a memory error detector
==15== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==15== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==15== Command: /github/workspace/bin/static/nginx -p/github/workspace/t/servroot -cconf/nginx.conf
==15==
==15== (action at startup) vgdb me ...
==15==
==15== TO DEBUG THIS PROCESS USING GDB: start GDB like this
==15==   /path/to/gdb /github/workspace/bin/static/nginx
==15== and then give GDB the following command
==15==   target remote | /usr/lib64/valgrind/../../bin/vgdb --pid=15
==15== --pid is optional if only one valgrind process is running
==15==

Next, find the container identifier so we can open another session inside it:

$ docker ps
CONTAINER ID        IMAGE                                     COMMAND             CREATED             STATUS              PORTS                    NAMES
55fab1e069ba        act-github-actions-nginx-module-toolbox   "bash"              4 seconds ago       Up 3 seconds        0.0.0.0:1984->1984/tcp   serene_newton

Use either the name or ID to execute a bash session inside the container:

$ docker exec -it serene_newton bash

We'll use this session to start gdb and target the valgrind gdb server we started in the other session:

$ gdb /github/workspace/bin/static/nginx
GNU gdb (GDB) Red Hat Enterprise Linux 8.0.1-30.amzn2.0.3
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /github/workspace/bin/static/nginx...done.
(gdb)

From the gdb prompt, target the valgrind process and begin debugging:

(gdb) target remote | /usr/lib64/valgrind/../../bin/vgdb --pid=15
Remote debugging using | /usr/lib64/valgrind/../../bin/vgdb --pid=15
relaying data between gdb and process 15
warning: remote target does not support file transfer, attempting to access files from local filesystem.
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
0x0000000004000ef0 in _start () from /lib64/ld-linux-x86-64.so.2
Missing separate debuginfos, use: debuginfo-install glibc-2.26-35.amzn2.x86_64
(gdb)

Running GitHub Actions

With act, you can simulate the workflow that will run on GitHub servers once you push changes.

There is more than one job in the main workflow, so you need to specify which job to run when you invoke act. For example, you can use this command to run the code format validation:

act -vj lint

Note that the lint job does not format your code; it only checks that the formatting is as expected.

Also note that -v is used to enable verbose mode to give more visibility into everything act is doing.

The jobs you can (and should) run locally are lint, build, analyse, and test. The test job depends on the output from the build job. To keep the output from the build job, you can add the -b flag to act, or you may simply use the task runner to build.
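
For example, a sketch of running the build job and then the test job while keeping the build output, using the flags mentioned above:

act -b -vj build
act -b -vj test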

Known Issues

At the moment? None! 🎉

If you discover a bug or have a question to raise, please open an issue.

Original Author

wdaike [email protected] (https://github.com/wdaike), Baidu Inc.

ngx_upstream_jdomain's People

Contributors

itpp16, nicholaschiasson, pohchallenge, rs, splitice

ngx_upstream_jdomain's Issues

Support configurability of the NGX_RESOLVE_ errors to use fallback for

Currently, we have hardcoded the cases of NGX_RESOLVE_FORMERR and NGX_RESOLVE_NXDOMAIN as errors for which we always use the fallback. It would be more useful to allow the configuration to select which resolve errors should always use the fallback. This could possibly be an enhancement of the strict attribute.

See here for list of errors: http://lxr.nginx.org/source/xref/nginx/src/core/ngx_resolver.h#27

#define NGX_RESOLVE_FORMERR   1
#define NGX_RESOLVE_SERVFAIL  2
#define NGX_RESOLVE_NXDOMAIN  3
#define NGX_RESOLVE_NOTIMP    4
#define NGX_RESOLVE_REFUSED   5
#define NGX_RESOLVE_TIMEDOUT  NGX_ETIMEDOUT
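
A hypothetical syntax for this (not implemented; for illustration only) could extend the strict attribute with a list of error names:

# Hypothetical syntax, for illustration only.
jdomain example.com strict=formerr,nxdomain,timedout;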

error: missing initializer for field ‘sin_family’ of ‘struct sockaddr_in’

I got an error when building and I'm not sure why

-o objs/addon/src/ngx_http_upstream_jdomain_module.o \
../ngx_upstream_jdomain/src/ngx_http_upstream_jdomain_module.c

../ngx_upstream_jdomain/src/ngx_http_upstream_jdomain_module.c:110:15: error: missing initializer for field ‘sin_family’ of ‘struct sockaddr_in’ [-Werror=missing-field-initializers]
static struct sockaddr_in NGX_JDOMAIN_INVALID_ADDR_SOCKADDR_IN = { };
^
In file included from /usr/include/bits/socket.h:151:0,
from /usr/include/sys/socket.h:39,
from src/os/unix/ngx_linux_config.h:44,
from src/core/ngx_config.h:26,
from ../ngx_upstream_jdomain/src/ngx_http_upstream_jdomain_module.c:2:
/usr/include/netinet/in.h:242:5: note: ‘sin_family’ declared here
_SOCKADDR_COMMON (sin);
^
cc1: all warnings being treated as errors
make[1]: *** [objs/addon/src/ngx_http_upstream_jdomain_module.o] Error 1
make[1]: Leaving directory `/root/nginx-1.18.0'
make: *** [build] Error 2

Do not use fallback on erroneous hostname resolutions

As an alternative to #9, it would be preferable to disregard the fallback usage in the case of hostname resolution failures (timeouts or other such network failures). That way, there would be no outage at all in exceptional cases, and the fallback would be used only on valid DNS lookups where the record no longer resolves any peers. This could/should still be configurable as described in this comment on #9.

Linting

Add a linting job and integrate a style validation process with the automated workflow to fail workflows which do not pass.

Fix versioning

The semver version bump on merge requests is always a minor version (the default) due to the misuse of the github.head_ref variable in the workflows of merge commits on master.

github.head_ref is only useful on PR workflows, so we need to find a way to get the name of the ref that was merged into master from a workflow triggered by merging a PR to master... I would like to do it in a robust way if possible, otherwise I suppose there's no issue just using a little scripting and parsing git log to determine what kind of semver bump to apply 🤷.

wdaike/ngx_upstream_jdomain#7

Clone of wdaike/ngx_upstream_jdomain#7.

I expect this could be a rather large change, as it represents changing the underlying way the module caches resolved IP addresses so that they are provided to the upstream via server. I think in doing this, it opens the door to a bigger change to expose the features exposed by the server directive through the jdomain directive. This could potentially be an entire design (breaking) change of the jdomain directive to effectively wrap the server directive, allowing jdomain to support all the same attributes server has, with the added functionality jdomain offers.

Allow nginx startup without doing initial DNS lookup

We should support a flag per jdomain instance indicating if we explicitly want nginx to do the DNS lookup or not on startup.

This has implications on memory management of course, but this could be a very important improvement.

Currently, since nginx is forced to do a DNS lookup for each jdomain occurrence on startup, the startup time can become excessively long if the nginx config includes many jdomain directives (as is the case in my own production config now...).

If we were to allow nginx to start without doing the initial lookup, then nginx could start up very snappy as it usually does, and defer the lookups to later. This is effectively taking the fallback (backup server, as of jdomain 1.0) mechanism to the next step, so I think it shouldn't be that difficult to implement.
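
A hypothetical attribute for this (not implemented; the name is made up for illustration) might look like:

# Hypothetical attribute, for illustration only.
jdomain example.com defer_initial_lookup;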

Test step sometimes halts and times out

From time to time, the test step, specifically when running prove, will come to a halt and cause the step to time out and fail.

This produces false failures and is very annoying if occurring on a workflow on master branch, since there's no way to re-run workflows to show the build actually was good.

Not able to resolve the VPC endpoint of AWS ES service

Hi,

I have set up Nginx with ngx_upstream_jdomain to point to a VPC endpoint exposed over AWS Elasticsearch Service.

  resolver 127.0.0.34;
  
  upstream backend {
    jdomain xx.xx.xx.xx.com port=443 max_ips=1 interval=20 strict;
    keepalive 24;
  }
  
  location / {
     proxy_pass https://backend;
  }

But when I change the data nodes of the cluster, triggering a blue/green deployment and assigning a new IP address to the VPC endpoint, Nginx is not able to connect to the new endpoint.

I am using:
Nginx - 1.19.2
Jdomain - 1.1.5

Tooling for local development

It would be nice to have some tooling for this project for executing jobs, such as building or running tests.

Consider npm, cargo-make, or alternatives.

Improve README.md

Document the project better.

  • Features: all supported directive attributes and what they do.
  • Development tools to use for local development: docker, act, etc.
  • General instructions for local development: building, testing, running github actions locally, etc.

Make sure alternative server is up in logic for strict

When applying the strict fallback, we only check if the upstream has other servers but not if any of those other servers are actually up. This could be problematic if any go down (if there are only bad jdomain servers for example).

To solve this, instead of just looking at the count of servers, we should quickly loop through the servers and check for server->down, breaking the moment we find one with false.

Supported Nginx Versions

What versions of Nginx support this module? On the latest version (v1.21.6) I keep seeing the following error:

module "/etc/nginx/modules/ngx_http_upstream_jdomain_module.so" version 1021006 instead of 1016001 in /etc/nginx/nginx.conf:2

I'm not sure how best to troubleshoot this, or if support for this module only runs up to a certain older version of nginx. I've tested this and it works currently on v1.18.0.

Change DNS query trigger to timer event basis

This module would really be cleaner if the trigger for the DNS query were a timer event. That way, all jdomain DNS entries would be self-updating and would not require any traffic to an upstream just to keep them up to date with the DNS record.

DNS resolution state bug

Hello,

There is a bug in how resolve.status is set.

When nginx actually does the lookup, ngx_http_upstream_jdomain_resolve_handler is called from polled events, which works fine.

But when the TTL is automatic (no valid= argument to the resolver directive) and it is very high (like 30+ seconds, maybe even less), ngx_http_upstream_jdomain_resolve_handler is called with cached values from ngx_resolve_name_locked at line 669.

[Screenshot: debugger session showing the call stack]

As you can see the function stack in bottom left, ngx_http_upstream_init_jdomain_peer directly calls ngx_http_upstream_jdomain_resolve_handler.

The bug is caused when the above happens: ngx_http_upstream_jdomain_resolve_handler sets NGX_JDOMAIN_STATUS_DONE for the instance resolve status, then returns to ngx_http_upstream_init_jdomain_peer, which sets NGX_JDOMAIN_STATUS_WAIT as the last step in the loop.

This bug is irreversible: once it happens and the instance gets set to NGX_JDOMAIN_STATUS_WAIT, it never does another lookup again and stays stuck with the old peer address.

I think fixing this bug will resolve #60 and #61

Use shared memory

Would be better to share jdomain state among all workers in order to save on redundant DNS queries and also keep all workers in sync when an update occurs.

If using fallback, should stop respecting the interval period

The fallback becomes the peer to use in the case of DNS resolution failure. In many cases, this can be due to a sporadic failure, so it becomes undesirable to use the fallback address for the full interval duration. Allowing one or a few errors is acceptable, but allowing errors for several seconds, possibly minutes, because the interval is configured to match a DNS TTL can be very bad.

In the case where the fallback address is being used, we should attempt the DNS resolution until it succeeds.

Test against newer health check module

I think we should try to prefer this newer health check module. It is noted that it is still in development; however, it has support for prometheus output and also the nginx stream module.

Update the ./.github/actions/nginx-module-toolbox/Dockerfile and ./scripts/build.sh files to make nginx build against this healthcheck module rather than the current one. One thing to be careful of is the patches. I know this new repo uses a weird naming convention for those patches...

num peerps does not match max_ips

Hey @nicholaschiasson, firstly thanks for the module here.

I'm looking through the code, trying to understand the reason for this block of code below:

if (j != instance[i].conf.max_ips) {
	ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "ngx_http_upstream_jdomain_module: num peerps does not match max_ips");
	return NGX_ERROR;
}

What's the reason that peer pointers & max_ips have to be the same? What I'm seeing is that if an upstream server has 2 separate ports such as below:

upstream test {
  jdomain backend_1 port=1000 max_ips=2;
  jdomain backend_1 port=2000 max_ips=2;
}

Starting up nginx will trigger the num peerps does not match max_ips error message.

Changing the upstream to max_ips=1 doesn't work either, e.g.

upstream test {
  jdomain backend_1 port=1000 max_ips=1;
  jdomain backend_1 port=2000 max_ips=1;
}

will also yield the same num peerps does not match max_ips error message.

Trying to understand if this is an intended design constraint or unintended behavior. I'm happy to contribute & help out if you'd like.

Improve `least_conn` test case

In t/004.compatibility_nginx.t, test 3 is supposed to use jdomain with the least_conn algorithm, but the test is actually quite weak, making it hard to determine whether the least_conn algorithm is even working properly.

The test only makes single connections at a time to the test nginx server, resulting in the load balancer effectively using round robin, which is indeed a valid behaviour of least_conn, but not representative of a live scenario where the peer is actually chosen based on least connected peers.

This test case should be improved to simulate many simultaneous connections to the upstream block so we can really see least_conn in action with jdomain upstreams.

Reporting a vulnerability

Hello!

I hope you are doing well!

We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called Private vulnerability reporting, which enables security researchers to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.

Can you enable it, so that we can report it?

Thanks in advance!

PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository

Potential issue with jdomain - keep seeing requests sending to old upstream after DNS update

Hey Folks,

We are seeing a weird issue after upgrading jdomain to the latest release.

We have a very dynamic upstream whose DNS updates quite often. We have been using jdomain to help with resolving the upstream IPs. After upgrading, we are starting to see requests continue to be sent to the old upstream after a DNS update. We have no idea what could be wrong.

Any help will be appreciated,

Our upstream nginx config setup is pretty simple:

upstream upstream-upstream {
	jdomain xxx.xxx.xxx.xxx port=xx;
	keepalive 256;
}

we are using openresty with version 1.19.3.1

Stale DNS Lookup Issue

We are currently using ngx_upstream_jdomain release 1.4.0 as a forwarding proxy for talking to some downstream services that have rotating IP addresses. We run this in Kubernetes clusters in multiple cloud regions. When IP addresses rotate for a subset of our downstreams (for example, today in 2/3 regions and 5/24 pods) we see

2023/08/29 08:32:14 [error] 12#12: ngx_http_upstream_jdomain_module: resolver failed, "www.example.com" (110: Operation timed out)

This issue does not recover by itself, and we stay stuck on a historic IP address for the service. Our jdomain config looks like:

resolver 8.8.8.8 8.8.4.4;

upstream example {
	keepalive 32;
	keepalive_requests 100;
	keepalive_timeout 60s;
	jdomain www.example.com port=443 interval=60;
}

We've experienced the issue on both nginx-1.20.1 and 1.23.3.

Any help on recommended next steps or debugging would be appreciated. The error is a timeout when talking to the DNS server, so I did wonder: does ngx_upstream_jdomain try to re-establish a connection to the DNS servers when there's a connection issue?

Add support for retrying github workflows

After reading some more documentation, it seems it may be possible to trigger workflows from comments on pull requests.

We could add pull_request_comment to the list of on events, and then validate the comment body matches some string like "retry" or something like that.

Evaluate script or complex values in domain

Not 100% sure how to accomplish this yet, but the goal would be to be able to do something like the following:

upstream test {
    server 127.0.0.1:11111 backup;
    jdomain this-is-${some_variable}-an-example.com;
}

This domain name resolution would be guaranteed to fail on load, but I believe it should work at runtime using the context of each request. The reason this becomes difficult, though not impossible, is state management. I believe we would need a growable hash of the state object per evaluation of the domain name with its variables.

Some documentation on how to achieve this:

Build failed with openssl 1.1.1g

Hi, I've been trying to build nginx with this jdomain module but have been encountering the following errors.
I believe it's caused by an incompatibility with openssl 1.1.1.

-o nginx-1.18.0/addon/src/ngx_http_upstream_jdomain.o
/root/ngx_upstream_jdomain/src/ngx_http_upstream_jdomain.c
In file included from src/core/ngx_core.h:60,
from /root/ngx_upstream_jdomain/src/ngx_http_upstream_jdomain.c:8:
/root/ngx_upstream_jdomain/src/ngx_http_upstream_jdomain.c: In function ‘ngx_http_upstream_set_jdomain_peer_session’:
/root/ngx_upstream_jdomain/src/ngx_http_upstream_jdomain.c:605:42: error: dereferencing pointer to incomplete type ‘SSL_SESSION’ {aka ‘struct ssl_session_st’}
605 | ssl_session ? ssl_session->references : 0);
| ^~
src/core/ngx_log.h:93:48: note: in definition of macro ‘ngx_log_debug’
93 | ngx_log_error_core(NGX_LOG_DEBUG, log, __VA_ARGS__)
| ^~~~~~~~~~~
/root/ngx_upstream_jdomain/src/ngx_http_upstream_jdomain.c:600:2: note: in expansion of macro ‘ngx_log_debug2’
600 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP,
| ^~~~~~~~~~~~~~
make[1]: *** [nginx-1.18.0/Makefile:1578: nginx-1.18.0/addon/src/ngx_http_upstream_jdomain.o] Error 1
make[1]: Leaving directory '/root/nginx-1.18.0'
make: *** [Makefile:8: build] Error 2


Add blocking mode

Depends on #48

Blocking mode? For real? Are you serious? I know, it sounds crazy, but I think it should be an option.

Add a directive attribute blocking which when passed will cause the peer init handler to use ngx_parse_url instead of the configured resolver.

The reasons I think this is important are a bit complicated and come down to DNS resolution stability differences between using differing resolvers during runtime... Using this blocking option would ensure the DNS resolution each interval uses the same resolver (I suppose the system one?) as the one used during initialization.
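
A hypothetical usage of the proposed attribute (not implemented) might look like:

# Hypothetical attribute, for illustration only.
jdomain example.com blocking;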

When using the DOMAIN UPSTREAM module, the VTS Module UPSTREAM counter is all 0.

Can this problem be corrected by adjusting any parameters?

1. nginx config

upstream backend {
	jdomain x.x.net port=443;
	keepalive 300;
}

2. nginx-module-vts status

{
	"upstreamZones": {
		"https_backend": [
			{
				"server": "x.x.x.x:443",
				"requestCounter": 0,
				"inBytes": 0,
				"outBytes": 0,
				"responses": {
					"1xx": 0,
					"2xx": 0,
					"3xx": 0,
					"4xx": 0,
					"5xx": 0
				},
				"requestMsecCounter": 0,
				"requestMsec": 0,
				"requestMsecs": {
					"times": [],
					"msecs": []
				},
				"requestBuckets": {
					"msecs": [],
					"counters": []
				},
				"responseMsecCounter": 0,
				"responseMsec": 0,
				"responseMsecs": {
					"times": [],
					"msecs": []
				},
				"responseBuckets": {
					"msecs": [],
					"counters": []
				},
				"weight": 1,
				"maxFails": 1,
				"failTimeout": 10,
				"backup": false,
				"down": false,
				"overCounts": {
					"maxIntegerSize": 18446744073709552000,
					"requestCounter": 0,
					"inBytes": 0,
					"outBytes": 0,
					"1xx": 0,
					"2xx": 0,
					"3xx": 0,
					"4xx": 0,
					"5xx": 0,
					"requestMsecCounter": 0,
					"responseMsecCounter": 0
				}
			}
		],

Does jdomain support sockets?

Hi,
I want to know if jdomain supports the following headers:

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";

or whether my config is wrong.

This is my nginx.conf:

load_module /usr/local/nginx/modules/objs/ngx_http_upstream_jdomain_module.so;
user nginx;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
	worker_connections 1024;
	multi_accept on;
}
http {
	client_max_body_size 0;
	resolver 127.0.0.11 valid=30s;

	server {
		listen 80;
		proxy_buffer_size 128k;
		proxy_buffers 4 256k;
		proxy_busy_buffers_size 256k;
		server_name http://proxy;
		proxy_connect_timeout 1200s;
		proxy_send_timeout 1200s;
		proxy_read_timeout 1200s;
		fastcgi_send_timeout 1200s;
		fastcgi_read_timeout 1200s;

		location ~ ^/(?!(api/)) {
			set $test_arch_archivistica_ui http://test_arch_archivistica_ui:80;
			rewrite /(.*) /$1 break;
			proxy_pass $test_arch_archivistica_ui;
		}

		location /api/notifications_alert {
			rewrite /api/notifications_alert/(.*) /$1 break;
			set $test_notifications_alert test_notifications_alert;
			proxy_pass http://$test_arch_notifications_notifications_alert;
			proxy_http_version 1.1;
			proxy_set_header Upgrade $http_upgrade;
			proxy_set_header Connection "Upgrade";
			proxy_set_header Host $host;
		}
	}

	upstream test_notifications_alert {
		server 127.0.0.2 backup;
		jdomain test_arch_notifications strict interval=10 port=3001;
	}

	server {
		listen 127.0.0.2:80;
		return 502 'An error.';
	}
}
and the way I connect is:

var private_socket = io('192.168.0.227/private-alert', { path: "/test_arch/api/notifications_alert/socket.io" });

private_socket.on('connect', function(msg) {
	console.log('User ${msg} connected')
});

the error I get is:
WebSocket connection to 'ws://192.168.0.227/test_arch/api/notifications_alert/socket.io/?EIO=3&transport=websocket' failed:

thanks in advance :D
