
nginx-demos's Introduction

Misc NGINX Demos from conferences showing different functionality of NGINX and NGINX Plus

  • autoscaling-demo: This demo uses one NGINX Plus instance as a load balancer with two upstream groups, one for NGINX Plus web servers and one for Elasticsearch nodes. All of the instances run in Docker containers. The demo uses both the upstream_conf and status APIs. It shows creating a new NGINX Plus environment and adding and removing containers both manually and with autoscaling (a curl sketch of this dynamic reconfiguration appears after this list).

  • aws-nlb-ha-asg: This demo contains a series of scripts that enable easy deployment of a highly available, all-active, auto-scaling NGINX Plus load-balancing configuration on AWS.

  • consul-api-demo: This demo spins up a bunch of Docker containers and shows NGINX Plus being used in conjunction with Consul, a service discovery platform. It uses the upstream_conf API in NGINX Plus to add the servers registered with Consul and remove the ones that get deregistered, without the need to reload NGINX Plus. This automates the process of upstream reconfiguration in NGINX Plus based on Consul data, using a simple bash script and Consul watches.

  • consul-dns-srv-demo: This demo shows how to use Consul's DNS interface for load balancing with NGINX Plus. It uses DNS SRV records via the "service" parameter of the server directive in the http upstream module, together with the DNS-lookups-over-TCP feature introduced in NGINX Plus R9. This means that NGINX Plus can request the SRV record (port, weight, etc.) in the DNS query and automatically switch the DNS query to TCP if it receives a truncated DNS response over UDP (a configuration sketch appears after this list).

  • coreos-demo: Shows how to use NGINX Plus to load balance an application running in a CoreOS cluster, utilizing fleet and etcd.

  • etcd-demo: This demo spins up a bunch of Docker containers and shows NGINX Plus being used in conjunction with etcd for service discovery. It uses the upstream_conf API in NGINX Plus to add the servers registered with etcd and remove the ones that get deregistered, without the need to reload NGINX Plus. This automates the process of upstream reconfiguration in NGINX Plus based on etcd data, using a simple bash script and 'etcdctl exec-watch'.

  • gcp-lb-ha-asg: This demo contains a series of scripts that enable an easy deployment of a High Availability All Active Auto Scaling NGINX Plus Load Balancing configuration on Google Cloud. Adaptation of a guide found here.

  • mysql-galera-demo: This demo uses NGINX Plus as a TCP load balancer for a MySQL Galera cluster consisting of two mysqld servers. It does round-robin load balancing between the two mysqld servers and also performs active health checks using an xinetd script running on port 9200 inside each mysqld container (a stream configuration sketch appears after this list).

  • nginx-agent-docker: This demo helps build a Docker image to deploy NGINX Plus and NGINX Agent for NGINX Management Suite, with optional support for NGINX App Protect WAF and the NGINX Developer Portal for API Connectivity Manager.

  • nginx-hello: NGINX running as a web server in a Docker container, serving a simple page containing the container's hostname, IP address and port (a docker run example appears after this list).

  • nginx-hello-nonroot: NGINX running as a web server without root privileges in a Docker container, serving a simple page containing the container's hostname, IP address and port.

  • nginx-nms-docker: This demo helps build a Docker image to deploy NGINX Management Suite in containers without Helm. A helper script is provided for Helm deployments.

  • nginx-openstack-heat: Shows how to deploy and configure NGINX Plus to load balance a simple web application in OpenStack using Heat. The demo also shows how to set up NGINX Plus so that it is automatically reconfigured whenever application instances are created or deleted.

  • nginx-swarm-demo: Shows how to use NGINX and NGINX Plus in a Docker Swarm, utilizing the new features of Docker 1.12. Demonstrates load balancing with just Docker Swarm, then with NGINX Open Source, and then with NGINX Plus, including autoscaling the backend containers.

  • oauth2-token-introspection-oss: NGINX OAuth 2.0 Token Introspection (with disk caching)

  • oauth2-token-introspection-plus: NGINX Plus OAuth 2.0 Token Introspection (with keyval caching)

  • random-files: Demo to show random content and upstream_conf. Nick to add more description here

  • redis-demo: This demo uses NGINX Plus as a TCP load balancer for a Redis cluster consisting of 3 Redis nodes in Docker. It does round-robin load balancing between the 3 Redis nodes, leverages the active health check feature of NGINX Plus, and also shows advanced logging using nginScript.

  • zookeeper-demo: This demo spins up a bunch of Docker containers and shows NGINX Plus being used in conjunction with Apache ZooKeeper for service discovery. It uses the upstream_conf API in NGINX Plus to dynamically add or remove servers without the need to reload NGINX Plus. This automates the process of upstream reconfiguration in NGINX Plus based on ZooKeeper data, using a simple bash script and ZooKeeper watches.

  • kubernetes-demo: Shows how to load balance applications on Kubernetes using NGINX and NGINX Plus.

  • mqtt-contiki-demo: Simple MQTT device (mote) for Contiki OS, to demo with the Cooja simulator.
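
Several of the demos above (autoscaling, Consul, etcd, ZooKeeper) share the same underlying pattern: a script watches the service registry and edits an NGINX Plus upstream group over its REST interface, with no reload. The sketch below illustrates that pattern using the current /api endpoint; the host, port, API version and the upstream group name "backend" are assumptions, and the original demos use the older upstream_conf handler, which provides equivalent add/remove semantics.

```
# Hedged sketch of dynamic upstream reconfiguration (adjust host, port,
# API version and upstream group name for your environment).
NGINX_API="http://localhost:8080/api/6/http/upstreams/backend/servers"

# List the servers currently in the upstream group
curl -s "$NGINX_API"

# Add a backend without reloading NGINX Plus
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"server": "10.0.0.10:80"}' "$NGINX_API"

# Remove the server whose ID is 0
curl -s -X DELETE "$NGINX_API/0"
```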
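
For the consul-dns-srv-demo, the two NGINX Plus pieces are a resolver pointing at Consul's DNS interface and the service parameter on the server directive inside an upstream with a shared memory zone. A minimal sketch for the http context; the resolver address and the backend.service.consul name are placeholders that depend on how the demo registers the service in Consul:

```
resolver 10.1.10.227:8600 valid=10s;   # Consul DNS interface (placeholder address)

upstream backend {
    zone backend 64k;
    # Ask for SRV records (host, port, weight) and re-resolve as they change
    server backend.service.consul service=http resolve;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```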
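
The mysql-galera-demo and redis-demo both use the stream module for TCP load balancing together with NGINX Plus active health checks against a sidecar health endpoint. A minimal sketch, assuming two backends and an xinetd-style checker on port 9200 that replies with an HTTP 200 status line when the node is healthy; the addresses and the match pattern are placeholders:

```
stream {
    upstream galera {
        zone galera 64k;
        server 172.17.0.2:3306;
        server 172.17.0.3:3306;
    }

    # The checker writes an HTTP status line on connect; "200 OK" means healthy
    match galera_ok {
        expect ~ "200 OK";
    }

    server {
        listen 3306;
        proxy_pass galera;
        health_check port=9200 interval=5s match=galera_ok;
    }
}
```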
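
The hello image is published on Docker Hub as nginxdemos/hello, so the quickest way to try it is something like the following (the host port is arbitrary):

```
# Run the hello web server and publish it on an arbitrary host port
docker run -d -p 8080:80 nginxdemos/hello

# The returned page shows the container's hostname, IP address and port
curl http://localhost:8080/
```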

Most of the demos have been configured to use Vagrant and Ansible to enable automated deployment.

Prerequisites for Vagrant/Ansible deployments

  1. Install Vagrant using the necessary package for your OS:

    https://www.vagrantup.com/downloads.html
    
  2. Install a provider for Vagrant to use to start VMs.

    The default provider is VirtualBox [Note that only VirtualBox versions 4.0, 4.1, 4.2, 4.3 are supported], which can be downloaded from the following link:
    
    https://www.virtualbox.org/wiki/Downloads
    
    A full list of providers can be found at the following page, if you do not want to use VirtualBox:
    
    https://docs.vagrantup.com/v2/providers/
    
  3. Install Ansible:

    http://docs.ansible.com/ansible/intro_installation.html
    
  4. Clone demo repo

    ```$ git clone git@github.com:nginxinc/NGINX-Demos.git```
    
  5. Copy nginx-repo.key and nginx-repo.crt files for your account to ~/NGINX-Demos/autoscaling-demo/ansible/files/
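
With the prerequisites in place, a typical run looks roughly like this, assuming the demo ships a Vagrantfile at the top of its directory (shown here for the autoscaling demo; the key and certificate paths are whatever your NGINX Plus subscription provides):

```
cd ~/NGINX-Demos/autoscaling-demo
cp /path/to/nginx-repo.crt /path/to/nginx-repo.key ansible/files/
vagrant up
```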


nginx-demos's Issues

ansible galaxy role fails on aws-lb

The galaxy role isn't valid in the requirements.yml for the aws-lb demo.
It references nginxinc.nginx-plus and nginx-oss, but there is only nginxinc.nginx.
I plan to submit a PR with the appropriate changes for this.

Require image for ARM64 architecture

Hi Team,

I am trying to use the nginxdemos/hello image on the arm64 platform, but it seems it is not available for arm64.

I have successfully built the image using the command docker build -t image_name . on the arm64 platform without making any changes in the Dockerfile.

I have used Travis-CI to build and push the image for both the platforms.

Commit Link - odidev@83ee65e

Travis-CI link - https://travis-ci.com/github/odidev/NGINX-Demos/builds/233410934

Docker Hub Link - https://hub.docker.com/repository/registry-1.docker.io/odidev/nginxdemos_hello/tags?page=1&ordering=last_updated

Do you have any plans on releasing arm64 images?

It would be very helpful if an arm64 image were available. If you're interested, I will raise a PR.

JWT Token Introspection Request Fails through NGINX Gateway

I'm trying to set up token introspection with Keycloak using the instructions here:

https://github.com/nginxinc/NGINX-Demos/tree/master/oauth2-token-introspection-oss

Using NGINX as a gateway to do the token introspection fails with 403 Forbidden; however, if I send the token introspection request directly to the Keycloak server, it is successful:

Note, I follow two steps here:

a) Request JWT bearer token from keycloak via NGINX gateway.
b) Make an API request via the NGINX api gateway which uses token introspection to authorize the request.

(Included NGINX configuration at the bottom of this post)

  1. Healthy/Successful Introspection Request (directly against introspection endpoint):
curl -k -v \
     -X POST \
     -u "$KC_CLIENT:$KC_CLIENT_SECRET" \
     -d "token=$BEARER" \
     "https://$KC_SERVER:8443/realms/$KC_REALM/protocol/openid-connect/token/introspect"\
     | jq .
  • Result of curl directly to Keycloak introspection endpoint:
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
{ [41 bytes data]
* TLSv1.3 (IN), TLS handshake, Certificate (11):
{ [1083 bytes data]
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* TLSv1.3 (IN), TLS handshake, Finished (20):
{ [52 bytes data]
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.3 (OUT), TLS handshake, Finished (20):
} [52 bytes data]
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: O=mkcert development certificate; [email protected]
*  start date: May 29 10:24:48 2023 GMT
*  expire date: Aug 29 10:24:48 2025 GMT
*  issuer: O=mkcert development CA; [email protected]; CN=mkcert [email protected]
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
} [5 bytes data]
* Server auth using Basic with user 'WPPI.UKT'
* Using Stream ID: 1 (easy handle 0x55b331ef2db0)
} [5 bytes data]

> POST /realms/hkjc-api-dev/protocol/openid-connect/token/introspect HTTP/2

> Host: 10.0.0.5:8443
> authorization: Basic V1BQSS5VS1Q6VU5RaE9rYml4bDEzTVRwU2ZvUk5KaUFXanVNOHY2cU0=
> user-agent: curl/7.68.0
> accept: */*
> content-length: 1203
> content-type: application/x-www-form-urlencoded

> 
} [5 bytes data]
* We are completely uploaded and fine
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
{ [50 bytes data]
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
} [5 bytes data]
< HTTP/2 200 
< referrer-policy: no-referrer
< x-frame-options: SAMEORIGIN
< strict-transport-security: max-age=31536000; includeSubDomains
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< content-type: application/json
< content-length: 839
< 
{ [5 bytes data]
^M100  2042  100   839  100  1203  25424  36454 --:--:-- --:--:-- --:--:-- 68066
* Connection #0 to host 10.0.0.5 left intact
{
  "exp": 1685513603,
  "iat": 1685513303,
  "jti": "8653cd7a-d205-4626-93e6-5cb3998ead4e",
  "iss": "https://beta.engeneon.com:8443/realms/hkjc-api-dev",
  "aud": [
    "WPPI.UKT",
    "account"
  ],
  "sub": "c391492d-23d9-4d9f-b99d-d3327299b754",
  "typ": "Bearer",
  "azp": "WPPI.UKT",
  "session_state": "8b638382-e431-4f30-a57f-014d26623c08",
  "preferred_username": "service-account-wppi.ukt",
  "email_verified": false,
  "acr": "1",
  "allowed-origins": [
    "/*"
  ],
  "realm_access": {
    "roles": [
      "default-roles-hkjc-api-dev",
      "offline_access",
      "uma_authorization"
    ]
  },
  "resource_access": {
    "WPPI.UKT": {
      "roles": [
        "uma_protection"
      ]
    },
    "account": {
      "roles": [
        "manage-account",
        "manage-account-links",
        "view-profile"
      ]
    }
  },
  "scope": "profile email txn_gp9",
  "sid": "8b638382-e431-4f30-a57f-014d26623c08",
  "clientHost": "10.0.0.4",
  "clientAddress": "10.0.0.4",
  "client_id": "WPPI.UKT",
  "username": "service-account-wppi.ukt",
  "active": true
}

  2. Failing introspection request (through the NGINX API gateway)
  • Curl script:
#!/bin/bash

KC_CLIENT="WPPI.UKT"
KC_CLIENT_SECRET="UNQhOkbixl13MTpSfoRNJiAWjuM8v6qM"
KC_SERVER="10.0.0.5"
KC_CONTEXT="auth"
KC_REALM="hkjc-api-dev"

BEARER=$(curl -k -L -X POST 'https://alpha/auth/realms/hkjc-api-dev/protocol/openid-connect/token'    -H 'Content-Type: application/x-www-form-urlencoded'    --data-urlencode 'client_id=WPPI.UKT'    --data-urlencode 'grant_type=client_credentials'    --data-urlencode 'client_secret=UNQhOkbixl13MTpSfoRNJiAWjuM8v6qM'    --data-urlencode 'scope=txn_gp9'| jq -r  | jq -r '.access_token')

curl -k -v \
     -X POST \
     -u "$KC_CLIENT:$KC_CLIENT_SECRET" \
     -d "token=$BEARER" \
     "https://alpha/" | jq -r .

Curl-side debug:

* TCP_NODELAY set
* Connected to alpha (10.0.0.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
{ [19 bytes data]
* TLSv1.3 (IN), TLS handshake, Certificate (11):
{ [942 bytes data]
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* TLSv1.3 (IN), TLS handshake, Finished (20):
{ [52 bytes data]
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.3 (OUT), TLS handshake, Finished (20):
} [52 bytes data]
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=SG; ST=Changi; L=Singapore; O=Engeneon; OU=Division; CN=Alpha; [email protected]
*  start date: May 18 16:42:48 2023 GMT
*  expire date: May 17 16:42:48 2024 GMT
*  issuer: C=SG; ST=Changi; L=Singapore; O=Engeneon; OU=Division; CN=Alpha; [email protected]
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
} [5 bytes data]
* Server auth using Basic with user 'WPPI.UKT'
* Using Stream ID: 1 (easy handle 0x55e19f784db0)
} [5 bytes data]

> POST / HTTP/2
> Host: alpha
> authorization: Basic V1BQSS5VS1Q6VU5RaE9rYml4bDEzTVRwU2ZvUk5KaUFXanVNOHY2cU0=
> user-agent: curl/7.68.0
> accept: */*
> content-length: 1203
> content-type: application/x-www-form-urlencoded
> 
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
{ [265 bytes data]
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
{ [249 bytes data]
* old SSL session ID is stale, removing
{ [5 bytes data]
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
} [5 bytes data]
* We are completely uploaded and fine
{ [5 bytes data]

< HTTP/2 403 
< server: nginx/1.24.0
< date: Wed, 31 May 2023 06:14:42 GMT
< content-type: text/html
< content-length: 153
< 
{ [153 bytes data]
^M100  1356  100   153  100  1203   4371  34371 --:--:-- --:--:-- --:--:-- 38742
* Connection #0 to host alpha left intact
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>

  • Keycloak server logs indicate the client wasn't found (clientId=null, userId=null, ipAddress=10.0.0.4, error=client_not_found), though the NGINX logs show that the client credentials are correctly converted to base64:
2023-05-31 06:23:02,540 WARN  [org.keycloak.events] (executor-thread-21) type=INTROSPECT_TOKEN_ERROR, realmId=84d8e944-8143-4cc7-8dcd-128e2ec0ebfb, clientId=null, userId=null, ipAddress=10.0.0.4, error=client_not_found
2023-05-31 06:23:02,540 WARN  [org.keycloak.events] (executor-thread-21) type=INTROSPECT_TOKEN_ERROR, realmId=84d8e944-8143-4cc7-8dcd-128e2ec0ebfb, clientId=null, userId=null, ipAddress=10.0.0.4, error=invalid_request, detail='Authentication failed.'

NGINX API gateway logs:

  • Got bearer token from KC via Nginx api gw:
==> /var/log/nginx/access.log <==
10.0.0.6 - - [31/May/2023:06:26:37 +0000] "POST /auth/realms/hkjc-api-dev/protocol/openid-connect/token HTTP/2.0" 200 2109 "-" "curl/7.68.0" "-"
  • Token introspection fails:
==> /var/log/nginx/error.log <==
2023/05/31 06:26:37 [info] 10712#10712: *30 js: DEBUG: BEFORE: OAuth sending introspection request with token: Basic V1BQSS5VS1Q6VU5RaE9rYml4bDEzTVRwU2ZvUk5KaUFXanVNOHY2cU0=

??? -> 2023/05/31 06:26:37 [info] 10712#10712: *30 js: OAuth Got AuthHeader:  Basic my-client-id:my-client-secret

2023/05/31 06:26:37 [info] 10712#10712: *30 js: DEBUG: AFTER: OAuth sending introspection request with token: Basic V1BQSS5VS1Q6VU5RaE9rYml4bDEzTVRwU2ZvUk5KaUFXanVNOHY2cU0=
2023/05/31 06:26:37 [info] 10712#10712: *30 js: OAuth sending introspection request with token: Basic V1BQSS5VS1Q6VU5RaE9rYml4bDEzTVRwU2ZvUk5KaUFXanVNOHY2cU0=
2023/05/31 06:26:37 [error] 10712#10712: *30 js: OAuth unexpected response from authorization server (HTTP 401). undefined
2023/05/31 06:26:37 [info] 10712#10712: *30 js: OAuth token introspection response: {"error":"invalid_request","error_description":"Authentication failed."}
2023/05/31 06:26:37 [warn] 10712#10712: *30 js: OAuth token introspection found inactive token
  • Final access log entry ("403")
==> /var/log/nginx/access.log <==
10.0.0.6 - WPPI.UKT [31/May/2023:06:26:37 +0000] "POST / HTTP/2.0" 403 153 "-" "curl/7.68.0" "-"

Nginx.conf:

js_import scripts/oauth2.js;

map $http_authorization $access_token {
    "~*^Bearer (.*)$" $1;
    default $http_authorization;
}

#OAuth 2.0 Token Introspection configuration
#proxy_cache_path /var/cache/nginx/tokens levels=1 keys_zone=token_responses:1m max_size=10m;
#resolver 8.8.8.8;                  # For DNS lookup of OAuth server
subrequest_output_buffer_size 16k; # To fit a complete response from OAuth server

server {

    listen              443 ssl http2;

    server_name         alpha.engeneon.com;
    ssl_certificate     alpha.engeneon.com.crt;
    ssl_certificate_key alpha.engeneon.com.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    #set $access_token $http_apikey; # Where to find the token. Remove when using Authorization header
    #e.g "https://$KC_SERVER:8443/realms/$KC_REALM/protocol/openid-connect/token/introspect"

    set $oauth_token_endpoint     "https://10.0.0.5:8443/realms/hkjc-api-dev/protocol/openid-connect/token/introspect";
    set $oauth_token_hint         "access_token"; # E.g. access_token, refresh_token
    set $oauth_client_id          "my-client-id"; # Will use HTTP Basic authentication unless empty
    set $oauth_client_secret      "my-client-secret"; # If id is empty this will be used as a bearer token

    proxy_set_header X-Forwarded-For $proxy_protocol_addr;          # To forward the original client's IP address 
    proxy_set_header X-Forwarded-Proto $scheme;                     # to forward the  original protocol (HTTP or HTTPS)

    #Client Step #1: First get a JWT
    location /auth/ {
      proxy_pass https://10.0.0.5:8443/;
    }

    location / {
        auth_request /_oauth2_token_introspection;

        # Any member of the token introspection response is available as $sent_http_token_member
        #auth_request_set $username $sent_http_token_username;
        #proxy_set_header X-Username $username;

        #pass through to API endpoint once the JWT has been authorized by introspection
        proxy_pass http://10.0.0.7;
    }

    location = /_oauth2_token_introspection {
        # This location implements an auth_request server that uses the JavaScript
        # module to perform the token introspection request.
        internal;
        js_content oauth2.introspectAccessToken;
    }

    location = /_oauth2_send_introspection_request {
        # This location is called by introspectAccessToken(). We use the proxy_
        # directives to construct an OAuth 2.0 token introspection request, as per:
        #  https://tools.ietf.org/html/rfc7662#section-2
        internal;
        gunzip on; # Decompress if necessary

        proxy_method      POST;
        proxy_set_header  Authorization $arg_authorization;
        proxy_set_header  Content-Type "application/x-www-form-urlencoded";
        proxy_set_body    "token=$arg_token&token_hint=$oauth_token_hint";
        proxy_pass        $oauth_token_endpoint;

    }

}



ERROR: Service 'nginxplus' failed to build: lstat nginx-repo.crt: no such file or directory

Hi Kunal,

I'm trying to execute the consul-dns-srv-demo. It's a very nice post; I followed the manual installation section, executed all the steps from step 1, and got stuck at step 6 with an error while building the nginxplus image (screenshot attached). When I ran "docker ps" to see which containers are up and running, I didn't get any list like the one shown in the Running Demo screenshot.

BTW, I'm using Ubuntu 14.04 and have installed docker-tools & docker-compose.

Am I following the right way to execute the demo?
Can you please help me execute it?

(screenshot: ngnixerror)

nginx-regex-tester changed my configuration from regextester.conf

I wrote the following settings to regextester.conf:

server {
    listen 9000;
    location / {
        return 200 "Match not found\n";
    }
    location ~ "^/([a-zA-Z]{2})/hq/" {
        return 200 "Match found  Capture Group 1: $1\n";
    }
}

After deploying I get an nginx error, and inside the Docker container I see that my settings were changed to:

server {
    listen 9000;
    location / {
        return 200 "Match not found\n";
    }
    location ~* ^/([a-zA-Z]{2})/hq/ {
        return 200 "Match found  Capture Group 1: $1\n";
    }
}

OK, I restored my config inside the container again, then did nginx -s reload, and again got the wrong config with

 location ~* ^/([a-zA-Z]{2})/hq/

Why?

nginx-regex-tester Build Error

Hi.
Got error while building.

docker-compose up -d
[+] Building 1.7s (10/18)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/nginx:latest 0.9s
=> [internal] load build context 0.0s
=> => transferring context: 135B 0.0s
=> [ 1/14] FROM docker.io/library/nginx:latest@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee 0.0s
=> CACHED [ 2/14] COPY start.sh /usr/local/sbin 0.0s
=> CACHED [ 3/14] RUN chmod +x /usr/local/sbin/start.sh 0.0s
=> CACHED [ 4/14] RUN apt-get update && apt-get install -y -q wget curl apt-transport-https lsb-release ca-certificates gnupg 0.0s
=> CACHED [ 5/14] RUN wget -q -O - http://nginx.org/keys/nginx_signing.key | gpg --dearmor > /usr/share/keyrings/nginx-archive-key 0.0s
=> ERROR [ 6/14] RUN wget -q -O - https://unit.nginx.org/keys/nginx-keyring.gpg | gpg --dearmor > /usr/share/keyrings/nginx-keyrin 0.7s

[ 6/14] RUN wget -q -O - https://unit.nginx.org/keys/nginx-keyring.gpg | gpg --dearmor > /usr/share/keyrings/nginx-keyring.gpg:
#0 0.663 gpg: no valid OpenPGP data found.


failed to solve: executor failed running [/bin/sh -c wget -q -O - https://unit.nginx.org/keys/nginx-keyring.gpg | gpg --dearmor > /usr/share/keyrings/nginx-keyring.gpg]: exit code: 2

aws-lb demo doesn't update the certs

An error occurs using the aws-nlb demo. No matter where the certs are, they don't copy or aren't found. The step that copies them seems fine:

ie
==> ngx-plus: Uploading /Users/tgamull/.ssh/ngx-certs => ~/.ssh/certs

Then the error

ngx-plus: TASK [nginxinc.nginx : (All OSs) Copy NGINX Plus Certificate and License Key] ***
ngx-plus: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
ngx-plus: failed: [127.0.0.1] (item=license/nginx-repo.crt) => {"changed": false, "item": "license/nginx-repo.crt", "msg": "Could not find or access 'license/nginx-repo.crt'\nSearched in:\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/files/license/nginx-repo.crt\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/license/nginx-repo.crt\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/tasks/plus/files/license/nginx-repo.crt\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/tasks/plus/license/nginx-repo.crt\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/files/license/nginx-repo.crt\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/license/nginx-repo.crt on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
ngx-plus: An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
ngx-plus: failed: [127.0.0.1] (item=license/nginx-repo.key) => {"changed": false, "item": "license/nginx-repo.key", "msg": "Could not find or access 'license/nginx-repo.key'\nSearched in:\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/files/license/nginx-repo.key\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/license/nginx-repo.key\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/tasks/plus/files/license/nginx-repo.key\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/roles/nginxinc.nginx/tasks/plus/license/nginx-repo.key\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/files/license/nginx-repo.key\n\t/tmp/packer-provisioner-ansible-local/5c6dbc5b-e3b9-8467-4a10-31349bf300e4/license/nginx-repo.key on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}

Facing Issues with Load Balancing using NGINX Load Balancer on AWS EKS

I am deploying a Triton Inference Server on Amazon Elastic Kubernetes Service (Amazon EKS) and using the NGINX Open Source load balancer for load balancing. Our EKS cluster is private (the EKS nodes are in private subnets) so that no one can access it from the outside world.

The Triton Inference Server has three endpoints:
port 8000: for HTTP requests
port 8001: for gRPC requests
port 8002: Prometheus metrics server

First of all, I have created a deployment for Triton on AWS EKS and exposed it using clusterIP: None, so that all the replicas' endpoints are exposed and can be identified by the NGINX load balancer.

apiVersion: v1
kind: Service
metadata:
  name: triton
  labels:
    app: triton
spec:
  clusterIP: None
  ports:
     - protocol: TCP
       port: 8000
       name: http
       targetPort: 8000
     - protocol: TCP
       port: 8001
       name: grpc
       targetPort: 8001
     - protocol: TCP
       port: 8002
       name: metrics
       targetPort: 8002
  selector:
    app: triton

Then, I created an image for the NGINX open source load balancer using the configuration below.
Configuration file for NGINX on the EKS node, at /etc/nginx/conf.d/nginx.conf:

resolver kube-dns.kube-system.svc.cluster.local valid=5s;
upstream backend {
   zone upstream-backend 64k;
   server triton.default.svc.cluster.local:8000;
}
 
upstream backendgrpc {
   zone upstream-backend 64k;
   server triton.default.svc.cluster.local:8001;
}
 
server {
   listen 80;
   location / {
     proxy_pass http://backend/;
   }
}
 
server {
        listen 89 http2;
 
        location / {
            grpc_pass grpc://backendgrpc;
        }
}
 
server {
    listen 8080;
    root /usr/share/nginx/html;
    location = /dashboard.html { }
    location = / {
       return 302 /dashboard.html;
    }
} 

The Dockerfile for the NGINX open source LB is:

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY /etc/nginx/conf.d/nginx.conf /etc/nginx/conf.d/default.conf

I have created a ReplicationController for NGINX. To pull the image from the private registry, Kubernetes needs credentials.
The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named ecr-cred.

The nginx-rc file looks like:

 apiVersion: v1
 kind: ReplicationController
 metadata:
   name: nginx-rc
 spec:
   replicas: 1
   selector:
     app: nginx
   template:
     metadata:
       labels:
         app: nginx
     spec:
       imagePullSecrets:
       - name: ecr-cred
       containers:
       - name: nginx
         command: [ "/bin/bash", "-c", "--" ]
         args: [ "nginx; while true; do sleep 30; done;" ]
         imagePullPolicy: IfNotPresent
         image: <Image URL with tag>
         ports:
           - name: http
             containerPort: 80
             hostPort: 8085
           - name: grpc
             containerPort: 89
             hostPort: 8087
           - name: http-alt
             containerPort: 8080
             hostPort: 8086
           - name: triton-svc
             containerPort: 8000
             hostPort: 32309

Now, the issue I am facing is that when the number of pods increases, the NGINX load balancer does not balance load across the newly added pods.

Can anyone help me?

nginxdemos/hello crashes on Raspberry Pi 3 (using 64bit OS)

nginxdemos/hello runs fine on my local machine but crashes on Raspberry Pi 3 (using 64bit OS).

[Logs]    [6/24/2020, 11:52:35 AM] [web] standard_init_linux.go:207: exec user process caused "exec format error"

In my docker-compose file I have the following specifications:

  web:
    image: nginxdemos/hello:latest
    expose:
      - "80"

The image specifies that it is compatible with linux/amd64

I'm deploying via BalenaOS if that's any help and my entire app source is available to view here, but the lines above are all that are relevant.

Nginx regex-tester docker does not build due to missing gpg key

...
 => [ 9/13] RUN printf "deb-src https://packages.nginx.org/unit/debian/ `  0.6s
 => ERROR [10/13] RUN apt-get update && apt-get install -y unit php7.0 un  1.7s
------
 > [10/13] RUN apt-get update && apt-get install -y unit php7.0 unit-php:
#0 0.605 Hit:1 http://deb.debian.org/debian bullseye InRelease
#0 0.605 Hit:2 http://deb.debian.org/debian-security bullseye-security InRelease
#0 0.605 Hit:3 http://deb.debian.org/debian bullseye-updates InRelease
#0 0.782 Get:4 https://packages.nginx.org/unit/debian bullseye InRelease [2815 B]
#0 1.137 Err:4 https://packages.nginx.org/unit/debian bullseye InRelease
#0 1.137   The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
#0 1.143 Reading package lists...
#0 1.701 W: GPG error: https://packages.nginx.org/unit/debian bullseye InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
#0 1.701 E: The repository 'https://packages.nginx.org/unit/debian bullseye InRelease' is not signed.
------
failed to solve: executor failed running [/bin/sh -c apt-get update && apt-get install -y unit php7.0 unit-php]: exit code: 100

When pulling master and running the docker-compose up -d command, the build tries to pull from a repository that does not have the correct key.
This can be fixed by adding the key:

RUN wget -q -O - https://unit.nginx.org/keys/nginx-keyring.gpg | gpg --dearmor > /usr/share/keyrings/nginx-keyring.gpg

and modifying the printf statements to:

RUN printf "deb [signed-by=/usr/share/keyrings/nginx-keyring.gpg] https://packages.nginx.org/unit/debian/ `lsb_release -cs` unit" > /etc/apt/sources.list.d/unit.list
RUN printf "deb-src [signed-by=/usr/share/keyrings/nginx-keyring.gpg] https://packages.nginx.org/unit/debian/ `lsb_release -cs` unit" >> /etc/apt/sources.list.d/unit.list

Docker build fails due to the bookworm repository not having a Release file

➜  nginx-regex-tester git:(master) docker-compose up -d
[+] Building 5.3s (14/17)
 => [internal] load build definition from Dockerfile                                                                                                                              0.0s
 => => transferring dockerfile: 32B                                                                                                                                               0.0s
 => [internal] load .dockerignore                                                                                                                                                 0.0s
 => => transferring context: 2B                                                                                                                                                   0.0s
 => [internal] load metadata for docker.io/library/nginx:latest                                                                                                                   2.4s
 => [internal] load build context                                                                                                                                                 0.0s
 => => transferring context: 135B                                                                                                                                                 0.0s
 => [ 1/13] FROM docker.io/library/nginx:latest@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247                                                           0.0s
 => CACHED [ 2/13] COPY start.sh /usr/local/sbin                                                                                                                                  0.0s
 => CACHED [ 3/13] RUN chmod +x /usr/local/sbin/start.sh                                                                                                                          0.0s
 => CACHED [ 4/13] RUN apt-get update && apt-get install -y -q wget curl apt-transport-https lsb-release ca-certificates gnupg                                                    0.0s
 => CACHED [ 5/13] RUN wget -q -O - http://nginx.org/keys/nginx_signing.key | gpg --dearmor > /usr/share/keyrings/nginx-archive-keyring.gpg                                       0.0s
 => CACHED [ 6/13] RUN rm /etc/nginx/conf.d/*                                                                                                                                     0.0s
 => CACHED [ 7/13] COPY regextester.conf /etc/nginx/conf.d                                                                                                                        0.0s
 => CACHED [ 8/13] RUN printf "deb https://packages.nginx.org/unit/debian/ `lsb_release -cs` unit" > /etc/apt/sources.list.d/unit.list                                            0.0s
 => CACHED [ 9/13] RUN printf "deb-src https://packages.nginx.org/unit/debian/ `lsb_release -cs` unit" >> /etc/apt/sources.list.d/unit.list                                       0.0s
 => ERROR [10/13] RUN apt-get update && apt-get install -y unit php7.0 unit-php                                                                                                   2.7s
------
 > [10/13] RUN apt-get update && apt-get install -y unit php7.0 unit-php:
#0 0.731 Hit:1 http://deb.debian.org/debian bookworm InRelease
#0 0.751 Hit:2 http://deb.debian.org/debian bookworm-updates InRelease
#0 0.785 Hit:3 http://deb.debian.org/debian-security bookworm-security InRelease
#0 1.404 Ign:4 https://packages.nginx.org/unit/debian bookworm InRelease
#0 1.581 Err:5 https://packages.nginx.org/unit/debian bookworm Release
#0 1.581   404  Not Found [IP: 3.125.197.172 443]
#0 1.597 Reading package lists...
#0 2.631 E: The repository 'https://packages.nginx.org/unit/debian bookworm Release' does not have a Release file.
------
failed to solve: executor failed running [/bin/sh -c apt-get update && apt-get install -y unit php7.0 unit-php]: exit code: 100
➜  nginx-regex-tester git:(master) ls

I'm getting this error when I try to launch the local server via Docker using the docker-compose up -d command.

Cybersecurity vulnerability in freetype:2.7-r1

This image includes FreeType 2.7-r1, which contains the following vulnerabilities:

FreeType 2 before 2017-03-26 has an out-of-bounds write caused by a heap-based buffer overflow related to the t1_builder_close_contour function in psaux/psobjs.c.
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8287

FreeType 2 before 2017-03-24 has an out-of-bounds write caused by a heap-based buffer overflow related to the t1_decoder_parse_charstrings function in psaux/t1decode.c.
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8105

Not able to :

Hi

  1. The registry or the image for the automated process of the registry is not available (need to update the README.md).
  2. The copy-paste does not work (need to fix the typo in the script name).
  3. App Protect 4.2.0 is not available (only 4.1.0).
  4. Security Monitoring is not bundled in App Protect WAF, App Protect DoS, or App NAP; I don't know where I can find it.
  5. The manual script does not work.

Regards,

nginx-regex-tester broken running on Windows due to line endings

Steps to reproduce:

  1. Clone the repository
  2. Run a docker-compose up

The container will fail to start with the following error:

docker-compose.exe logs -f
Attaching to regextester
regextester    | /docker-entrypoint.sh: 38: exec: /usr/local/sbin/start.sh: not found
regextester exited with code 127

From running a shell within the container:

root@28e3903efd0b:/usr/local/sbin# ls
start.sh
root@28e3903efd0b:/usr/local/sbin# ./start.sh
bash: ./start.sh: /bin/sh^M: bad interpreter: No such file or directory

I have worked around this by setting LF line endings for the start.sh file and ensuring the path is shared via Docker for Windows.
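
If you hit the same problem, one way to normalize the line endings before building (assuming the usual tooling is available) is:

```
# Convert CRLF to LF in the startup script
dos2unix start.sh

# Or tell Git to keep LF endings on checkout for this clone
git config core.autocrlf input
```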

nginx-regex-tester bugs: Undefined index: frmCaseSensitive & readme typo

For https://github.com/nginxinc/NGINX-Demos/tree/master/nginx-regex-tester, the Docker container's logs report the following error:

docker logs regextester

2019/06/12 03:23:26 [notice] 17#17 php message: PHP Notice:  Undefined index: frmCaseSensitive in /usr/share/nginx/html/regextester.php on line 97

Also, the readme has a typo (a missing hyphen). It currently says:

Setup
From /nginx-regextester: # docker-compose up -d

but the directory should be nginx-regex-tester:

Setup
From /nginx-regex-tester: # docker-compose up -d

Regex tester unitd error during build

Attaching to regextester
regextester  | 2023/06/30 09:29:23 [notice] 7#7: using the "epoll" event method
regextester  | 2023/06/30 09:29:23 [notice] 7#7: nginx/1.25.1
regextester  | 2023/06/30 09:29:23 [notice] 7#7: built by gcc 12.2.0 (Debian 12.2.0-14)
regextester  | 2023/06/30 09:29:23 [notice] 7#7: OS: Linux 5.10.102.1-microsoft-standard-WSL2
regextester  | 2023/06/30 09:29:23 [notice] 7#7: getrlimit(RLIMIT_NOFILE): 1048576:1048576
regextester  | 2023/06/30 09:29:23 [notice] 8#8: start worker processes
regextester  | 2023/06/30 09:29:23 [notice] 8#8: start worker process 9
regextester  | 2023/06/30 09:29:23 [notice] 8#8: start worker process 11
regextester  | 2023/06/30 09:29:23 [notice] 8#8: start worker process 12
regextester  | 2023/06/30 09:29:23 [notice] 8#8: start worker process 13
regextester  | 127.0.0.1 - - [30/Jun/2023:09:29:23 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.88.1" "-"
regextester  | NGINX started
regextester  | unknown option "--modules", try "unitd -h" for available options
regextester  | Unit failed to start: Status
regextester exited with code 1

oauth2-token-introspection-oss - problem with long tokens

Tested the example configuration and Nginx seems to cut off the end of the token sent for introspection to an OAuth server.

I enabled debug logging and can see that the JS script is calling /_oauth2_send_introspection_request with the full token in place. But when the request is sent to the OAuth server, the content length is trimmed to 1263 characters instead of the 1660 characters in the token.

Failures due to js engine 0.8.0

I'm trying to implement the code from the oauth2-token-introspection-oss example, but the JS code in the example has had some failures since js engine 0.8.0.

line 22: <>.toBytes() does not exist anymore
line 38 and 39: reply.responseBody does not exist anymore

Can this be fixed? (Maybe more scripts in this repo have the same issues.)

consul-api-demo works incorrectly

The script.sh doesn't work as expected. When servers no longer exist in Consul, they can't be removed from the NGINX upstream.
After I modified the script as below, the problem was solved.

#!/bin/bash
if [[ -z "$HOST_IP" ]]; then
  echo "HOST_IP not set in consul container. Setting it to 10.1.10.227 (IP address assigned in the Vagrantfile)"
  HOST_IP=10.1.10.227
fi

CURL='/usr/bin/curl'
OPTIONS='-s'
CONSUL_SERVICES_API="http://$HOST_IP:8500/v1/catalog/services"
CONSUL_SERVICE_API="http://$HOST_IP:8500/v1/catalog/service"
STATUS_UPSTREAMS_API="http://$HOST_IP:8080/api/3/http/upstreams"

# Get the list of current NGINX upstreams
upstreams=$($CURL $OPTIONS $STATUS_UPSTREAMS_API | jq -r '. as $in | keys[]')
servers=$($CURL $OPTIONS ${STATUS_UPSTREAMS_API}/${upstreams}/servers)
echo "NGINX upstreams in $upstreams:"
echo $servers

# Loop through the registered servers in consul tagged with production (i.e backend servers to be proxied through nginx) and add the ones not present in the Nginx upstream block
service=$($CURL $OPTIONS $CONSUL_SERVICES_API | jq --raw-output 'to_entries | .[] | select(.value[0] == "production") | .key')
echo "Servers registered with consul:"
echo $service

ports=$($CURL $OPTIONS $CONSUL_SERVICE_API/$service | jq -r '.[] | .ServicePort')
for port in ${ports[@]}; do
  entry=$HOST_IP:$port
  if [[ ! $servers =~ $entry ]]; then
    $CURL -X POST -d '{"server": "'$entry'"}' $OPTIONS "${STATUS_UPSTREAMS_API}/${upstreams}/servers"
    echo "Added $entry to the NGINX upstream group $upstreams!"
  fi
done

# Loop through the NGINX upstreams and remove the ones not present in consul
servers=($($CURL $OPTIONS ${STATUS_UPSTREAMS_API}/${upstreams}/servers | jq  -c '.[]'))
for params in ${servers[@]}; do
# Here is the block I modified!
  if [[ $params =~ "server" ]]; then
    server=$(echo $params | jq '.server')
    id=$(echo $params | jq '.id')
# Modification end!
  else
    continue
  fi

  service=$($CURL $OPTIONS $CONSUL_SERVICES_API | jq --raw-output 'to_entries| .[] | select(.value[0] == "production") | .key')
  ports=$($CURL $OPTIONS $CONSUL_SERVICE_API/$service | jq -r '.[]|.ServicePort')
  found=0
  for port in ${ports[@]}; do
    entry=$HOST_IP:$port
    if [[ $server =~ $entry ]]; then
      echo "$server matches consul entry $entry"
      found=1
      break
    else
      continue
    fi
  done

  if [ $found -eq 0 ]; then
    $CURL -X DELETE $OPTIONS "${STATUS_UPSTREAMS_API}/${upstreams}/servers/${id}"
    echo "Removed $server # $id from nginx upstream block $upstreams!"
  fi
done

Unable to build docker container for nginx-regex-tester

Hello,

I have pulled the latest master of the repo and am following the setup instructions from the README for the nginx-regex-tester demo. However, I am encountering an error when running $ docker-compose up -d:

$ docker-compose up -d
[+] Building 4.1s (15/18)                                                                                                                                                                                                
 => [internal] load build definition from Dockerfile                                                                                                                                                                0.0s
 => => transferring dockerfile: 32B                                                                                                                                                                                 0.0s
 => [internal] load .dockerignore                                                                                                                                                                                   0.0s
 => => transferring context: 2B                                                                                                                                                                                     0.0s
 => [internal] load metadata for docker.io/library/nginx:latest                                                                                                                                                     2.0s
 => [auth] library/nginx:pull token for registry-1.docker.io                                                                                                                                                        0.0s
 => [internal] load build context                                                                                                                                                                                   0.0s
 => => transferring context: 135B                                                                                                                                                                                   0.0s
 => [ 1/13] FROM docker.io/library/nginx:latest@sha256:2ab30d6ac53580a6db8b657abf0f68d75360ff5cc1670a85acb5bd85ba1b19c0                                                                                             0.0s
 => CACHED [ 2/13] COPY start.sh /usr/local/sbin                                                                                                                                                                    0.0s
 => CACHED [ 3/13] RUN chmod +x /usr/local/sbin/start.sh                                                                                                                                                            0.0s
 => CACHED [ 4/13] RUN apt-get update && apt-get install -y -q wget curl apt-transport-https lsb-release ca-certificates gnupg                                                                                      0.0s
 => CACHED [ 5/13] RUN wget -q -O - http://nginx.org/keys/nginx_signing.key | gpg --dearmor > /usr/share/keyrings/nginx-archive-keyring.gpg                                                                         0.0s
 => CACHED [ 6/13] RUN rm /etc/nginx/conf.d/*                                                                                                                                                                       0.0s
 => CACHED [ 7/13] COPY regextester.conf /etc/nginx/conf.d                                                                                                                                                          0.0s
 => CACHED [ 8/13] RUN printf "deb https://packages.nginx.org/unit/debian/ `lsb_release -cs` unit" > /etc/apt/sources.list.d/unit.list                                                                              0.0s
 => CACHED [ 9/13] RUN printf "deb-src https://packages.nginx.org/unit/debian/ `lsb_release -cs` unit" >> /etc/apt/sources.list.d/unit.list                                                                         0.0s
 => ERROR [10/13] RUN apt-get update && apt-get install -y unit php7.0 unit-php                                                                                                                                     1.9s
------                                                                                                                                                                                                                   
 > [10/13] RUN apt-get update && apt-get install -y unit php7.0 unit-php:                                                                                                                                                
#0 0.363 Hit:1 http://deb.debian.org/debian bullseye InRelease                                                                                                                                                           
#0 0.364 Hit:2 http://deb.debian.org/debian-security bullseye-security InRelease                                                                                                                                         
#0 0.381 Hit:3 http://deb.debian.org/debian bullseye-updates InRelease                                                                                                                                                   
#0 1.055 Get:4 https://packages.nginx.org/unit/debian bullseye InRelease [2815 B]                                                                                                                                        
#0 1.193 Err:4 https://packages.nginx.org/unit/debian bullseye InRelease
#0 1.193   The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
#0 1.201 Reading package lists...
#0 1.918 W: GPG error: https://packages.nginx.org/unit/debian bullseye InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
#0 1.918 E: The repository 'https://packages.nginx.org/unit/debian bullseye InRelease' is not signed.
------
failed to solve: executor failed running [/bin/sh -c apt-get update && apt-get install -y unit php7.0 unit-php]: exit code: 100

Any ideas on what the issue could be here?

nginx-regex-tester does not compile with docker-compose

I'm trying to build nginx-regex-tester, but it does not build:

WARNING: Image for service regextester was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating regextester ... 

ERROR: for regextester  a bytes-like object is required, not 'str'

ERROR: for regextester  a bytes-like object is required, not 'str'
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.21/containers/07e0ff960cf5ec624fd607fa560ca69d2882911aa6b46c60149528d3328fd5a7/start

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/compose/service.py", line 625, in start_container
    container.start()
  File "/usr/lib/python3/dist-packages/compose/container.py", line 241, in start
    return self.client.start(self.id, **options)
  File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/container.py", line 1095, in start
    self._raise_for_status(res)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/lib/python3/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("b'driver failed programming external connectivity on endpoint regextester (d441b27e8a6e4807896c03be1998af92359013b35568ebe99d6aee0d5f284e92): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use'")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 11, in <module>
    load_entry_point('docker-compose==1.25.0', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 72, in main
    command()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 128, in perform_command
    handler(command, command_options)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1107, in up
    to_attach = up(False)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1088, in up
    return self.project.up(
  File "/usr/lib/python3/dist-packages/compose/project.py", line 565, in up
    results, errors = parallel.parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/project.py", line 548, in do
    return service.execute_convergence_plan(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 545, in execute_convergence_plan
    return self._execute_convergence_create(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 460, in _execute_convergence_create
    containers, errors = parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 465, in <lambda>
    lambda service_name: create_and_start(self, service_name.number),
  File "/usr/lib/python3/dist-packages/compose/service.py", line 457, in create_and_start
    self.start_container(container)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 627, in start_container
    if "driver failed programming external connectivity" in ex.explanation:
TypeError: a bytes-like object is required, not 'str'

Regex tester build error

 > [regextester 10/13] RUN apt-get update && apt-get install -y unit php7.0 unit-php:
0.566 Hit:1 http://deb.debian.org/debian bookworm InRelease
0.575 Hit:2 http://deb.debian.org/debian bookworm-updates InRelease
0.601 Hit:3 http://deb.debian.org/debian-security bookworm-security InRelease
0.677 Get:4 https://packages.nginx.org/unit/debian bookworm InRelease [2803 B]
0.790 Err:4 https://packages.nginx.org/unit/debian bookworm InRelease
0.790   The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
0.794 Reading package lists...
1.281 W: GPG error: https://packages.nginx.org/unit/debian bookworm InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
1.281 E: The repository 'https://packages.nginx.org/unit/debian bookworm InRelease' is not signed.

Clarification about TLS 1.2/1.3 with windows Registry settings

Dear team,

We are using nginx-1.22.0 on a Windows 2019 server. In the nginx configuration we have enabled TLS 1.2 and TLS 1.3. Now we need clarification: in the Windows registry there is an option to enable/disable specific TLS/SSL versions for both client and server. Do these settings affect the protocols enabled in nginx? How does this work?

NGINX config:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;

Windows Registry Setting:

(screenshot of the Windows registry setting)

health_check does not use local context when performing checks

Why isn't health_check able to use the context set within the location block?

server {
  listen 8080;
  server_name localhost;
  
  set $proxy_host $host;
  proxy_set_header Host $proxy_host;

  location /api/notworking {
    set $proxy_host "override-api-not-working.host.com";
    proxy_pass http://localhost:8000; # proxy_pass gets correct "Host: override-api-not-working.host.com"

    # !!! health_check does not get correct host!!!
    health_check       uri=/__health?from=nginx        passes=3        interval=5s        fails=3;
  }

  location /api/working {
    proxy_set_header Host "override-api-working.host.com";
    proxy_pass http://localhost:8000; # proxy_pass gets correct "Host: override-api-working.host.com"

    # health_check gets correct "Host: override-api-working.host.com"
    health_check       uri=/__health?from=nginx        passes=3        interval=5s        fails=3;
  }
}

location ~ \.php$ {
    proxy_set_header Host $http_host;
    proxy_pass http://unitcpu;
    error_page 502 =503 /apibusy.html;
    health_check uri=/hcheck.php match=server_ok interval=5s;
}
