Comments (11)

silvioprog commented on July 22, 2024

Hello @Al-Muhandis, I took a look at the following log line you posted:

...
Hit process or system resource limit at 1 connections, temporarily suspending accept(). Consider setting a lower MHD_OPTION_CONNECTION_LIMIT.
...

It is highly recommended to adjust the following properties when deploying an application in production:

TBrookHTTPServer.ConnectionLimit -> Limit of concurrent connections.
TBrookHTTPServer.ConnectionTimeout -> Inactivity time after which an idle client connection times out.
TBrookHTTPServer.ThreadPoolSize -> Size for the thread pool (on Linux, you can get it via "$ getconf _NPROCESSORS_ONLN").
TBrookHTTPServer.Threaded -> If True, the server creates one thread per connection.

These properties influence the performance and capacity of your server. For example, I've used the following configuration for our customers:

BrookHTTPServer1.ConnectionLimit := 1000; // Change to 10000 for C10K problem - http://www.kegel.com/c10k.html
// BrookHTTPServer1.ConnectionTimeout // Not required for the test
BrookHTTPServer1.ThreadPoolSize := GetCPUCount; // Get the available online CPUs
BrookHTTPServer1.Threaded := False; // Disabled, since we are using the thread pool

I've used the same configuration for benchmarking. GetCPUCount is a small function to get the available online CPUs, something like this.
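
For illustration, a minimal sketch of such a helper (my own example, not something Brook ships) that simply counts the processor entries in /proc/cpuinfo on Linux:

// Hypothetical GetCPUCount: counts online CPUs on Linux by scanning /proc/cpuinfo.
// Only a sketch; any CPU-detection routine provided by your RTL works just as well.
function GetCPUCount: Integer;
var
  F: TextFile;
  Line: string;
begin
  Result := 0;
  AssignFile(F, '/proc/cpuinfo');
  Reset(F);
  try
    while not Eof(F) do
    begin
      ReadLn(F, Line);
      if Pos('processor', Line) = 1 then // one "processor : N" line per online CPU
        Inc(Result);
    end;
  finally
    CloseFile(F);
  end;
  if Result < 1 then
    Result := 1; // fall back to a single thread if detection fails
end;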

I have another great suggestion for you: it is highly recommended to do some profiling of your server in production. This link contains the tools and command lines to simulate a congestion situation with many simultaneous incoming requests. I've used ab, wrk and JMeter.

Let me know about your tests and if the tools could help you. :-)

silvioprog commented on July 22, 2024

Hello @Al-Muhandis , did you solve this problem? :-)

Al-Muhandis commented on July 22, 2024

Hello, @silvioprog!
Sorry, I was on a trip and just got home today. I'll try to apply your advice later. Thank you for your valuable comments!

Al-Muhandis commented on July 22, 2024

Yes, it works.
Some notes:

  1. Application.Server.ThreadPoolSize := 2; is not working for me. Although nproc gives 2 on my Debian. Maybe it is because it is a cloud VPS?
  2. Application.Server.ConnectionTimeout: where can I find the default value?

silvioprog commented on July 22, 2024
  1. Application.Server.ThreadPoolSize := 2; is not working for me. ...

Is there any error?

... Although nproc gives 2 on my Debian.

Theoretically, any ThreadPoolSize value greater than 1 works properly. :-)

... Maybe it is because it is a cloud VPS?

Usually, the company that delivers the VPS service shows the number of CPUs in a "service features" panel. (An example here)

  2. Application.Server.ConnectionTimeout: where can I find the default value?

Hm... indeed it should be documented (I'll fix this issue soon). The default value is ConnectionTimeout := 0, i.e., an infinite timeout.
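
For example (a minimal sketch; I'm assuming the property is expressed in seconds, as the connection timeout is in the underlying libmicrohttpd):

BrookHTTPServer1.ConnectionTimeout := 0;  // default: idle connections never time out
BrookHTTPServer1.ConnectionTimeout := 15; // assumed seconds: drop clients idle for longer than 15s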

Al-Muhandis commented on July 22, 2024

Is there any error?

HTTP 502

Usually, the company that delivers the VPS service shows the number of CPUs in a "service features" panel. (An example here)

Yes, Hetzner also lists 2 cores for the CX40. But my server is behind nginx, maybe that's the reason.

  2. Application.Server.ConnectionTimeout: where can I find the default value?

Anyway, it's not that important to me.

Hm... indeed it should be documented (I'll fix this issue soon). The default value is ConnectionTimeout := 0, i.e., an infinite timeout.

Thank you for the answer!

silvioprog commented on July 22, 2024

HTTP 502

Maybe the Nginx logs reveal more details. :-)

But my server is behind nginx, maybe that's the reason.

Hm... is your Brook application running behind a reverse proxy?

You can improve Nginx by making some changes in nginx.conf; for example, take a look at this link. These options are essential to make Nginx able to process many requests.

Thank you for the answer!

You're welcome dude! :-)

Al-Muhandis commented on July 22, 2024

Maybe the Nginx logs reveal more details. :-)

Like this?
2018/12/26 22:48:04 [error] 15125#15125: *3359847 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: sample.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8085/", host: "ww.sample.com"

Hm... is your Brook application running behind a reverse proxy?

My app listens on 127.0.0.1:8085 and serves users through nginx:

	location / {				
		location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf|txt)$ {
			try_files $uri $uri/ @fallback;
			expires 1d;
		}
		proxy_pass http://127.0.0.1:8085;
	}
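
For completeness, the matching server-side setup would look roughly like this (a sketch reusing the component and property names from earlier in this thread and assuming TBrookHTTPServer publishes Port and Active; it is not the actual application code):

BrookHTTPServer1.Port := 8085;                  // the port nginx proxy_pass points to
BrookHTTPServer1.ConnectionLimit := 1000;
BrookHTTPServer1.ThreadPoolSize := GetCPUCount; // see the helper sketched above
BrookHTTPServer1.Threaded := False;             // thread pool instead of one thread per connection
BrookHTTPServer1.Active := True;                // start listening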

You can improve Nginx by making some changes in nginx.conf; for example, take a look at this link. These options are essential to make Nginx able to process many requests.

My nginx already has the same settings:

user www-data;
worker_processes  2;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

... ... ... ... ..

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;
... ... ... ...

Al-Muhandis commented on July 22, 2024

Anyway, thanks for the advice! I applied it. The topic can be closed if necessary.

silvioprog commented on July 22, 2024

Interesting. I used a reverse proxy some years ago over FastCGI. Doing it over pure HTTP seems awesome too.

Some tips: if your Nginx is 1.9.10 or higher, change:

#worker_processes  2;
worker_processes auto;

add:

worker_cpu_affinity auto;

and change:

events {
#    worker_connections  1024;
    worker_connections 10000;
}

If you don't need an access log, it can be disabled via access_log off; in the http section.

The worker_processes auto facility in Nginx is very good. It would be nice to have something like this in the Sagui library:

sg_httpsrv_set_thr_pool_size(srv, 0); // Use zero to choose the value automatically.

I'll check this possibility and open an issue tagged as "feature request". :-)

Anyway, thanks for the advice! I applied it. The topic can be closed if necessary.

You're welcome. :-) Did you solve the problem? If so, please close the issue if that's OK.

Al-Muhandis commented on July 22, 2024

add:

worker_cpu_affinity auto;

and change:

events {
#    worker_connections  1024;
    worker_connections 10000;
}

Thanks. I will try it.

You're welcome. :-) Did you solve the problem?

I didn't track down the memory leak, but I think the framework had nothing to do with it.

Interesting. I used a reverse proxy some years ago over FastCGI. Doing it over pure HTTP seems awesome too.

Serving static files is better left to nginx; I think it copes with that better. And besides, I don't know how to configure the standard HTTP port and the internal custom port of my application.
