
lua-resty-upstream's People

Contributors

bryant1410, hamishforbes, p0pr0ck5, pintsized


lua-resty-upstream's Issues

Error in balancing on static files

Your example returns 500 on static files:

2014/02/18 10:00:28 [error] 18581#0: *926 lua entry thread aborted: runtime error: /usr/local/openresty/nginx/lua/load-balancer.lua:56: attempt to call local 'reader' (a nil value)
stack traceback:
coroutine 0:
    /usr/local/openresty/nginx/lua/load-balancer.lua: in function </usr/local/openresty/nginx/lua/load-balancer.lua:1>, client: 192.168.0.219, server: balancer, request: "GET /i/new/sprites.png HTTP/1.1", host: "balancer"

in this code:

local reader = res.body_reader
repeat
    local chunk, err = reader(65536)
    if err then
      ngx_log(ngx_ERR, "Read Error: "..(err or ""))
      break
    end

    if chunk then
      print(chunk)
      flush(true)
    end
until not chunk

I changed the code to the following and it works, but I'm not sure this change doesn't introduce hidden bugs.

local reader = res.body_reader
if reader then
    repeat
        local chunk, err = reader(65536)
        if err then
            ngx_log(ngx_ERR, "Read Error: "..(err or ""))
            break
        end

        if chunk then
            print(chunk)
            flush(true)
        end
    until not chunk
end
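For context, res.body_reader can legitimately be nil when the response carries no streamable body (this is an assumption about the HTTP client library in use, not something the traceback proves), so guarding before the loop is the safer pattern. A minimal sketch of such a guard with an explicit log:

```lua
-- Sketch: bail out early when there is no body to stream.
-- Assumes `res` is the response object returned by the HTTP client.
local reader = res.body_reader
if not reader then
    ngx.log(ngx.WARN, "response has no body reader; skipping body streaming")
    return
end
```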

example is wrong

upstream_socket  = require("resty.upstream.socket")
upstream_api = require("resty.upstream.api")

upstream, configured = socket_upstream:new("my_upstream_dict")
api = upstream_api:new(upstream)

In your example you use socket_upstream, but the module was required as upstream_socket.
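A consistent version of that snippet, matching the variable name to the require (a sketch, keeping the README's shared dict name):

```lua
local upstream_socket = require("resty.upstream.socket")
local upstream_api    = require("resty.upstream.api")

-- Call new() on the same name the module was required under:
local upstream, configured = upstream_socket:new("my_upstream_dict")
local api = upstream_api:new(upstream)
```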

Several pools

I need several different pools (with different hosts) for load balancing across different locations. How can I specify which pool will handle a given request?
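One plausible approach (a sketch, under the assumption that each group of locations can own its own upstream instance backed by its own lua_shared_dict; the dict and variable names here are hypothetical):

```lua
-- init_by_lua: one upstream instance per group of locations.
-- Requires matching "lua_shared_dict static_upstream_dict 1m;" etc.
-- declarations in the nginx http block.
local upstream_socket = require("resty.upstream.socket")

static_upstream  = upstream_socket:new("static_upstream_dict")
dynamic_upstream = upstream_socket:new("dynamic_upstream_dict")

-- Each location's content_by_lua handler then picks the matching
-- upstream object (static_upstream or dynamic_upstream) explicitly.
```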

Support for additional load balancing schemes

Any plans to support additional load balancing schemes like hashing, least conn, etc.? Regular old hashing wouldn't seem too hard (same with least_conn), and consistent (ketama) hashing has already been done in https://github.com/agentzh/lua-resty-chash; perhaps we could wrap that? I'll happily contribute if you think this is worthwhile (I'd love to wrap this lib with consistent hashing so lua-resty-waf can connect to a cluster of persistent variable storage backends).
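For illustration, plain (non-consistent) hashing over a pool's host list might look like the sketch below; hash_select and the field names are hypothetical, and unlike ketama hashing this remaps most keys whenever the host list changes:

```lua
-- Naive hash selection: hash a key (e.g. the client IP) and pick a
-- host by modulo. ngx.crc32_short is provided by lua-nginx-module.
local function hash_select(hosts, key)
    local idx = (ngx.crc32_short(key) % #hosts) + 1
    return hosts[idx]
end

-- Usage sketch: local host = hash_select(pool.hosts, ngx.var.remote_addr)
```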

Unable to use the lib with the last two days' commits

Hi,

As of the last commits I'm not able to run my code (largely inspired by the basic example):

nginx: [error] init_by_lua_file error: lib/resty/upstream/socket.lua:229: attempt to index a nil value
stack traceback:
lib/resty/upstream/socket.lua:229: in function 'calc_gcd_weight'
lib/resty/upstream/socket.lua:244: in function 'save_pools'
lib/resty/upstream/api.lua:307: in function 'add_host'
app/init.lua:80: in function <app/init.lua:31>

I was actually in the process of making the same modification you just made (per-worker peer management to get more predictable behavior).
I based the algorithm on Nginx's ngx_http_upstream.c, and it seems to work OK.
Here is the commit: mtourne@3e81df5
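For reference, the GCD step that the traceback points at mirrors nginx's weighted round-robin setup, where host weights are reduced by their greatest common divisor so the scheduler can cycle through them in the smallest increments. A standalone sketch (not the module's actual code):

```lua
-- Euclidean GCD, as used by ngx_http_upstream.c for weighted RR.
local function gcd(a, b)
    while b ~= 0 do
        a, b = b, a % b
    end
    return a
end

-- Reduce a pool's weights: returns the pool GCD and the max weight.
local function calc_gcd_weight(hosts)
    local pool_gcd, max_weight = 0, 0
    for _, host in ipairs(hosts) do
        pool_gcd = gcd(host.weight, pool_gcd)
        if host.weight > max_weight then
            max_weight = host.weight
        end
    end
    return pool_gcd, max_weight
end
```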

Oh, and may I suggest this for really fast serialization / deserialization:
https://github.com/cloudflare/lua-capnproto

Errors

I configured primary and backup pools like this:

if not configured then -- Only reconfigure on start, shared mem persists across a HUP
    api:create_pool({id = "primary", timeout = 1000, read_timeout = 30000, keepalive_pool = 256, keepalive_timeout = (120*1000)})
    api:create_pool({id = "backup", timeout = 1000, read_timeout = 30000, keepalive_pool = 256, keepalive_timeout = (120*1000)})

    api:set_priority("primary", 0)
    api:set_priority("backup", 10)

    api:add_host("backup", { host = "192.168.0.2", port = "80", weight = 10, healthcheck = true})
    api:add_host("primary", { host = "192.168.0.3", port = "80", weight = 5, healthcheck = true})
end

And after several requests I see these errors in the log:

2014/03/12 07:27:09 [error] 19483#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19483#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "primary" is down, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "backup" is down, context: ngx.timer

I made these requests from a browser and everything seemed OK.

Add statistics of handled requests for hosts and pool

Can you add handled-request statistics for each pool and for the hosts in a pool?
Something like this: add a requests counter to the default_pool and default_host tables:

local default_pool = {
    up = true,
    method = 'round_robin',
    timeout = 2000, -- socket connect timeout
    priority = 0,
    failed_timeout = 60,
    max_fails = 3,
    requests = 0,
    hosts = {}
}
local default_host = {
    host = '',
    port = 80,
    up = true,
    weight = 0,
    failcount = 0,
    lastfail = 0,
    requests = 0
}

Then store the id of the host which successfully handled the request in ngx.ctx (like you store failed hosts in ngx.ctx), and increment the requests counter on the pool and host in the log_by_lua phase.
Is this possible?
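A rough sketch of that counting in the log_by_lua phase; ctx.pool_id, ctx.host_id, and the key format are hypothetical names, not this module's API:

```lua
-- log_by_lua sketch: increment per-pool and per-host request counters
-- in the shared dict, using ids stashed in ngx.ctx during the request.
local dict = ngx.shared.my_upstream_dict
local ctx = ngx.ctx
if ctx.pool_id and ctx.host_id then
    for _, key in ipairs({
        "requests:" .. ctx.pool_id,
        "requests:" .. ctx.pool_id .. ":" .. ctx.host_id,
    }) do
        local newval = dict:incr(key, 1)
        if not newval then
            dict:set(key, 1) -- first request for this key
        end
    end
end
```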

How to use it?

Dear Developers

I'm sorry for my silly question.

I have the newest OpenResty installed.
In short, when I access http://127.0.0.1:8080/pools I already get:

{"primary":{"keepalive_pool":256,"hosts":[{"host":"127.0.0.1","weight":10,"port":81,"id":"1","healthcheck":{"last_check":1447815898.606,"interval":60},"failcount":0,"up":true,"lastfail":0},{"host":"127.0.0.1","weight":10,"lastfail":0,"id":"2","healthcheck":{"last_check":1447815898.606,"interval":60},"failcount":0,"up":true,"port":82}],"up":true,"read_timeout":10000,"method":"round_robin","failed_timeout":60,"timeout":100,"keepalive_timeout":120000,"priority":0,"max_fails":3}}

Note: I stripped out all SSL parts.

For port 80, I have:

    server {
        listen 80;
        server_name lua-load-balancer;

        location / {
            content_by_lua_file /usr/local/openresty/nginx/html/lua-load-balancer.lua;
        }
    }

Yes, I copied lua-load-balancer.lua to that directory and ran chmod +x on it.

But when I access http://127.0.0.1, I get a 404.

Kindly please give me any enlightenment.

Sincerely
-bino-

Problems with module

After several hours of load balancing with 2 hosts, I get a 504 on all requests, along with these errors:

2014/02/24 13:33:51 [error] 5432#0: *17441 [lua] http.lua:231: request(): Upstream Error: 504, client: X.X.X.X, server: example.com, request: "GET / HTTP/1.1", host: "example.com"

But these 2 hosts are fine. When I restart nginx (just a restart), all is well again.
Then after several hours the situation repeats.

http v2 not supported yet

While testing this module, I found an error when sending requests over HTTP/2:

load-balancer.lua:21: Error getting client body reader: http v2 not supported yet, client: 10.10.10.1, server: , request: "GET /favicon.ico HTTP/2.0", host: "www.example.cc"

This causes all POST requests to fail. It seems the reason is that the functions ngx_http_lua_ngx_req_raw_header and ngx_http_lua_req_socket in lua-nginx-module do not support SPDY and HTTP/2 yet.

So, is there any way to fix this?
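One workaround, assuming the limitation lives in lua-nginx-module's request APIs rather than in this library: serve the balancer vhost over HTTP/1.1 only, by leaving the http2 flag off the listen directive:

```
# Balancer vhost forced to HTTP/1.1 so ngx.req.socket and raw header
# access keep working; adjust names and paths to your own config.
server {
    listen 443 ssl;   # note: no "http2" flag here
    server_name www.example.cc;

    location / {
        content_by_lua_file /usr/local/openresty/nginx/lua/load-balancer.lua;
    }
}
```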

BTW, thank you for this module; I learned a lot from it.
