I'm still a little confused: is the peer selection algorithm guaranteed never
to run at the same time for different workers (i.e., creating race conditions
in the keepalive queue)?
I see that the round-robin code has a number of mutex locks, all commented
out.
On the other hand, nginx_http_upstream_check_module (health checks) does use
mutexes.
For faster lookups, I was thinking about a hashmap of queues, hashed on
sockaddr. This is probably overkill for a small number of keepalive
connections, though.
I'll send a patch if I get around to implementing it.
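For illustration, the structure I have in mind would look something like this. This is a hypothetical standalone sketch, not nginx code: FNV-1a over the raw sockaddr bytes, a fixed bucket count, and a plain int fd standing in for ngx_connection_t (in nginx proper the entries would sit on ngx_queue_t lists).

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define KA_BUCKETS 64

/* One cached (idle keepalive) connection; an fd stands in for
 * ngx_connection_t in this sketch. */
typedef struct ka_conn {
    int             fd;
    struct ka_conn *next;
} ka_conn_t;

/* A queue of cached connections to one peer address. */
typedef struct ka_queue {
    struct sockaddr_in  peer;
    ka_conn_t          *head;
    struct ka_queue    *next;   /* chaining within a hash bucket */
} ka_queue_t;

static ka_queue_t *buckets[KA_BUCKETS];

/* FNV-1a hash over the raw address bytes. */
static uint32_t ka_hash(const struct sockaddr_in *sa)
{
    const unsigned char *p = (const unsigned char *) sa;
    uint32_t             h = 2166136261u;
    size_t               i;

    for (i = 0; i < sizeof(*sa); i++) {
        h = (h ^ p[i]) * 16777619u;
    }
    return h % KA_BUCKETS;
}

/* Find or create the queue for a peer: O(1) expected, instead of
 * scanning one flat list of all cached connections. */
static ka_queue_t *ka_queue_for(const struct sockaddr_in *sa)
{
    ka_queue_t *q;
    uint32_t    h = ka_hash(sa);

    for (q = buckets[h]; q; q = q->next) {
        if (memcmp(&q->peer, sa, sizeof(*sa)) == 0) {
            return q;
        }
    }
    q = calloc(1, sizeof(*q));      /* error handling elided */
    q->peer = *sa;
    q->next = buckets[h];
    buckets[h] = q;
    return q;
}

/* Cache an idle connection under its peer address. */
static void ka_put(const struct sockaddr_in *sa, int fd)
{
    ka_queue_t *q = ka_queue_for(sa);
    ka_conn_t  *c = calloc(1, sizeof(*c));

    c->fd = fd;
    c->next = q->head;
    q->head = c;
}

/* Take a cached connection, or -1 if none (caller connects anew). */
static int ka_get(const struct sockaddr_in *sa)
{
    ka_queue_t *q = ka_queue_for(sa);
    ka_conn_t  *c = q->head;
    int         fd;

    if (c == NULL) {
        return -1;
    }
    q->head = c->next;
    fd = c->fd;
    free(c);
    return fd;
}
```

Note the sockaddr must be zero-initialized before being filled in, since the hash and memcmp() cover the padding bytes as well.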
Matthieu.
On Fri, Aug 12, 2011 at 3:41 PM, Matthieu Tourne
<matthieu.tourne@gmail.com> wrote:
> Thanks for the help Maxim, I'll submit this code if I get around to
> implementing it.
>
> Also, I think I used the wrong string comparison function in the patch I
> sent earlier. This one should work as described.
>
> Matthieu.
>
>
> On Fri, Aug 12, 2011 at 3:27 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>
>> Hello!
>>
>> On Fri, Aug 12, 2011 at 02:11:51PM -0700, Matthieu Tourne wrote:
>>
>> > Also, I'm planning on having a lot of different connections using the
>> > upstream keepalive module. Would it make sense to convert the queues into
>> > rbtrees for faster lookup?
>>
>> Yes, it may make sense if you are planning to keep lots of
>> connections to lots of different backends (you'll still need
>> queues though, but those are details).
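The combination Maxim suggests — a search tree for lookup by peer address, with a queue of cached connections hanging off each node — can be sketched as follows. Purely illustrative and hypothetical: a plain unbalanced BST keyed on a textual address stands in for nginx's rbtree, a small array for the queue, and an int fd for the connection.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PEER_QUEUE_MAX 8

/* Tree node: one peer address plus a LIFO queue of idle connections.
 * nginx would use its ngx_rbtree_t here; a plain BST keeps the sketch
 * short (and unbalanced). */
typedef struct peer_node {
    char              addr[32];              /* textual peer address (key) */
    int               fds[PEER_QUEUE_MAX];   /* queued cached fds */
    int               nfds;
    struct peer_node *left, *right;
} peer_node_t;

/* Find the node for an address, creating it if absent. */
static peer_node_t *peer_find(peer_node_t **root, const char *addr)
{
    peer_node_t **p = root, *n;

    while ((n = *p) != NULL) {
        int c = strcmp(addr, n->addr);
        if (c == 0) {
            return n;
        }
        p = (c < 0) ? &n->left : &n->right;
    }
    n = calloc(1, sizeof(*n));               /* error handling elided */
    strncpy(n->addr, addr, sizeof(n->addr) - 1);
    *p = n;
    return n;
}

/* Cache an idle connection under its peer. */
static void peer_put(peer_node_t **root, const char *addr, int fd)
{
    peer_node_t *n = peer_find(root, addr);

    if (n->nfds < PEER_QUEUE_MAX) {
        n->fds[n->nfds++] = fd;
    }
}

/* Take a cached connection for a peer, or -1 if none is available. */
static int peer_get(peer_node_t **root, const char *addr)
{
    peer_node_t *n = peer_find(root, addr);

    return (n->nfds > 0) ? n->fds[--n->nfds] : -1;
}
```

The tree answers "which queue belongs to this peer?" in O(log n); the per-node queue is still what hands out and takes back individual connections.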
>>
>> Maxim Dounin
>>
>> >
>> > Thank you!
>> >
>> > Matthieu.
>> >
>> > On Fri, Aug 12, 2011 at 12:59 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>> >
>> > > Hello!
>> > >
>> > > On Fri, Aug 12, 2011 at 12:32:26PM -0700, Matthieu Tourne wrote:
>> > >
>> > > > Hi all,
>> > > >
>> > > > I think I have found a small issue when using proxy_pass to talk to an
>> > > > origin that doesn't support keepalive.
>> > > > The origin will return an HTTP header "Connection: close" and terminate
>> > > > the connection (TCP FIN). We don't take this into account, and assume
>> > > > there is a keepalive connection available. The next time the connection
>> > > > is used, it won't be part of a valid TCP stream, and the origin server
>> > > > will send a TCP RST.
>> > >
>> > > Yes, I'm aware of this, thank you. Actually, this is harmless:
>> > > the upstream keepalive module should detect that the connection was
>> > > closed while it was being kept, and even if it isn't able to do so,
>> > > nginx will retry sending the request if sending to a cached connection
>> > > fails.
>> > >
>> > > > This can be simulated with two nginx instances: one acting as a reverse
>> > > > proxy with keepalive connections, and the other using the directive
>> > > > keepalive_timeout 0; (which always terminates connections right away).
>> > > >
>> > > > The attached patches take the origin's response into account and should
>> > > > fix this issue.
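For reference, the two-instance reproduction could look roughly like this (ports and the return stub are illustrative; `proxy_http_version` and the built-in `keepalive` directive are the modern spellings of what, at the time of this thread, required Maxim's patch plus the upstream keepalive module):

```nginx
# Instance 1: reverse proxy keeping connections to the origin alive.
http {
    upstream origin {
        server 127.0.0.1:8081;
        keepalive 16;                # cache up to 16 idle connections
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://origin;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}

# Instance 2 (separate config file): origin that refuses keepalive
# and closes every connection immediately.
http {
    server {
        listen 8081;
        keepalive_timeout 0;         # sends "Connection: close" + TCP FIN
        location / {
            return 200 "ok\n";
        }
    }
}
```

Hitting instance 1 twice in a row exercises exactly the FIN-then-RST sequence described above if the proxy wrongly assumes the cached connection is still alive.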
>> > >
>> > > I'm planning to add a similar patch, thanks.
>> > >
>> > > Maxim Dounin
>> > >
>> > > >
>> > > > Matthieu.
>> > > >
>> > > > On Mon, Aug 8, 2011 at 2:36 AM, SplitIce <mat999@gmail.com> wrote:
>> > > >
>> > > > > Oh, and I haven't been able to reproduce the crash; I tried for a
>> > > > > while but gave up. If it happens again I'll build with debugging and
>> > > > > restart. However, so far it's been 36 hours without issues (under a
>> > > > > significant amount of traffic).
>> > > > >
>> > > > >
>> > > > > On Mon, Aug 8, 2011 at 7:35 PM, SplitIce <mat999@gmail.com> wrote:
>> > > > >
>> > > > >> 50ms per HTTP request (taken from Firebug and the Chrome resource
>> > > > >> panel) is the time it takes the HTML to load, from request to
>> > > > >> arrival. 200ms is the time saved before the HTML starts transferring
>> > > > >> to the client (allowing other resources to begin downloading before
>> > > > >> the HTML completes); previously the HTML only started transferring
>> > > > >> after the full response was downloaded to the proxy server (due to
>> > > > >> buffering).
>> > > > >>
>> > > > >> We use HTTP to talk to the backends (between countries).
>> > > > >>
>> > > > >> The nodes have a 30-80ms ping time between the backend and frontend
>> > > > >> (Russia->Germany, Sweden->NL, Ukraine->Germany/NL, etc.).
>> > > > >>
>> > > > >> On Mon, Aug 8, 2011 at 7:22 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>> > > > >>
>> > > > >>> Hello!
>> > > > >>>
>> > > > >>> On Mon, Aug 08, 2011 at 02:44:12PM +1000, SplitIce wrote:
>> > > > >>>
>> > > > >>> > I've been testing this on my servers for 2 days now, handling
>> > > > >>> > approximately 100mbit of constant traffic (3x20mbit, 1x40mbit).
>> > > > >>> >
>> > > > >>> > I haven't noticed any large bugs. I had an initial crash on one of
>> > > > >>> > the servers but haven't been able to replicate it. The servers are
>> > > > >>> > a mixture of OpenVZ, Xen, and one VMware virtualized container,
>> > > > >>> > running Debian Lenny or Squeeze.
>> > > > >>>
>> > > > >>> By "crash" you mean an nginx segfault? If yes, it would be great
>> > > > >>> to track it down (either to fix a problem in the keepalive patch or
>> > > > >>> to prove it's an unrelated problem).
>> > > > >>>
>> > > > >>> > Speed increases from this module are decent: approximately 50ms
>> > > > >>> > from the request time, and the HTTP download starts 200ms earlier,
>> > > > >>> > resulting in a 150ms quicker load time on average.
>> > > > >>>
>> > > > >>> Sounds cool, but I don't really understand what "50ms from the
>> > > > >>> request time" and "download starts 200ms earlier" actually mean.
>> > > > >>> Could you please elaborate?
>> > > > >>>
>> > > > >>> And, BTW, do you use proxy or fastcgi to talk to backends?
>> > > > >>>
>> > > > >>> Maxim Dounin
>> > > > >>>
>> > > > >>> >
>> > > > >>> > All in all, it seems good.
>> > > > >>> >
>> > > > >>> > Thanks for all your hard work, Maxim.
>> > > > >>> >
>> > > > >>> > On Thu, Aug 4, 2011 at 4:51 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>> > > > >>> >
>> > > > >>> > > Hello!
>> > > > >>> > >
>> > > > >>> > > On Wed, Aug 03, 2011 at 05:06:56PM -0700, Matthieu Tourne wrote:
>> > > > >>> > >
>> > > > >>> > > > Hi,
>> > > > >>> > > >
>> > > > >>> > > > I'm trying to use keepalive HTTP connections for proxy_pass
>> > > > >>> > > > directives containing variables. Currently keepalive only
>> > > > >>> > > > works for named upstream blocks.
>> > > > >>> > > >
>> > > > >>> > > > I'm wondering what the easiest way would be; maybe setting
>> > > > >>> > > > peer->get to ngx_http_upstream_get_keepalive_peer and
>> > > > >>> > > > kp->original_get_peer to ngx_http_upstream_get_round_robin_peer()
>> > > > >>> > > > towards the end of ngx_http_create_round_robin_peer().
>> > > > >>> > > > If I can figure out how to set kp->conf to something sane,
>> > > > >>> > > > this might work :)
>> > > > >>> > > >
>> > > > >>> > > > Thoughts ?
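Concretely, the idea being floated would be something like the following untested pseudocode at the tail of ngx_http_create_round_robin_peer(), with field names as in the upstream keepalive module; the kp->conf assignment is exactly the unresolved part:

```
/* hypothetical, untested sketch -- not a working patch */
ngx_http_upstream_keepalive_peer_data_t  *kp;

kp = ngx_palloc(r->pool, sizeof(ngx_http_upstream_keepalive_peer_data_t));
if (kp == NULL) {
    return NGX_ERROR;
}

kp->conf = ...;  /* the open question: a sane keepalive conf, e.g. borrowed
                    from an upstream whose init is
                    ngx_http_upstream_init_keepalive (per Maxim below) */
kp->data = r->upstream->peer.data;
kp->original_get_peer = r->upstream->peer.get;    /* round-robin get */
kp->original_free_peer = r->upstream->peer.free;

r->upstream->peer.data = kp;
r->upstream->peer.get = ngx_http_upstream_get_keepalive_peer;
r->upstream->peer.free = ngx_http_upstream_free_keepalive_peer;
```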
>> > > > >>> > >
>> > > > >>> > > You may try to pick one from the upstream main conf's upstreams
>> > > > >>> > > array (e.g. one from the first found upstream with init set to
>> > > > >>> > > ngx_http_upstream_init_keepalive). Dirty, but should work...
>> > > > >>> > >
>> > > > >>> > > Maxim Dounin
>> > > > >>> > >
>> > > > >>> > > >
>> > > > >>> > > > Thank you,
>> > > > >>> > > > Matthieu.
>> > > > >>> > > >
>> > > > >>> > > > On Tue, Aug 2, 2011 at 10:21 PM, SplitIce <mat999@gmail.com> wrote:
>> > > > >>> > > >
>> > > > >>> > > > > I've been testing this on my localhost and one of my live
>> > > > >>> > > > > servers (HTTP backend) for a good week now, and I haven't
>> > > > >>> > > > > noticed any issues as of yet.
>> > > > >>> > > > >
>> > > > >>> > > > > Servers are Debian Lenny and Debian Squeeze (oldstable,
>> > > > >>> > > > > stable).
>> > > > >>> > > > >
>> > > > >>> > > > > Hoping it will make it into the development (1.1.x) branch
>> > > > >>> > > > > soon :)
>> > > > >>> > > > >
>> > > > >>> > > > >
>> > > > >>> > > > > On Wed, Aug 3, 2011 at 1:57 PM, liseen <liseen.wan@gmail.com> wrote:
>> > > > >>> > > > >
>> > > > >>> > > > >> Hi,
>> > > > >>> > > > >>
>> > > > >>> > > > >> Could nginx keepalive work with health checks? Does Maxim
>> > > > >>> > > > >> Dounin have a plan to support this?
>> > > > >>> > > > >>
>> > > > >>> > > > >>
>> > > > >>> > > > >>
>> > > > >>> > > > >> On Wed, Aug 3, 2011 at 3:09 AM, David Yu <david.yu.ftw@gmail.com> wrote:
>> > > > >>> > > > >>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>> On Wed, Aug 3, 2011 at 2:47 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>> Hello!
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> On Wed, Aug 03, 2011 at 01:53:30AM +0800, David Yu wrote:
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> > On Wed, Aug 3, 2011 at 1:50 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>> > > > >>> > > > >>>> >
>> > > > >>> > > > >>>> > > Hello!
>> > > > >>> > > > >>>> > >
>> > > > >>> > > > >>>> > > On Wed, Aug 03, 2011 at 01:42:13AM +0800, David Yu wrote:
>> > > > >>> > > > >>>> > >
>> > > > >>> > > > >>>> > > > On Wed, Aug 3, 2011 at 1:36 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>> > > > >>> > > > >>>> > > >
>> > > > >>> > > > >>>> > > > > Hello!
>> > > > >>> > > > >>>> > > > >
>> > > > >>> > > > >>>> > > > > On Tue, Aug 02, 2011 at 04:24:45PM +0100, António P. P. Almeida wrote:
>> > > > >>> > > > >>>> > > > >
>> > > > >>> > > > >>>> > > > > > On 1 Aug 2011 17h07 WEST, mdounin@mdounin.ru wrote:
>> > > > >>> > > > >>>> > > > > >
>> > > > >>> > > > >>>> > > > > > > Hello!
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > JFYI:
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > Last week I posted a patch to nginx-devel@
>> > > > >>> > > > >>>> > > > > > > which adds keepalive support to various
>> > > > >>> > > > >>>> > > > > > > backends (as with the upstream keepalive
>> > > > >>> > > > >>>> > > > > > > module), including fastcgi and http backends
>> > > > >>> > > > >>>> > > > > > > (this in turn means nginx is now able to talk
>> > > > >>> > > > >>>> > > > > > > HTTP/1.1 to backends; in particular it now
>> > > > >>> > > > >>>> > > > > > > understands chunked responses). The patch
>> > > > >>> > > > >>>> > > > > > > applies to 1.0.5 and 1.1.0.
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > Testing is appreciated.
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > You may find the patch and description here:
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > http://mailman.nginx.org/pipermail/nginx-devel/2011-July/001057.html
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > The patch itself may be downloaded here:
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > http://nginx.org/patches/patch-nginx-keepalive-full.txt
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > The upstream keepalive module may be downloaded here:
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > > > http://mdounin.ru/hg/ngx_http_upstream_keepalive/
>> > > > >>> > > > >>>> > > > > > > http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz
>> > > > >>> > > > >>>> > > > > > >
>> > > > >>> > > > >>>> > > > > >
>> > > > >>> > > > >>>> > > > > > So *either* we use the patch or the module.
>> > > > >>> > > > >>>> > > > > > Correct?
>> > > > >>> > > > >>>> > > > >
>> > > > >>> > > > >>>> > > > > No, to keep backend connections alive you need
>> > > > >>> > > > >>>> > > > > the module *and* the patch. The patch provides the
>> > > > >>> > > > >>>> > > > > foundation in the nginx core for the module to
>> > > > >>> > > > >>>> > > > > work with fastcgi and http.
>> > > > >>> > > > >>>> > > > >
>> > > > >>> > > > >>>> > > > With a custom nginx upstream binary protocol, I
>> > > > >>> > > > >>>> > > > believe multiplexing will now be possible?
>> > > > >>> > > > >>>> > >
>> > > > >>> > > > >>>> > > ENOPARSE, sorry.
>> > > > >>> > > > >>>> > >
>> > > > >>> > > > >>>> > After some googling... ENOPARSE is a nerdy term. It is
>> > > > >>> > > > >>>> > one of the standard C library error codes that can be
>> > > > >>> > > > >>>> > set in the global variable "errno" and stands for Error
>> > > > >>> > > > >>>> > No Parse. Since you didn't get it, I can thus conclude
>> > > > >>> > > > >>>> > that, unlike me, you are probably a normal,
>> > > > >>> > > > >>>> > well-adjusted human being ;-)
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> Actually, this definition isn't true: there is no such
>> > > > >>> > > > >>>> error code; it's rather an imitation. The fact that the
>> > > > >>> > > > >>>> author of the definition claims it's a real error code
>> > > > >>> > > > >>>> indicates that, unlike me, he is a normal, well-adjusted
>> > > > >>> > > > >>>> human being. ;)
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> > Now I get it. Well adjusted I am.
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> Now you may try to finally explain what you meant to ask
>> > > > >>> > > > >>>> in your original message. Please keep in mind that you
>> > > > >>> > > > >>>> are talking to somebody far from being normal and
>> > > > >>> > > > >>>> well-adjusted. ;)
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> Maxim Dounin
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> p.s. Actually, I assume you are talking about fastcgi
>> > > > >>> > > > >>>> multiplexing.
>> > > > >>> > > > >>>
>> > > > >>> > > > >>> Nope, not fastcgi multiplexing: multiplexing over a
>> > > > >>> > > > >>> custom/efficient nginx binary protocol, where requests
>> > > > >>> > > > >>> sent to the upstream include a unique id which the
>> > > > >>> > > > >>> upstream will also send on the response. This allows for
>> > > > >>> > > > >>> asynchronous, out-of-band messaging.
>> > > > >>> > > > >>> I believe this is what mongrel2 is trying to do now...
>> > > > >>> > > > >>> though as an HTTP server, it is nowhere near as
>> > > > >>> > > > >>> robust/stable as nginx.
>> > > > >>> > > > >>> If nginx implements this (considering nginx already has a
>> > > > >>> > > > >>> lot of market share), it certainly would bring more
>> > > > >>> > > > >>> developers/users in (especially the ones needing async,
>> > > > >>> > > > >>> out-of-band request handling).
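The correlation scheme described here boils down to a pending-request table keyed by the echoed id, so responses can arrive in any order. A toy standalone sketch of that bookkeeping (entirely hypothetical; no such nginx protocol exists):

```c
#include <assert.h>
#include <string.h>

#define MAX_PENDING 16

/* One in-flight request awaiting its response. */
typedef struct {
    unsigned id;
    int      in_use;
    char     response[32];
} pending_t;

static pending_t pending[MAX_PENDING];
static unsigned  next_id = 1;

/* Send side: allocate a unique id and record the request as pending.
 * The id would travel in the request frame to the upstream. */
static unsigned mux_send(void)
{
    int i;

    for (i = 0; i < MAX_PENDING; i++) {
        if (!pending[i].in_use) {
            pending[i].in_use = 1;
            pending[i].id = next_id++;
            pending[i].response[0] = '\0';
            return pending[i].id;
        }
    }
    return 0;   /* table full: block, or open another connection */
}

/* Receive side: the echoed id routes each response, in whatever order
 * responses arrive, back to the request that asked for it. */
static int mux_deliver(unsigned id, const char *body)
{
    int i;

    for (i = 0; i < MAX_PENDING; i++) {
        if (pending[i].in_use && pending[i].id == id) {
            strncpy(pending[i].response, body,
                    sizeof(pending[i].response) - 1);
            pending[i].in_use = 0;
            return 1;
        }
    }
    return 0;   /* unknown or already-completed id: protocol error */
}
```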
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>> Short answer is: no, it's still not possible.
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>> _______________________________________________
>> > > > >>> > > > >>>> nginx mailing list
>> > > > >>> > > > >>>> nginx@nginx.org
>> > > > >>> > > > >>>> http://mailman.nginx.org/mailman/listinfo/nginx
>> > > > >>> > > > >>>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>> --
>> > > > >>> > > > >>> When the cat is away, the mouse is alone.
>> > > > >>> > > > >>> - David Yu
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>>
>> > > > >>> > > > >>
>> > > > >>> > > > >>
>> > > > >>> > > > >>
>> > > > >>> > > > >
>> > > > >>> > > > >
>> > > > >>> > > > > --
>> > > > >>> > > > > Warez Scene http://thewarezscene.org Free Rapidshare Downloads <http://www.nexusddl.com>
>> > > > >>> > > > >
>> > > > >>> > > > >
>> > > > >>> > > > >
>> > > > >>> > > > >
>> > > > >>> > >
>> > > > >>> > >
>> > > > >>> > >
>> > > > >>> >
>> > > > >>> >
>> > > > >>> >
>> > > > >>>
>> > > > >>>
>> > > > >>>
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> > >
>> > >
>> > >
>> > >
>> > >
>>
>>
>>
>
>