Re: upstream keepalive - call for testing

All files from this thread

File Name                           File Size   Posted by         Date
upstream.patch                      2.3 KB      Matthieu Tourne   08/12/2011
ngx_http_upstream_keepalive.patch   542 bytes   Matthieu Tourne   08/12/2011
upstream.patch                      2.6 KB      Matthieu Tourne   08/12/2011
Maxim Dounin
August 12, 2011 04:00PM
Hello!

On Fri, Aug 12, 2011 at 12:32:26PM -0700, Matthieu Tourne wrote:

> Hi all,
>
> I think I have found a small issue if we're using proxy_pass to talk to an
> origin that doesn't support keep-alives.
> The origin will return an HTTP header "Connection: close", and terminate the
> connection (TCP FIN).
> We don't take this into account, and assume there is a keep-alive connection
> available.
> The next time the connection is used, it won't be part of a valid TCP
> stream, and the origin server will send a TCP RST.

Yes, I'm aware of this, thank you. Actually, this is harmless: the
upstream keepalive module should detect that the connection was closed
while it was being kept, and even if it isn't able to do so, nginx will
retry sending the request if sending to the cached connection fails.

> This can be simulated with 2 nginx instances: one acting as a reverse proxy
> with keep-alive connections, and the other using the directive
> keepalive_timeout 0; (which will always terminate connections right away).
>
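For reference, here is a minimal sketch of the two-instance setup described above. Ports, names and the response body are made up for illustration, and the two instances are shown as two server{} blocks for brevity; proxy_http_version and the Connection header reset are how later stock nginx (1.1.4+) exposes HTTP/1.1 keepalive to upstreams, while the patch discussed in this thread provided the equivalent core support:

    # Instance A: "origin" that never keeps connections alive.
    server {
        listen 8081;
        keepalive_timeout 0;              # close every connection right away (TCP FIN)

        location / {
            return 200 "hello from origin\n";
        }
    }

    # Instance B: reverse proxy that tries to keep upstream connections alive.
    upstream origin_no_keepalive {
        server 127.0.0.1:8081;
        keepalive 16;                     # cache up to 16 idle connections per worker
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://origin_no_keepalive;
            proxy_http_version 1.1;           # speak HTTP/1.1 to the backend
            proxy_set_header Connection "";   # don't forward the client's Connection header
        }
    }

Requesting http://127.0.0.1:8080/ twice in quick succession exercises the case described above: the origin answers with "Connection: close" and closes the socket, so any connection the proxy keeps around is already dead when it is reused (which is what the attached patches address).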
> The patches attached take into account the response of the origin, and
> should fix this issue.

I'm planning to add a similar patch, thanks.

Maxim Dounin

>
> Matthieu.
>
> On Mon, Aug 8, 2011 at 2:36 AM, SplitIce <mat999@gmail.com> wrote:
>
> > Oh, and I haven't been able to reproduce the crash; I tried for a while but
> > gave up. If it happens again I'll build with debugging and restart. However,
> > so far it's been 36 hours without issues (under a significant amount of
> > traffic).
> >
> >
> > On Mon, Aug 8, 2011 at 7:35 PM, SplitIce <mat999@gmail.com> wrote:
> >
> >> 50ms per HTTP request (taken from Firebug and the Chrome resources panel) is
> >> the time it takes the HTML to load, from request to arrival.
> >> 200ms is the time saved on when the HTML starts transferring to me
> >> (allowing other resources to begin downloading before the HTML completes);
> >> previously the HTML only started transferring after the full request was
> >> downloaded to the proxy server (due to buffering).
> >>
> >> HTTP is used to talk to the backends (between countries).
> >>
> >> The node has a 30-80ms ping time between the backend and frontend
> >> (Russia->Germany, Sweden->NL, Ukraine->Germany/NL, etc.).
> >>
> >> On Mon, Aug 8, 2011 at 7:22 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>
> >>> Hello!
> >>>
> >>> On Mon, Aug 08, 2011 at 02:44:12PM +1000, SplitIce wrote:
> >>>
> >>> > Been testing this on my servers for 2 days now, handling approximately
> >>> > 100mbit of constant traffic (3x20mbit, 1x40mbit).
> >>> >
> >>> > Haven't noticed any large bugs; had an initial crash on one of the
> >>> > servers, however I haven't been able to replicate it. The servers are a
> >>> > mixture of OpenVZ, Xen and one VMware virtualised container, running
> >>> > Debian Lenny or Squeeze.
> >>>
> >>> By "crash" you mean nginx segfault? If yes, it would be great to
> >>> track it down (either to fix problem in keepalive patch or to
> >>> prove it's unrelated problem).
> >>>
> >>> > Speed increases from this module are decent: approximately 50ms off the
> >>> > request time, and the HTTP download starts 200ms earlier, resulting in a
> >>> > 150ms quicker load time on average.
> >>>
> >>> Sounds cool, but I don't really understand what "50ms off the
> >>> request time" and "download starts 200ms earlier" actually mean.
> >>> Could you please elaborate?
> >>>
> >>> And, BTW, do you use proxy or fastcgi to talk to backends?
> >>>
> >>> Maxim Dounin
> >>>
> >>> >
> >>> > All in all, it seems good.
> >>> >
> >>> > Thanks for all your hard work, Maxim.
> >>> >
> >>> > On Thu, Aug 4, 2011 at 4:51 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>> >
> >>> > > Hello!
> >>> > >
> >>> > > On Wed, Aug 03, 2011 at 05:06:56PM -0700, Matthieu Tourne wrote:
> >>> > >
> >>> > > > Hi,
> >>> > > >
> >>> > > > I'm trying to use keepalive HTTP connections for proxy_pass directives
> >>> > > > containing variables.
> >>> > > > Currently it only works for named upstream blocks.
> >>> > > >
> >>> > > > I'm wondering what would be the easiest way; maybe setting peer->get to
> >>> > > > ngx_http_upstream_get_keepalive_peer and kp->original_get_peer to
> >>> > > > ngx_http_upstream_get_round_robin_peer() towards the end of
> >>> > > > ngx_http_create_round_robin_peer().
> >>> > > > If I can figure out how to set kp->conf to something sane this might
> >>> > > > work :)
> >>> > > >
> >>> > > > Thoughts ?
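For context, the limitation Matthieu describes comes down to which peer functions get installed: proxy_pass to a named upstream{} block goes through the keepalive module's init, while a proxy_pass containing variables is resolved per request via ngx_http_create_round_robin_peer() and bypasses it. A rough sketch of the two cases (location names, ports and the upstream name are hypothetical):

    upstream app_backend {
        server 127.0.0.1:9000;
        keepalive 8;                       # idle connections cached by the keepalive module
    }

    server {
        listen 8080;

        # Works: proxy_pass references the named upstream block above,
        # so the keepalive peer functions are installed at config time.
        location /static/ {
            proxy_pass http://app_backend;
        }

        # Does not reuse connections: the target is evaluated per request and
        # goes through ngx_http_create_round_robin_peer(), bypassing the
        # keepalive module entirely.
        location /dynamic/ {
            set $target 127.0.0.1:9000;
            proxy_pass http://$target;
        }
    }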
> >>> > >
> >>> > > You may try to pick one from the upstream main conf's upstreams
> >>> > > array (e.g. the first upstream found with init set to
> >>> > > ngx_http_upstream_init_keepalive). Dirty, but should work.
> >>> > >
> >>> > > Maxim Dounin
> >>> > >
> >>> > > >
> >>> > > > Thank you,
> >>> > > > Matthieu.
> >>> > > >
> >>> > > > On Tue, Aug 2, 2011 at 10:21 PM, SplitIce <mat999@gmail.com> wrote:
> >>> > > >
> >>> > > > > I've been testing this on my localhost and one of my live servers
> >>> > > > > (http backend) for a good week now; I haven't had any issues that I
> >>> > > > > have noticed as of yet.
> >>> > > > >
> >>> > > > > Servers are Debian Lenny and Debian Squeeze (oldstable, stable).
> >>> > > > >
> >>> > > > > Hoping it will make it into the development (1.1.x) branch soon :)
> >>> > > > >
> >>> > > > >
> >>> > > > > On Wed, Aug 3, 2011 at 1:57 PM, liseen <liseen.wan@gmail.com> wrote:
> >>> > > > >
> >>> > > > >> Hi
> >>> > > > >>
> >>> > > > >> Could nginx keepalive work with HealthCheck? Does Maxim Dounin have
> >>> > > > >> a plan to support this?
> >>> > > > >>
> >>> > > > >>
> >>> > > > >>
> >>> > > > >> On Wed, Aug 3, 2011 at 3:09 AM, David Yu <david.yu.ftw@gmail.com> wrote:
> >>> > > > >>
> >>> > > > >>>
> >>> > > > >>>
> >>> > > > >>> On Wed, Aug 3, 2011 at 2:47 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>> > > > >>>
> >>> > > > >>>> Hello!
> >>> > > > >>>>
> >>> > > > >>>> On Wed, Aug 03, 2011 at 01:53:30AM +0800, David Yu wrote:
> >>> > > > >>>>
> >>> > > > >>>> > On Wed, Aug 3, 2011 at 1:50 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>> > > > >>>> >
> >>> > > > >>>> > > Hello!
> >>> > > > >>>> > >
> >>> > > > >>>> > > On Wed, Aug 03, 2011 at 01:42:13AM +0800, David Yu wrote:
> >>> > > > >>>> > >
> >>> > > > >>>> > > > On Wed, Aug 3, 2011 at 1:36 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>> > > > >>>> > > >
> >>> > > > >>>> > > > > Hello!
> >>> > > > >>>> > > > >
> >>> > > > >>>> > > > > On Tue, Aug 02, 2011 at 04:24:45PM +0100, António P. P. Almeida wrote:
> >>> > > > >>>> > > > >
> >>> > > > >>>> > > > > > On 1 Aug 2011 17h07 WEST, mdounin@mdounin.ru wrote:
> >>> > > > >>>> > > > > >
> >>> > > > >>>> > > > > > > Hello!
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > JFYI:
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > Last week I posted a patch to nginx-devel@ which adds
> >>> > > > >>>> > > > > > > keepalive support to various backends (as with the
> >>> > > > >>>> > > > > > > upstream keepalive module), including fastcgi and http
> >>> > > > >>>> > > > > > > backends (this in turn means nginx is now able to talk
> >>> > > > >>>> > > > > > > HTTP/1.1 to backends; in particular, it now understands
> >>> > > > >>>> > > > > > > chunked responses). The patch applies to 1.0.5 and 1.1.0.
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > Testing is appreciated.
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > You may find the patch and a description here:
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > http://mailman.nginx.org/pipermail/nginx-devel/2011-July/001057.html
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > The patch itself may be downloaded here:
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > http://nginx.org/patches/patch-nginx-keepalive-full.txt
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > The upstream keepalive module may be downloaded here:
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > > > http://mdounin.ru/hg/ngx_http_upstream_keepalive/
> >>> > > > >>>> > > > > > > http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz
> >>> > > > >>>> > > > > > >
> >>> > > > >>>> > > > > >
> >>> > > > >>>> > > > > > So *either* we use the patch or use the module. Correct?
> >>> > > > >>>> > > > >
> >>> > > > >>>> > > > > No, to keep backend connections alive you need the module *and*
> >>> > > > >>>> > > > > the patch. The patch provides the foundation in the nginx core
> >>> > > > >>>> > > > > for the module to work with fastcgi and http.
> >>> > > > >>>> > > > >
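In configuration terms, the "module *and* patch" combination is what allows a keepalive upstream to sit behind both proxy_pass and fastcgi_pass. A rough sketch, using directive names as they exist in later stock nginx (1.1.4+), with hypothetical ports and socket paths:

    upstream http_backend {
        server 127.0.0.1:8080;             # hypothetical HTTP backend
        keepalive 32;
    }

    upstream fastcgi_backend {
        server unix:/var/run/php-fpm.sock; # hypothetical FastCGI backend
        keepalive 32;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://http_backend;
            proxy_http_version 1.1;        # HTTP/1.1 (and chunked responses) to the backend
            proxy_set_header Connection "";
        }

        location ~ \.php$ {
            fastcgi_pass fastcgi_backend;
            fastcgi_keep_conn on;          # keep the FastCGI connection open
            include fastcgi_params;
        }
    }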
> >>> > > > >>>> > > > With a custom nginx upstream binary protocol, I believe
> >>> > > > >>>> > > > multiplexing will now be possible?
> >>> > > > >>>> > >
> >>> > > > >>>> > > ENOPARSE, sorry.
> >>> > > > >>>> > >
> >>> > > > >>>> > After some googling ...
> >>> > > > >>>> > ENOPARSE is a nerdy term. It is one of the standard C library
> >>> > > > >>>> > error codes that can be set in the global variable "errno" and
> >>> > > > >>>> > stands for Error No Parse. Since you didn't get it, I can thus
> >>> > > > >>>> > conclude that unlike me you are probably a normal, well adjusted
> >>> > > > >>>> > human being ;-)
> >>> > > > >>>>
> >>> > > > >>>> Actually, this definition isn't true: there is no such error code,
> >>> > > > >>>> it's rather an imitation. The fact that the author of the definition
> >>> > > > >>>> claims it's a real error indicates that, unlike me, he is a normal,
> >>> > > > >>>> well adjusted human being. ;)
> >>> > > > >>>>
> >>> > > > >>>> > Now I get it. Well adjusted I am.
> >>> > > > >>>>
> >>> > > > >>>> Now you may try to finally explain what you meant to ask in your
> >>> > > > >>>> original message. Please keep in mind that you are talking to
> >>> > > > >>>> somebody far from being normal and well adjusted. ;)
> >>> > > > >>>>
> >>> > > > >>>> Maxim Dounin
> >>> > > > >>>>
> >>> > > > >>>> p.s. Actually, I assume you are talking about fastcgi
> >>> > > > >>>> multiplexing.
> >>> > > > >>>
> >>> > > > >>> Nope, not fastcgi multiplexing. Multiplexing over a custom/efficient
> >>> > > > >>> nginx binary protocol, where requests sent to the upstream include a
> >>> > > > >>> unique id which the upstream will also send on the response.
> >>> > > > >>> This allows for asynchronous, out-of-band messaging.
> >>> > > > >>> I believe this is what Mongrel2 is trying to do now ... though as an
> >>> > > > >>> HTTP server, it is nowhere near as robust/stable as nginx.
> >>> > > > >>> If nginx implements this (considering nginx already has a lot of
> >>> > > > >>> market share), it certainly would bring in more developers/users
> >>> > > > >>> (especially the ones needing async, out-of-band request handling).
> >>> > > > >>>
> >>> > > > >>>
> >>> > > > >>> Short answer is: no, it's still not possible.