Maxim Dounin
November 20, 2014 01:08PM
Hello!

On Thu, Nov 20, 2014 at 05:34:58PM +0100, Roman Borschel wrote:

> Hi Maxim,
>
> thanks for the quick answer and the pointer to the earlier discussion. May
> I ask if there is a specific reason why the past discussion did not lead
> to the issue being resolved, i.e. is this bug on the roadmap somewhere or
> should I file an issue?

I haven't checked, but likely it's not yet in the trac. It's
probably a good idea to add a ticket to make sure it won't be
forgotten.

>
> - Roman
>
>
> > On Wed, Nov 19, 2014 at 02:13:41PM +0100, Roman Borschel wrote:
> >
> > > Hi,
> > >
> > > I'm experiencing an issue whereby nginx and the upstream server get into
> > > disagreement about the state of the HTTP interaction, apparently caused by
> > > nginx not transmitting the complete request body. The scenario is as
> > > follows, using nginx as a reverse proxy with upstream keepalive:
> > >
> > > 1. Client sends a POST request to nginx with a Content-Length header and a
> > > relatively large body, i.e. spanning many TCP segments.
> > > 2. Nginx forwards the request line and headers and starts forwarding the
> > > body to the upstream server.
> > > 3. While nginx is still sending, the upstream server responds early with a
> > > 409 based on information in the request headers, without consuming the body.
> > > 4. Nginx eventually stops sending the body, i.e. it does not transmit the
> > > full number of bytes as specified in the Content-Length, presumably because
> > > of the server response.
> > > 5. Nginx reuses the same upstream connection for a different request, in
> > > this case a GET request.
> > > 6. The upstream server does not see this as a new HTTP request, as it is
> > > still awaiting more data according to the Content-Length.
> > >
> > > At this point the client who sent the GET request and nginx wait for a
> > > response while the upstream server is waiting for more data until one of
> > > them hits a timeout (whichever has the lowest timeout) which eventually
> > > results in the connection being closed.
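
[A rough sketch of the accounting that leaves the upstream stuck — this is illustrative Python, not nginx or backend source, and all names and byte counts are made up:]

```python
# Sketch: an upstream that trusts Content-Length swallows the next
# request on a reused connection as leftover body bytes.

def bytes_still_owed(headers: bytes, body_received: int) -> int:
    """Return how many body bytes the server still expects."""
    for line in headers.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            declared = int(line.split(b":", 1)[1])
            return max(declared - body_received, 0)
    return 0

# The POST declared 1000 body bytes, but nginx stopped after 400 (step 4).
post_headers = b"POST /items HTTP/1.1\r\nContent-Length: 1000\r\n\r\n"
owed = bytes_still_owed(post_headers, body_received=400)  # 600 bytes short

# Steps 5-6: the next request sent on the same cached connection...
next_request = b"GET /status HTTP/1.1\r\nHost: example\r\n\r\n"
# ...is shorter than the deficit, so the server buffers it as body data
# and keeps waiting; no response is ever produced for the GET.
swallowed_as_body = len(next_request) <= owed
```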
> > >
> > > According to RFC2616, 8.2.2 [1] if the request contained a Content-Length
> > > and the client (nginx in this case) ceases to transmit the body (due to an
> > > error response) the client (nginx) would have to close the connection,
> > > which does not happen.
> > >
> > > I am reasonably certain that the client is always transmitting the full
> > > body, as the problem does not occur when the client talks directly to the
> > > upstream server with an otherwise identical request/response pattern (i.e.
> > > an early error response).
> > >
> > > Can someone clarify whether this is expected behaviour / as designed on
> > > behalf of nginx?
> >
> > This is a bug: currently the keepalive connection cache doesn't know
> > that nginx stopped sending the body early and that the connection
> > shouldn't be cached. Some earlier discussion and an attempt to fix
> > this can be found in the thread here:
> >
> > http://mailman.nginx.org/pipermail/nginx-devel/2012-March/002040.html
> >
> > A trivial workaround is to disable the use of keepalive connections
> > (actually, this is the default) if your backend behaves this way.
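
[For reference, a minimal sketch of the proxy setup in question — the directives are standard nginx, but the upstream name and address are illustrative; removing the `keepalive` directive restores the default, non-cached behaviour:]

```nginx
upstream backend {
    server 127.0.0.1:8080;   # illustrative backend address
    keepalive 16;            # enables the connection cache involved in the bug;
                             # remove this line to fall back to the default
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # don't forward "Connection: close"
    }
}
```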
> >
> > --
> > Maxim Dounin
> > http://nginx.org/
> >

> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


--
Maxim Dounin
http://nginx.org/
