Re: Broken pipe while sending request to upstream

Maxim Dounin
September 18, 2013 09:24AM
Hello!

On Wed, Sep 18, 2013 at 02:52:39AM -0400, Claudio wrote:

> Hi Maxim.
>
> Maxim Dounin Wrote:
> -------------------------------------------------------
> > As long as a connection is closed before nginx is able to get a
> > response - it looks like a problem in your backend. Normally such
> > connections need lingering close to make sure a client has a chance
> > to read a response.
>
> Thanks for your prompt response!
>
> I read an illustrative description about the lingering close here
> (https://mail-archives.apache.org/mod_mbox/httpd-dev/199701.mbox/%3CPine.BSF.3.95.970121215226.12598N-100000@alive.ampr.ab.ca%3E)
> and now better understand the problem per se.
>
> What I'm not getting straight is why nginx does not see the response
> (assuming it really was sent off by the server). Does nginx try to read data
> from the connection while sending or when an error occurs during send?
> (Sorry for those dumb questions, but obviously I don't have the slightest
> idea how nginx works...)
>
> According to jetty's documentation, "Jetty attempts to gently close all
> TCP/IP connections with proper half close semantics, so a linger timeout
> should not be required and thus the default is -1." Would this actually
> enable nginx to see the response from the server? Or is it really necessary
> to fully read the body before sending a response, as indicated by this
> (http://kudzia.eu/b/2012/01/switching-from-apache2-to-nginx-as-reverse-proxy/)
> post I found?

While sending a request, nginx monitors the connection (using the
configured event method) to see whether any data are available from
the upstream; if so, it reads the data and handles them as a normal
HTTP response.

It doesn't try to read anything once it gets a write error, though,
and an error will be reported if the backend closes the connection
before nginx has had a chance to see that data are available for
reading.
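The race described above can be reproduced outside nginx. Below is a
minimal sketch in Python: a toy backend that answers early and closes
(a stand-in for jetty), and a toy client that keeps streaming a large
body (a stand-in for nginx). The status line and buffer sizes are
illustrative assumptions, not anything nginx or jetty actually does.

```python
import socket
import threading

def backend(server):
    """Toy backend: answers before reading the whole request, then
    closes (roughly the early-response behaviour discussed above)."""
    conn, _ = server.accept()
    conn.recv(4096)                          # read only the start of the request
    conn.sendall(b"HTTP/1.1 401 Unauthorized\r\n\r\n")
    conn.close()                             # unread request data -> RST on close

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=backend, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())

err = None
try:
    for _ in range(1000):                    # keep writing a large body,
        client.send(b"x" * 65536)            # like streaming a big POST
except OSError as exc:                       # EPIPE / ECONNRESET
    err = exc
# The write failed before the 401 was ever read: the response is lost,
# which is exactly the "broken pipe" the thread is about.
```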

Playing with settings like sendfile and sendfile_max_chunk, as well
as the TCP buffers configured in your OS, might be helpful if your
backend closes the connection too early. The idea is to make sure
nginx won't be blocked for a long time in sendfile or the like, and
will be able to detect data available for reading before an error
occurs during writing.
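A minimal sketch of that tuning idea (sendfile and sendfile_max_chunk
are real nginx directives; the value shown is an illustrative
assumption, not a recommendation):

```nginx
# Cap how much nginx writes per sendfile() call, so the event loop
# gets back control often enough to notice an early response from
# the upstream between writes.
sendfile           on;
sendfile_max_chunk 512k;

# The OS-level TCP send buffers (e.g. net.ipv4.tcp_wmem via sysctl
# on Linux) are tuned outside nginx.conf.
```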

> I don't know for sure about the client, but nginx is talking via HTTP/1.1 to
> the web app. Is it possible to enable the Expect: 100-continue method for
> this connection so that nginx sees the early response?

No, "Expect: 100-continue" isn't something nginx is able to use
while talking to backends.

> Alternatively, is it possible to work around this problem? Could I define
> some rules to the extent that say, if it is a POST request to that specific
> location _without_ an "Authorization" header present, strip the request
> body, set the content-length to 0 and then forward this request?

You can, but I would rather recommend digging deeper into what is
going on and fixing the root cause.
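For reference, a sketch of the workaround described in the question,
under the assumption that branching on the Authorization header is
acceptable. The directives (proxy_pass_request_body, proxy_set_header,
error_page) are real; the location names, upstream address, and the
418 code (picked only because it is otherwise unused) are hypothetical:

```nginx
upstream backend {
    server 127.0.0.1:8080;   # the jetty app; address is an assumption
}

server {
    listen 80;

    location /app {
        # No Authorization header: internally redirect to a location
        # that forwards the request without its body.
        if ($http_authorization = "") {
            error_page 418 = @no_body;
            return 418;
        }
        proxy_pass http://backend;
    }

    location @no_body {
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_pass http://backend;
    }
}
```

The error_page/return pair is needed because directives like
proxy_pass_request_body are not allowed directly inside an "if" block.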

--
Maxim Dounin
http://nginx.org/en/donation.html

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx