Re: upstream prematurely closed connection while reading response header from upstream

Jiri Horky
October 17, 2014 08:38AM
Hi Maxim,

So I dug into it a bit further, and it seems that I made an error in
pairing the tcpdump outputs with the nginx error logs. I triple-checked it
now, and the upstream server really did kill the connection without sending
a response in the cases where nginx triggers an error.

I am sorry for the noise.

Regards
Jiri Horky

On 10/17/2014 08:01 AM, Jiri Horky wrote:
> Hi Max,
>
> thanks for the explanation. I agree that, from the traces, it really
> looks like there were no data available in the socket from the upstream,
> so this is a different situation from the one I posted the first time. I
> will try to reproduce the problem with a debug log and wireshark traces
> that match each other, and will get back to you.
>
> Jirka H.
>
> On 10/17/2014 06:58 AM, Maxim Dounin wrote:
>> Hello!
>>
>> On Fri, Oct 17, 2014 at 12:48:43AM +0200, Jiri Horky wrote:
>>
>> [...]
>>
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 connect to 1.1.1.1:8888,
>>> fd:184 #12552
>> Here a connection is established to the upstream server.
>>
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream connect: -2
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 posix_memalign:
>>> 0000000000FD3720:128 @16
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 event timer add: 184:
>>> 90000:1413499405670
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http finalize request: -4,
>>> "/en-us/?" a:1, c:2
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http request count:2 blk:0
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 delete posted event
>>> 0000000000D36008
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http run request: "/en-us/?"
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream check client,
>>> write event:1, "/en-us/"
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream recv(): -1
>>> (11: Resource temporarily unavailable)
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 post event 0000000000D39818
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 delete posted event
>>> 0000000000D39818
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream request:
>>> "/en-us/?"
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream send request
>>> handler
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream send request
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 chain writer buf fl:1 s:1618
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 chain writer in:
>>> 0000000001041398
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 writev: 1618
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 chain writer out:
>>> 0000000000000000
>> The request is written into the socket.
>>
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 event timer del: 184:
>>> 1413499405670
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 event timer add: 184:
>>> 90000:1413499405670
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 post event 0000000000D33008
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 post event 0000000000D39818
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 delete posted event
>>> 0000000000D33008
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream request:
>>> "/en-us/?"
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 http upstream process header
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 malloc: 0000000000FAB620:8192
>>> 2014/10/17 00:41:55 [debug] 27396#0: *12485 recv: fd:184 0 of 8192
>>> 2014/10/17 00:41:55 [error] 27396#0: *12485 upstream prematurely closed
>>> connection while reading response header from upstream, client:
>>> 68.39.176.125, server: my.upstream.com, request: "GET /en-us/ HTTP/1.1",
>>> upstream: "http://1.1.1.1:8888/en-us/", host: "my.upstream.com",
>>> referrer: "https://id.upstream.com/en-us/confirm/registration?token=TOKEN"
>> And reading from the socket indicates that it's closed: recv() returned
>> 0 bytes.
>>
>> The packet traces you previously posted look unrelated to this debug
>> log. In this case, there is either no response from the upstream
>> at all, or the response contents are lost due to an RST from the
>> upstream.
>>
>> If, as you wrote, your backend "forcibly closes the connection",
>> the reason may be that it does so by calling close() with SO_LINGER
>> set, and this in turn results in an RST being sent to nginx in some
>> cases.
>>
>> In either case, the debug log suggests there is nothing wrong on the
>> nginx side; you should focus on your backend instead.
>>
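[Editor's note: the key line in the debug log above is "recv: fd:184 0 of 8192" — a zero-byte read on a TCP socket means the peer closed the connection before sending anything. A minimal sketch of that situation (not nginx itself; the addresses and roles here are illustrative) is:]

```python
import socket
import threading

# A "backend" that accepts a connection and closes it without ever
# writing a response, and a "client" that observes recv() returning
# 0 bytes -- the same condition nginx logs as "recv: fd:184 0 of 8192"
# just before the "upstream prematurely closed connection" error.

def backend(server_sock):
    conn, _ = server_sock.accept()
    conn.close()  # orderly FIN, but no response bytes were ever sent

server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port
server.listen(1)
threading.Thread(target=backend, args=(server,)).start()

client = socket.socket()
client.connect(server.getsockname())
data = client.recv(8192)
print(len(data))  # 0 -- a zero-byte read means the peer closed the connection
client.close()
server.close()
```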
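[Editor's note: the SO_LINGER point can be illustrated directly. This is a hedged sketch, not the poster's actual backend: closing a socket with l_onoff=1 and l_linger=0 aborts the connection, so the peer sees an RST (ECONNRESET) instead of an orderly end-of-stream, which can destroy any response data before nginx reads it:]

```python
import socket
import struct
import threading

# A backend that performs an abortive close: SO_LINGER with l_onoff=1,
# l_linger=0 makes close() send an RST rather than a FIN, so the client
# side gets ECONNRESET instead of a clean zero-byte read.

def backend(server_sock):
    conn, _ = server_sock.accept()
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))  # l_onoff=1, l_linger=0
    conn.close()  # abortive close: RST, not FIN

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=backend, args=(server,)).start()

client = socket.socket()
client.connect(server.getsockname())
try:
    client.recv(8192)
    result = "orderly close"
except ConnectionResetError:
    result = "connection reset"
print(result)  # on Linux, the RST surfaces here as ECONNRESET
client.close()
server.close()
```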

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx