January 10, 2015 10:16AM
We have an in-house Java-based reverse proxy that talks to nginx, which in turn acts as a proxy_pass front end for a gunicorn (Python/Django) server. HTTP requests flow from the Java reverse proxy (JRPxy) to nginx to gunicorn. All of these servers run on the same machine.

Previously JRPxy sent Connection: keep-alive to nginx so that connections were reused. We then decided to send a Connection: close header instead and use a new connection for every request. Since making this change, nginx has been returning a 499 status code.

I debugged JRPxy on my end. Each time, we write the request headers and body, and the moment we try to read the nginx response we get 0 (no bytes) or -1 (EOF) as the number of bytes read. When we get 0, we subsequently get -1 as well (EOF after reading no bytes).

In the code, we call Socket.shutdownOutput() (http://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#shutdownOutput%28%29) each time we send the Connection: close header. In Java terms, this indicates to the remote socket that we are done sending data (http://stackoverflow.com/questions/15206605/purpose-of-socket-shutdownoutput). If I comment out this one line while still sending the Connection: close header, I get a valid 200 OK response.
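To make the failure mode concrete, here is a self-contained sketch of the sequence described above. The server side is a toy stand-in for nginx (not nginx itself) that, on seeing EOF from the client before it has replied, closes the connection without responding -- which is consistent with the 499 behaviour we observe; the host, port, and request are illustrative only:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HalfCloseDemo {
    public static void main(String[] args) throws Exception {
        // Toy server standing in for nginx: drain the request, and if EOF
        // (the client's FIN) arrives before a response has been written,
        // close without replying -- mimicking the observed 499 behaviour.
        ServerSocket server = new ServerSocket(0);
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain until FIN */ }
                // EOF seen before responding: treat as a client abort.
            } catch (IOException ignored) { }
        });
        serverThread.start();

        try (Socket socket = new Socket("127.0.0.1", server.getLocalPort())) {
            OutputStream out = socket.getOutputStream();
            out.write(("GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Half-close: sends a FIN telling the peer we are done writing.
            socket.shutdownOutput();

            // The very next read sees EOF instead of a response.
            int first = socket.getInputStream().read();
            System.out.println(first == -1 ? "EOF, no response" : "got data");
        }
        serverThread.join();
        server.close();
    }
}
```

Running this prints "EOF, no response": the client's FIN races ahead of any reply, and a peer that interprets the FIN as an abort never sends one.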

I captured netstat output to observe the connection states. When we call Socket.shutdownOutput(), the socket on nginx's side (local port 8888) ends up in TIME_WAIT, indicating that nginx performed the active close of the connection (TIME_WAIT is held by the side that closes first).

------------------------------------------------------------
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (59.17/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (58.14/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (57.12/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (56.09/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (55.07/0/0)
------------------------------------------------------------

However, if I comment out Socket.shutdownOutput(), the netstat output is reversed: this time the JRPxy side (the ephemeral port) is in TIME_WAIT, indicating that JRPxy initiated the socket close.

----------------------------------------------------------------------
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (59.59/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (58.57/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (57.54/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47385 127.0.0.1:8888 TIME_WAIT - timewait (59.87/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (56.52/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47385 127.0.0.1:8888 TIME_WAIT - timewait (58.85/0/0)
------------------------------------------------------------------------

Could it be that Socket.shutdownOutput() signals to nginx that the client is closing the connection, and nginx therefore logs 499? If that is the case, shouldn't nginx treat this as a half-close and still send back the response data?

My other theory is that nginx responds and closes the socket so quickly that JRPxy never gets a chance to read from it. This seems less likely, since gunicorn processing introduces a noticeable delay.
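For contrast, here is the same exchange with the shutdownOutput() call removed, which matches what we see working. Again the server is only a toy stand-in for nginx that reads the request headers, replies, and closes; the FIN from the client now goes out only when the socket is closed, after the full response has been read:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class NoShutdownDemo {
    public static void main(String[] args) throws Exception {
        // Toy stand-in for nginx: read the request headers, reply, then close.
        ServerSocket server = new ServerSocket(0);
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept()) {
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
                String line;
                while ((line = r.readLine()) != null && !line.isEmpty()) { /* headers */ }
                s.getOutputStream().write(
                        ("HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok")
                                .getBytes(StandardCharsets.US_ASCII));
            } catch (IOException ignored) { }
        });
        serverThread.start();

        try (Socket socket = new Socket("127.0.0.1", server.getLocalPort())) {
            socket.getOutputStream().write(
                    ("GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
                            .getBytes(StandardCharsets.US_ASCII));
            socket.getOutputStream().flush();
            // No shutdownOutput() here: we read the response first, and the
            // FIN is only sent when the try-with-resources closes the socket.
            BufferedReader resp = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            System.out.println(resp.readLine());  // status line
        }
        serverThread.join();
        server.close();
    }
}
```

This prints "HTTP/1.1 200 OK", consistent with the 200 responses we get once the shutdownOutput() line is commented out.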
Subject: Nginx behind a reverse proxy sending 499
Posted by ppwm, January 10, 2015 10:16AM