Hello!
On Sun, Jul 09, 2017 at 11:26:47PM +0200, Arnaud Le-roy wrote:
> I encountered a strange behaviour with nginx: my backend seems
> to receive the same request twice from the nginx proxy. To be sure
> that it's not the client sending two requests, I added a unique
> uuid parameter to each request.
>
> when the problem occurs in nginx log i found one request in
> success in access.log
>
> x.x.x.x - - [09/Jul/2017:09:18:33 +0200] "GET /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428 HTTP/1.1" 200 2 "-" "-"
>
> and another one that generated this entry in error.log:
>
> 2017/07/09 09:18:31 [error] 38111#38111: *4098505 upstream prematurely closed connection while reading response header from upstream, client: x.x.x.x, server: x.x.com, request: "GET /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428 HTTP/1.1", upstream: "http://172.16.0.11:9092/query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428", host: "x.x.com"
>
> on my backend I can see two requests with the same uuid (both
> succeed)
>
> {"pid":11424,"level":"info","message":"[API] AUTH1 /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428","timestamp":"2017-07-09 09:18:31.861Z"}
> {"pid":11424,"level":"info","message":"[API] AUTH1 /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428","timestamp":"2017-07-09 09:18:33.196Z"}
>
> The client is a node program, so I'm sure it sends only one
> request with a given uuid (no thread problem ;). nginx serves
> as a simple proxy (no load balancing).
[...]
> upstream api {
>     keepalive 100;
>     server 172.16.0.11:9092;
> }
[...]
> location / {
>     proxy_next_upstream off;
>     proxy_pass http://api;
[...]
> The backend is a simple node server.
>
> The problem occurs randomly; it definitely happens on
> nginx/1.10.3 and nginx/1.13.2 on debian/jessie.
>
> After some days of research, I found that if I remove the
> "keepalive 100" from the upstream configuration, the problem no
> longer occurs, but I don't understand why. Can somebody explain
> what could happen? Maybe I'm misunderstanding some keepalive
> configuration?
With keepalive connections, it is possible that the server will
close the connection at the same time the client starts sending a
new request on it. This case is specifically outlined in RFC 2616,
https://tools.ietf.org/html/rfc2616#section-8.1.4
   A client, server, or proxy MAY close the transport connection at any
   time. For example, a client might have started to send a new request
   at the same time that the server has decided to close the "idle"
   connection. From the server's point of view, the connection is being
   closed while it was idle, but from the client's point of view, a
   request is in progress.

   This means that clients, servers, and proxies MUST be able to recover
   from asynchronous close events. Client software SHOULD reopen the
   transport connection and retransmit the aborted sequence of requests
   without user interaction so long as the request sequence is
   idempotent (see section 9.1.2).
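The race is easy to reproduce with plain sockets. This is only an
illustrative sketch (the timings, port choice, and names are mine,
not from your setup): the "server" closes its keepalive connection
after a short idle period, while the "client" still believes the
connection is usable and sends a second request on it.

```python
# Sketch of the RFC 2616 8.1.4 asynchronous-close race (illustrative only).
import socket
import threading
import time

def backend(listener):
    conn, _ = listener.accept()
    conn.recv(1024)  # read the first request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                 b"Connection: keep-alive\r\n\r\nok")
    time.sleep(0.2)  # the keepalive timeout elapses on the server side...
    conn.close()     # ...and the server closes the "idle" connection

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=backend, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /1 HTTP/1.1\r\nHost: x\r\n\r\n")
assert client.recv(1024).startswith(b"HTTP/1.1 200")  # first request is fine

time.sleep(0.5)  # from the client's view the connection still looks open
needs_retry = False
try:
    # The client reuses the connection, unaware the server has closed it.
    client.sendall(b"GET /2 HTTP/1.1\r\nHost: x\r\n\r\n")
    needs_retry = client.recv(1024) == b""  # EOF: the request was lost
except (ConnectionResetError, BrokenPipeError):
    needs_retry = True
print("second request must be retried:", needs_retry)
```

The second request never reaches a working connection, so a
well-behaved client (or proxy) has to retransmit it, which is
exactly what produces the duplicate on your backend.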
In such a situation nginx will retry the request, as suggested by
the standard. In nginx before 1.9.13 this happened even with
"proxy_next_upstream off".
In nginx 1.9.13, however, this code was rewritten as part of the
introduction of non-idempotent method handling in
proxy_next_upstream. In nginx 1.9.13+ retrying the request is not
expected to happen unless "proxy_next_upstream error" is also
configured. As such, the behaviour you describe looks strange
given the exact configuration and versions you report.
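For completeness: since 1.9.13, retrying requests with
non-idempotent methods (POST, LOCK, PATCH) has to be enabled
explicitly, with a fragment along these lines (the upstream name
here is just an example):

```nginx
location / {
    proxy_pass http://api;
    # retry failed requests even for non-idempotent methods (1.9.13+)
    proxy_next_upstream error timeout non_idempotent;
}
```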
Could you please double-check that the behaviour you describe
appears when using nginx 1.13.2 and with proxy_next_upstream
switched off?
Just in case, I've tested it here using the following simple nginx
configuration:
upstream foo {
    server 127.0.0.1:8081;
    keepalive 10;
}

server {
    listen 8080;

    location / {
        proxy_pass http://foo;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_next_upstream off;
    }
}

server {
    listen 8081;

    if ($connection_requests ~ 2) {
        return 444;
    }

    return 200 ok;
}
As expected, it returns 502 when using "proxy_next_upstream off"
and retries the request when proxy_next_upstream is not switched
off.
Note well that if duplicate requests cause problems, you may want
to reconsider the logic of your backend. At the very least,
idempotent requests can be freely retried, and your backend is
expected to handle this. (In practice this often happens with
non-idempotent requests too.)
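One common way to make retried requests harmless is to deduplicate
on the unique id the client already sends. A minimal sketch,
assuming the "uid" query parameter from your logs identifies each
logical request (your backend is node; Python is used here purely
for illustration, and all names are mine):

```python
# Idempotency sketch: serve each uid at most once, replay the cached result.
calls = {"n": 0}  # counts how often the real work actually runs
seen = {}         # uid -> cached response

def do_work(uid):
    calls["n"] += 1  # the expensive / non-idempotent part
    return "result for " + uid

def handle_query(uid):
    """Serve /query at most once per uid; duplicates replay the cache."""
    if uid not in seen:
        seen[uid] = do_work(uid)
    return seen[uid]  # an nginx retry gets the same answer back

first = handle_query("b85cc8a4_428")   # original request does the work
second = handle_query("b85cc8a4_428")  # retried duplicate hits the cache
print(first == second, calls["n"])     # True 1
```

A production version would need an expiry policy and, with
multiple worker processes, shared storage for the cache, but the
principle is the same: a retry observes the first attempt's result
instead of re-running the work.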
--
Maxim Dounin
http://nginx.org/
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx