I'm trying to wrap my head around the keepalive functionality in the upstream module, because when I enable keepalive I see little to no performance benefit on the open-source version of nginx.
My upstream block is:
upstream upstream_test_1 {
    server 1.1.1.1 max_fails=0;
    keepalive 50;
}
With a proxy block of:
proxy_set_header X-Forwarded-For $IP;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_pass http://upstream_test_1;
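For completeness, those directives live in an otherwise plain server/location block, roughly like this (the listen port and server_name are placeholders):

server {
    listen 80;
    server_name proxy.example.com;  # placeholder

    location / {
        proxy_set_header X-Forwarded-For $IP;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://upstream_test_1;
    }
}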
1) How can I tell whether there are any connections currently in the keepalive pool for the upstream block? My origin server has keepalive enabled, and I do see some connections in a keepalive state, but nowhere near the 50 defined, and they all seem to close much sooner than the backend's keepalive timeout. (I'm using Apache's server-status module to view this, which is likely part of the problem.)
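In case it helps frame the question, this is the sort of check I'd run on the nginx host itself (assuming Linux, and that 1.1.1.1 is the origin from the upstream above):

# list established connections from the proxy to the origin
ss -tn state established dst 1.1.1.1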
2) Are upstream blocks shared across workers? In this situation, would all 4 of my workers share the same upstream keepalive pool, or would each worker have its own pool of 50?
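For reference, the worker count comes from this line in my nginx.conf, so I'm unsure whether the worst case at the origin is 4 x 50 = 200 idle connections or 50 total:

worker_processes 4;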
3) How is the lifetime of a keepalive connection determined? Do the origin server's keepalive settings factor in at all?
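If it matters, I see that newer nginx versions (1.15.3+, if I'm reading the changelog right) also accept idle-timeout and request-count directives inside the upstream block; is that the controlling mechanism rather than the origin's settings? e.g.:

upstream upstream_test_1 {
    server 1.1.1.1 max_fails=0;
    keepalive 50;
    keepalive_timeout 60s;     # idle timeout for pooled connections on the nginx side?
    keepalive_requests 100;    # max requests served over one pooled connection?
}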
4) If no traffic comes across this upstream for an extended period, will the connections be closed automatically, or will they stay open indefinitely?
5) Are the keepalive connections shared across visitors to the proxy? For example, if three visitors hit the proxy one after another, should I expect them to reuse the same connection via keepalive, or would a new connection be opened for each of them?
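The rough test I have in mind (proxy.example.com and 10.0.0.2 are placeholders for my proxy's hostname and IP): fire three sequential requests through the proxy, then see whether the origin saw one connection or three:

# three sequential requests through the proxy
for i in 1 2 3; do curl -s -o /dev/null http://proxy.example.com/; done

# then, on the origin, count established connections from the proxy's IP
ss -tn state established dst 10.0.0.2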
6) Is there a typical level of performance benefit I should expect from enabling keepalive, compared to just doing a proxy_pass straight to the origin server with no upstream block?
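For the record, the kind of comparison I mean, with ab as an example load generator (URL is a placeholder); run once against the upstream-block config and once against a direct proxy_pass, then compare requests/sec:

ab -n 10000 -c 50 http://proxy.example.com/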
Thanks for any insight!