I've also seen the issue when running plain nginx 1.9.11, sort of like building a new nginx, only there the issue is that the upstream closes the connections, not nginx. I've since discovered that our TCs had the Connector keepAliveTimeout set way too low (10 msec); I mistakenly thought the units were seconds when they are actually msec. After increasing this a thousandfold everything seems much better now :)

<Connector port="
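For reference, a minimal sketch of a Connector with the timeout in the right unit (the port matches the upstream from the original post; the protocol and the other attributes are illustrative assumptions, not our actual values):

<!-- keepAliveTimeout is in milliseconds: 10000 means 10 s, not 10 ms -->
<Connector port="8081" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="10000"
           maxKeepAliveRequests="100" />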
stefws Wrote:
-------------------------------------------------------
> Seems I'm not alone w/TC issues ;)

Missed the link: http://permalink.gmane.org/gmane.comp.web.haproxy/26860
Seems I'm not alone w/TC issues ;)
@B.R. You're right, it seems my upstream Tomcat instances were RESETting connections as a reply to something. So far I've improved it a lot by altering an HTTP Connector keepAliveTimeout value that was mistakenly expressed as seconds when it should in fact be msec ;) Under heavy load it still occurs, but far less frequently; I will dig deeper into Tomcat trimming/tuning.
My config btw:

user imail;
worker_processes auto;
daemon on;
master_process on;
error_log logs/mos_error.tcp debug_tcp;
error_log logs/mos_error.log;
pid /opt/imail/nginx/logs/mos_nginx.pid;
worker_rlimit_nofile 200000;
worker_rlimit_core 500M;
working_directory /opt/imail/nginx;

events {
    worker_connections 25000;
    use epoll;
    multi_accept off;
}

http {
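The http block got cut off above. Purely as a sketch (the upstream name, the keepalive settings and everything below are my assumptions, not the actual config), the proxy side of a setup like this could look like:

upstream tomcat_pool {
    server 10.45.69.28:8081;
    keepalive 32;    # idle keepalive connections held open to the upstream
}

server {
    listen 80;

    location / {
        proxy_pass http://tomcat_pool;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # don't forward "Connection: close"
    }
}

With keepAliveTimeout on the Tomcat side effectively at 10 msec, idle pooled connections are closed by the upstream almost immediately, so nginx can occasionally reuse a connection Tomcat has already shut down, which surfaces as exactly the "upstream prematurely closed connection" error quoted below.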
Nginx’ers, I'm trying to figure out why I'm randomly seeing requests having issues with nginx 1.7.4 when proxying to an upstream pool, like:

2016/03/03 10:24:21 15905#0: *3252814 upstream prematurely closed connection while reading response header from upstream, client: 10.45.69.25, server: , request: "POST /<redacted url> HTTP/1.1", upstream: "http://10.45.69.28:8081/