Hi, we are observing the error below in the Nginx error log. At the same time we see a sudden spike in Active and Waiting connections, from 3k to 13-15k, which comes back to normal within a minute. We have also observed that sometimes during this spike the server goes into a hung state, with no helpful message in the OS kernel log to debug or find the reason for the hang. Below is the… by anish10dec - Other discussion
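The Active and Waiting counters referred to here come from nginx's stub_status module. A minimal sketch of exposing them for monitoring (the path and allowed address are assumptions, not from the original post):

```nginx
location = /nginx_status {
    stub_status;         # reports Active, Reading, Writing, Waiting counters
    access_log off;
    allow 127.0.0.1;     # assumed monitoring host; restrict as appropriate
    deny all;
}
```

Polling this endpoint at a short interval can help correlate the 3k-to-15k connection spikes with the hangs.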
> I reread your initial post and some other things don't seem right to me:
> - "We see that out of two servers, the one on which load is high, i.e. around 5"
>   but later you write "The server has 60 CPU cores with 1.5 TB of RAM" - a load
>   of 5 on a 60-core machine means the server is at only ~8% load, which isn't
>   very high. Or is it a typo?

Load was mentioned… by anish10dec - Nginx Mailing List - English
Actually, it is not the case that more clients are trying to get content from one of the servers, as server throughput shows equal load on all interfaces, around 4 Gbps. So should I expect Writing to increase with a higher number of Active connections? Is it that Nginx is not able to handle the load of that many connections, due to which requests go into Writing M… by anish10dec - Nginx Mailing List - English
We are using HAProxy to distribute load across the servers. Load is distributed on the basis of the URI, with the parameter set in the HAProxy config as "balance uri". This was done to achieve the maximum cache-hit ratio from each server. Does a high number of Writing connections lead to an increase in response time for delivering the content? by anish10dec - Nginx Mailing List - English
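For reference, a minimal sketch of the HAProxy backend described above (the backend name, server names, and addresses are assumptions, not from the original post):

```
backend nginx_cache
    balance uri           # hash the URI so a given URL always lands on the same cache
    hash-type consistent  # optional: limits cache remapping if a server drops out
    server cache1 10.0.0.11:80 check
    server cache2 10.0.0.12:80 check
```

Note that URI hashing can concentrate hot URLs on one server, which is one possible explanation for uneven load despite equal request counts.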
On some of the servers, Waiting is increasing unevenly: if we have 3 sets of servers, Active connections on all of them is around 6K, and Writing on two of the servers is around 500-600, while on the third it is 3000. On this server the response time for delivering content is increasing. This happens even if the content is served from nginx's cache. Is any parameter in Nginx causing this, as on st… by anish10dec - Nginx Mailing List - English
Thanks Maxim. We enabled $upstream_response_time logging on both servers, which shows a response time of less than a second for upstream requests, so it doesn't seem to be an issue with the upstream server. Even for requests that are a HIT, the response time on the server where "Writing" is higher varies from 10 sec to 60 sec and more, while on the other it is less than 2 sec whether it is a MISS or a HIT. Once… by anish10dec - Nginx Mailing List - English
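For this kind of comparison, logging the total and upstream times side by side makes it easy to see where the latency accrues. A sketch of such a log_format (the format name and log path are assumptions):

```nginx
log_format timing '$remote_addr "$request" $status '
                  'cache=$upstream_cache_status '
                  'total=$request_time upstream=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;
```

If $request_time is large while $upstream_response_time is small on HITs, the delay is inside nginx or in writing the response to the client (e.g. slow clients or disk contention), not in the upstream.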
We have two Nginx servers acting as caching servers behind an HAProxy load balancer. We are observing high load on one of the servers, even though we see an equal number of requests per second coming to each server from the application. We see that of the two servers, the one on which load is high, i.e. around 5, has high response time/latency in delivering content. For the same server, the attached stats module screenshot sh… by anish10dec - Nginx Mailing List - English
Hi everyone, we are using Nginx as a caching server. As per the Nginx documentation, by default nginx caches 200, 301 and 302 response codes, but we are observing that if the upstream server returns error 400, 500, 503, etc., the response gets cached and all subsequent requests for the same file become a HIT. Though if we set proxy_cache_valid specifying response codes (like proxy_cache_valid 200 15m;… by anish10dec - Nginx Mailing List - English
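A likely explanation for the behavior described above: proxy_cache_valid only applies in the absence of caching headers from the upstream. If the upstream sends Cache-Control or Expires on its 4xx/5xx responses, nginx honors those headers and caches the error. A sketch (the cache zone and upstream names are assumptions):

```nginx
location / {
    proxy_pass http://upstream_pool;
    proxy_cache edge_cache;
    # Cache only these status codes via proxy_cache_valid
    proxy_cache_valid 200 301 302 15m;
    # Upstream Cache-Control/Expires take precedence over proxy_cache_valid
    # unless explicitly ignored:
    proxy_ignore_headers Cache-Control Expires;
    add_header X-Cache-Status $upstream_cache_status;
}
```

Checking whether the upstream's error responses carry Cache-Control or Expires headers is a good first diagnostic step.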
Hi Team, we have set up an Nginx server for token authentication using the secure_link_md5 module, as per the link below: http://nginx.org/en/docs/http/ngx_http_secure_link_module.html Please let me know how to proceed with two secret keys, say a primary secret key and a backup secret key, so that while we are changing the primary key, current requests can still be authenticated against the backup key, provided we h… by anish10dec - Other discussion
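secure_link_md5 accepts a single expression per location, so one common approach is to validate against the primary key first and, on failure, retry internally against the backup key via error_page. A sketch, assuming the usual "$secure_link_expires$uri &lt;secret&gt;" expression (the location names and key placeholders are illustrative, not from the original post):

```nginx
location /secure/ {
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri PRIMARY_KEY";

    error_page 403 = @backup_key;           # retry failed checks with the backup key
    if ($secure_link = "")  { return 403; } # hash mismatch under the primary key
    if ($secure_link = "0") { return 410; } # link expired
    try_files $uri =404;
}

location @backup_key {
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri BACKUP_KEY";

    if ($secure_link = "")  { return 403; }
    if ($secure_link = "0") { return 410; }
    try_files $uri =404;
}
```

Links signed with either key are accepted, so the primary key can be rotated while outstanding links signed with the old (now backup) key continue to work.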