Hello,
Each night we take our backend servers offline at scheduled times for maintenance. When the application servers restart, they immediately begin answering HTTP requests from Nginx, but we want to keep them out of the upstream pool for about 30 minutes while they warm their caches from our data providers. To do this, I set up cron jobs on the application servers that insert an iptables rule blocking all traffic from our Nginx reverse proxies, then delete the rule 30 minutes later.
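For context, the cron jobs look roughly like this (a minimal sketch: 192.168.1.10 stands in for our proxy's address, and the 02:00 window is illustrative):

# /etc/crontab on each application server
# 02:00 - block all traffic from the Nginx proxy before maintenance
0 2 * * * root iptables -I INPUT -s 192.168.1.10 -p tcp --dport 8080 -j DROP
# 02:30 - remove the rule once the caches are warm
30 2 * * * root iptables -D INPUT -s 192.168.1.10 -p tcp --dport 8080 -j DROP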
However, Nginx still seems to consider the server that is blocking it via iptables to be online: it adds the server back to the upstream pool, times out against it, and then takes it out again. This causes our alerting system to go haywire with HTTP read timeouts and leaves our clients unable to connect to our application.
Our upstream block is simple:
upstream app_servers {
    ip_hash;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.13:8080 max_fails=3 fail_timeout=30s;
}
We're running Nginx 1.4.
Any ideas on why this happens and how we can avoid it?
Thanks.