Hello Everyone,
I'm having trouble setting up NGINX as a load balancer for a web application I'm working on. I want to split traffic evenly across three backend servers that all run the same application, but I'm not sure whether my configuration is sound or whether I'm overlooking something important.
Here's what I'm aiming for:
- NGINX is serving as both a load balancer and a reverse proxy.
- I want to divide traffic among three backend servers (Node.js apps).
- Traffic should be spread evenly, with the remaining servers acting as fallbacks if one of them fails.
This is the configuration I've come up with so far:
events {}  # NGINX also requires an events context at the top level; empty uses the defaults

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
I have a few questions:
- Is this the right way to define the upstream block for load balancing, or should I use additional options such as weight or max_fails to improve fault tolerance?
- How can I set up appropriate health checks so that NGINX detects when one of the backend servers goes down?
- Are there any best practices or optimisations for security and performance I should follow with this kind of setup?
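For the first two questions, this is a variant of the upstream block I was considering. My understanding is that open-source NGINX only does passive health checking via max_fails/fail_timeout (the active health_check directive is an NGINX Plus feature), so I'm not sure whether this is enough. The hostnames are the same placeholders as above:

```nginx
upstream backend {
    # Passive health checks: after 3 failed attempts within 30s,
    # take the server out of rotation for 30s.
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backend3.example.com max_fails=3 fail_timeout=30s;
    # server backup1.example.com backup;  # hypothetical spare, used only when all of the above are down
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Retry the next upstream on connection errors and timeouts,
        # so a dead server doesn't surface an error to clients.
        proxy_next_upstream error timeout;
        proxy_connect_timeout 2s;
    }
}
```

I'd also be curious whether adding weight=N to any of the server lines makes sense when all three machines are identical, or whether the default equal weights are already correct here.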
Any advice or recommendations would be greatly appreciated!
Thanks in advance.