Assistance with Setting Up NGINX Load Balancing for a Website
September 13, 2024 11:56AM
Hello Everyone,

I'm having some trouble setting up NGINX as a load balancer for a web application I'm working on. I want to split traffic evenly across three backend servers that all run the same application, but I'm not sure whether my configuration is sound or whether I'm overlooking something important.

Here's what I'm aiming for:

- NGINX is serving as both a load balancer and a reverse proxy.
- I want to divide traffic among three backend servers (Node.js apps).
- Traffic should be spread evenly, with a fallback if one of the servers fails.

Here is the configuration I have so far:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

My questions:

- Is this the right way to define the upstream block for load balancing, or should I use additional options, such as weight or max_fails, to improve fault tolerance?
- How should I set up health checks so that NGINX detects when one of the backend servers goes down?
- In this type of setup, are there any best practices or optimisations I should adhere to for security and performance?
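On the first question, here is a variant of the upstream block I've been considering, based on the directives documented for ngx_http_upstream_module. The weights and timing values are just placeholder guesses, not tuned numbers:

```nginx
upstream backend {
    # least_conn sends each request to the server with the fewest
    # active connections, instead of plain round-robin
    least_conn;

    # max_fails / fail_timeout enable passive health checking:
    # after 3 failed attempts within 30s, the server is marked
    # unavailable for the next 30s
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;

    # weight=2 would send this server roughly twice the traffic
    # (only useful if it actually has more capacity)
    server backend3.example.com weight=2 max_fails=3 fail_timeout=30s;

    # optional: a server that only receives traffic when all
    # primary servers are down
    # server backup1.example.com backup;

    # reuse idle connections to the upstreams instead of opening
    # a new one per request (needs proxy_http_version 1.1 and an
    # empty Connection header in the location block)
    keepalive 32;
}
```

From what I've read, open-source NGINX only supports this kind of passive health checking, and active health checks (periodic probe requests) require NGINX Plus or a third-party module — is that correct?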

Any advice or recommendations would be much appreciated!

Thanks in advance.
